Tuesday, April 30, 2013

Contact Address Solution

In my previous post I provided some background with regards to the challenge of managing account and contact addresses in CRM. I also provided a solution for handling a simple scenario where the contact's account only ever has a single address. But the reality in most cases is more involved than that.

We will be walking through the scenario described at the end of my previous post, i.e.:
It would therefore be better to stick to a single account with multiple addresses and allow for the contact records to be associated to one of these account addresses. In the next post, I will present an elegant approach to handling this requirement.
The objective of this solution is to provide the flexibility of a custom lookup solution while not compromising any features/functions in the system by ensuring that the default (out of the box) address fields are populated with the address information.

This requirement covers a number of configuration scenarios and therefore may be useful as a good overall "configuration" tutorial. Topics referenced:

  • Default vs. Custom entities - when to resort to custom
  • Use case for the "Related Records Filtering" configuration option
  • Distribute Workflow
  • Plugin requirement

First of all, we need to create a new address entity. We do so because the ability to leverage the default More Addresses entity is limited. Specifically, the system does not allow custom relationships to be created against More Addresses, which is a key requirement of the solution we are trying to implement. So we create a custom entity like the one shown and, in order to eliminate user confusion, we rename it and remove the default More Addresses links from the account and contact forms.

Next we create an N:1 relationship with Account - this essentially replaces the More Addresses link in the account entity.

... and a 1:N relationship with Contact - this is going to allow for an address selector at the contact level.

We now add the relationship field created in the step above to the contact form. In the "Related Records Filtering" section we need to filter it to the Parent Account. This will limit the addresses shown in the lookup dialog to addresses defined at the parent account level.
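If you prefer script over the form configuration, the same restriction can be applied with the addPreSearch/addCustomFilter lookup methods (available in later CRM 2011 builds). This is a sketch only - the snt_accountid filter attribute and the use of the standard parentcustomerid lookup are assumptions to adjust to your schema:

function buildAccountFilter(accountId) {
 // Builds the FetchXML filter that limits the address lookup to the parent account
 return "<filter type='and'>" +
  "<condition attribute='snt_accountid' operator='eq' value='" + accountId + "' />" +
  "</filter>";
}

function FilterAddressLookup() {
 // Wire this to the contact form onload event
 var parent = Xrm.Page.getAttribute("parentcustomerid").getValue();
 if (parent == null) return;
 Xrm.Page.getControl("snt_addressid").addPreSearch(function () {
  Xrm.Page.getControl("snt_addressid").addCustomFilter(buildAccountFilter(parent[0].id));
 });
}

Either way the net effect is the same: the lookup dialog only offers addresses belonging to the contact's parent account.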

The next thing we need to do is to add some jscript to the form. The objective here is to retrieve the address fields from the address which has been selected and copy them to the corresponding address fields on the contact record. The jscript below performs the following logic:

  • Copy/clear address fields based on the address selected from the lookup, using the REST API (note this uses the retrieveEntityById function described in this post)
  • Make fields read only if an address is selected via the "picker" (the idea here is that you can enter an ad-hoc address at the contact level if you want, but if you select one of the lookup values then the address values will be controlled by the official parent address records)

function AddressPicker_OnChange() {
 var entity = "snt_address";
 var entityid = Xrm.Page.getAttribute("snt_addressid").getValue();
 var fields = "?$select=snt_Street1,snt_Street2,snt_Street3,snt_City,snt_State,snt_Zip,snt_Country";
 if (entityid != null) {
  // retrieveEntityById is the REST helper described in the post referenced above
  retrieveEntityById(entity, entityid[0].id, fields, UpdateAddressFields);
 } else {
  // Lookup was cleared - clear the copied address fields
  UpdateAddressFields(null);
 }
 AddressRender();
}

function UpdateAddressFields(entityData) {
 // Copy the selected address to the default contact address fields (or clear them)
 var value = function (field) { return entityData != null ? entityData[field] : null; };
 Xrm.Page.getAttribute("address1_line1").setValue(value("snt_Street1"));
 Xrm.Page.getAttribute("address1_line2").setValue(value("snt_Street2"));
 Xrm.Page.getAttribute("address1_line3").setValue(value("snt_Street3"));
 Xrm.Page.getAttribute("address1_city").setValue(value("snt_City"));
 Xrm.Page.getAttribute("address1_stateorprovince").setValue(value("snt_State"));
 Xrm.Page.getAttribute("address1_postalcode").setValue(value("snt_Zip"));
 Xrm.Page.getAttribute("address1_country").setValue(value("snt_Country"));
}

function AddressRender() {
 // Add this function to the form onload event as well
 var disabled = Xrm.Page.getAttribute("snt_addressid").getValue() != null;
 var controls = ["address1_line1", "address1_line2", "address1_line3", "address1_city",
  "address1_stateorprovince", "address1_postalcode", "address1_country"];
 for (var i = 0; i < controls.length; i++) {
  Xrm.Page.getControl(controls[i]).setDisabled(disabled);
 }
}

For the sake of illustration, after the above jscript is implemented, when an address record is selected the fields are copied as shown:

Now we need to implement logic that will keep addresses in sync. For example, if the address shown in the screenshot is changed and there are 5 contacts linked to this address, then we want to make sure the address fields are propagated to all 5 contact records. This is fairly easy to accomplish using the Distribute Workflow. The screenshots below show the 2 workflows required to configure this option.
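Conceptually, the copy-down the workflow performs can be sketched as a standalone function - this is illustrative only, not part of the workflow tool, and the field mapping mirrors the custom entity above:

// Every contact linked to the address receives the address's current field values
function propagateAddress(address, contacts) {
 var map = {
  snt_Street1: "address1_line1",
  snt_City: "address1_city",
  snt_State: "address1_stateorprovince",
  snt_Zip: "address1_postalcode",
  snt_Country: "address1_country"
 };
 return contacts.map(function (contact) {
  for (var source in map) {
   contact[map[source]] = address[source];
  }
  return contact;
 });
}

So if the account address's city changes, running this over the 5 linked contacts stamps the new city onto each of them.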

The last piece of the puzzle requires a plugin. Basically at this juncture we can define addresses against the account entity that will appear in the lookup for contacts tied to this account:

But what about the address defined on the account itself? This in essence is the Primary address of the account, and we also want it available for selection for the contacts. The plugin logic required here is therefore to automatically maintain the PRIMARY address shown above based on the values entered on the account form. (As additional validation, you should also define jscript on the address entity to make the fields read only when the address name = PRIMARY, and to prevent users from manually keying in an address with this name, i.e. reserve PRIMARY for the plugin only.)
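That validation jscript could look something like the following sketch - the snt_name attribute and the exact PRIMARY-matching rule (case-insensitive exact match) are assumptions to adapt to your schema:

// Returns true when an address name is the reserved PRIMARY name
// (assumption: an exact, case-insensitive match on "PRIMARY")
function isPrimaryAddressName(name) {
 return (name || "").toUpperCase() === "PRIMARY";
}

// Wire this to the address form onload event; it locks the fields
// when the record is the plugin-maintained PRIMARY address
function AddressForm_OnLoad() {
 var locked = isPrimaryAddressName(Xrm.Page.getAttribute("snt_name").getValue());
 var controls = ["snt_name", "snt_street1", "snt_street2", "snt_street3",
  "snt_city", "snt_state", "snt_zip", "snt_country"];
 for (var i = 0; i < controls.length; i++) {
  var control = Xrm.Page.getControl(controls[i]);
  if (control != null) control.setDisabled(locked);
 }
}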

The end result? A lookup on the contact form that allows you to select from the specified account addresses.

Tuesday, April 23, 2013

Handling Account and Contact Addresses

In my experience, one of the most challenging issues in a CRM implementation is dealing with account and contact addresses. As is well known, the customer in CRM can either be the account or the contact. In a B2C business model, this is not likely to be a big deal as your contact record becomes the center of your universe and therefore all address information should likewise be on the contact record.

In a B2B business model, your customer is represented by the account records and the contacts (usually employees) belonging to these accounts. The challenge in this case is that you have address information on both account and contact records, and the storage is neither efficient nor easy to maintain - at least out of the box. Before delving into the justifications for that statement, let's first stipulate that having the address information stored on the contact record in the default address fields (address1_line1 etc.) is important, because many default functions rely on this information. For example:

  • Syncing contacts to Outlook - if you don't have the address information stored on the contact, the address information will be missing on the synced contact
  • Mail merge - if you want to pull in contact address details while creating your merge documents, these will need to exist on the record

The reason why the storage of address information is not efficient is because in most cases, contacts of an account have the same address as the parent account (for example, with employees the address information is generally going to be the business address, not a personal address). Therefore storing it redundantly in 2 places is, well, redundant.

The reason why it is not easy to maintain is because it needs to be updated in multiple places. For example, let's assume you have an account with 5 contacts all with the same address information. If the business address changes then all 5 linked contact records should be updated as well. This of course can be done manually and perhaps even "optimized" by using tools such as the bulk edit tool (in which case the above scenario results in 2 manual steps) - however I am firmly of the opinion that if such updates are not automated the data will inevitably fall out of sync. Call me a skeptic, but users generally cannot be relied on to apply such manual procedures with rigor.

Fortunately if the scenario that you have resembles the one described above you can apply tools such as the Copy Utility or the Distribute Workflow to automate the updates. Both of these tools can be configured to copy down updates made from the account record to the contact records beneath them.

Frequently however companies have more than one office and the employees will be spread among these offices, so simply applying a straight-down copy will not work. A simple way to approach this is to use the account hierarchy, whereby each office is represented by a separate account record with the relevant employees linked to each office (each having a separate address that will feed to the contact records). I would however argue that creating a separate account record solely to deal with the address issue is a little bit of overkill. And it results in disjointed information, such that in order to "see" everything related to an account you need to view all account records and/or write custom reports to pull all the information together. Essentially you are creating "artificial" accounts in order to address the problem and I am not a fan of that. I believe that the account hierarchy should be used when the accounting requirements (reporting etc.) necessitate it.

Aside from this, there are also other practical reasons why this approach can introduce problems. For example, if you have a mail merge requirement (or similar) whereby you want to pull the account name into the body of the letter, email etc. - the customer may not be too happy with the fact that the letterhead will go out from "ACME Medford" instead of just plain old "ACME". On the other hand, naming all these accounts with the same name of "ACME" will result in "duplicates" which will confound end users.

It would therefore be better to stick to a single account with multiple addresses and allow for the contact records to be associated to one of these account addresses. In the next post, I present an elegant approach to handling this requirement.

Thursday, April 11, 2013

Searching Workflow Processes

I was recently upgrading a 4.0 environment to 2011 and in the process I needed to identify cases where workflow plugins were used to ensure that they were compatible with the upgraded version. The problem is how do you go about finding which workflows use a specific workflow plugin?

The answer is fairly straight-forward and is useful not only for upgrade scenarios but in other cases where you might be searching for a particular workflow that has a specific condition, action, or workflow plugin. That is, you can use this as a general purpose mechanism for finding workflows with a particular clause.

Let's start with identifying workflow plugins. For example, let's say that I want to find workflows that use the following workflow plugins:

The following query will find cases where the GetDate method is used. You could similarly search for the other methods by simply replacing the search clause.

select distinct  name from FilteredWorkflow
where statecodename = 'Published'
and activities like  '%DateUtilities%'
and activities like  '%GetDate%'

This returns with a result of "App Server Install". And sure enough if I open up that workflow in CRM, I see that the workflow plugin is called.

Alternatively let's say I was searching for another clause within the workflow e.g. 

select distinct  name from FilteredWorkflow
where statecodename = 'Published'
and activities like  '%region= Australia%'

Once again this also returns the "App Server Install" workflow and if I scroll a little further down I will see that also in the workflow:

By now you should be getting the idea. But interestingly, if you try the same search in a CRM 2011 environment you will not get any results returned. To make it work in 2011, you need to tweak the queries very slightly, replacing the "Activities" column with the "Xaml" column. The following will now work in your CRM 2011 environment:

select distinct  name from FilteredWorkflow
where statecodename = 'Published'
and Xaml like  '%DateUtilities%'
and Xaml like  '%GetDate%'

select distinct  name from FilteredWorkflow
where statecodename = 'Published'
and Xaml like  '%region= Australia%'

Wednesday, April 10, 2013

Spell Checking Options

I came across this topic in my email archives. I think it's still a relevant topic so I thought I'd post it. It deals with options for spell checking within Dynamics CRM.

One of the simplest options of course is to use the spell checkers that come built into the various add-on toolbars such as the Google toolbar.

More sophisticated options employ the use of a spell checker API where the experience is similar to spell checking in applications such as Microsoft Word. I have implemented 2 such versions of the spell checker.

The first is a paid-for solution ($250 per year for up to 10000 sessions and then $0.025 for each additional session - see the vendor's web site for additional pricing options). This tool is very intuitive and will be familiar to anyone who has used the Microsoft Word spell checker.

The second option was to use a free product called NetSpell - this is deployed entirely on the server, i.e. the check occurs against a dictionary defined on the server. This tool is also fairly intuitive; however, the way the spell checker works is not quite as comprehensive as the first option's, which I think is evident when comparing the screenshots.

In summary, below are the pros/cons of each approach:

Pros

  • Quite a bit more comprehensive in terms of functionality including an easier to use interface as well as grammar and thesaurus options
  • Very intuitive
  • Allows for custom dictionary to be defined
  • Is a supported product and will be maintained over time presumably with incremental improvements, dictionary term additions, etc.

Cons

  • Is not free. $250 per year for up to 10000 sessions. To get a true cost it is necessary to determine the number of users who will be using this on a daily basis multiplied by the number of times they are likely to use it per day, e.g. the formula would look something like: #users*#dailysessions*200 (200 being roughly the number of work days per year)
  • Is an online solution although there is also a licensed downloadable version for a not insignificant cost
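The cost formula above can be turned into a quick calculator. This sketch assumes the $250 covers the first 10000 annual sessions and each additional session costs $0.025, per the pricing quoted earlier:

// Estimated annual cost: #users * #dailysessions * 200 work days gives the
// session count; $250 covers the first 10000 sessions, $0.025 each thereafter
function annualSpellCheckCost(users, dailySessions) {
 var sessions = users * dailySessions * 200;
 var extra = Math.max(0, sessions - 10000);
 return 250 + extra * 0.025;
}

For example, 10 users at 3 sessions a day stay within the base tier ($250), while 20 users at 5 sessions a day (20000 sessions) come to $500.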

NetSpell Pros

  • Fairly comprehensive
  • Fairly intuitive
  • Allows for custom dictionary to be defined
  • On premise solution so not reliant on network performance
  • It is free

NetSpell Cons

  • Is not a supported product in case of product issues
  • Dictionary is not auto maintained
  • User interface not as slick as the first option's

Thursday, April 4, 2013


TDE and CRM

Although the title of this post is "TDE and CRM", the reality is that it could be titled "TDE and SharePoint" or "TDE and Any App Running on SQL Server". The point is that the CRM database, like any other application database running on SQL Server, can be encrypted using TDE. There really are no CRM-specific requirements for setting up TDE.

TDE stands for Transparent Data Encryption and became available in SQL Server 2008 Enterprise. TDE essentially protects your "data at rest", i.e. it protects your database from being stolen - as in backing up the database and restoring it in some other location where the contents can be analyzed via SQL.

Essentially when you apply TDE you are applying a certificate used to encrypt the back-end database files (it only encrypts the database files; when you access a TDE database via SQL there is no difference to the usual experience). You will only be able to restore the database to another database server if that server has the same TDE certificate applied. Otherwise you will be prevented from restoring the database - you will receive the "cannot find certificate with thumbprint" error:

The backed up database file will be similarly unintelligible.

As the name itself implies, TDE is transparent, meaning the configuration is applied on the database itself and any application running on top of a back-end SQL database will be oblivious to it. The only impact will be a slightly increased level of CPU utilization on the database server as it handles the encryption for the SQL transactions.

In light of this, it is difficult to not recommend using TDE. It is simple to apply, it is completely transparent to applications, it protects your data... with the only downside being higher CPU utilization, which for most databases won't make a difference (assuming the old adage "use it or lose it") and is easily compensated for in cases where the additional churn pushes your CPU over the edge.

Below are some scripts for setting up TDE on a SQL 2008 R2 Enterprise server:

/* If you wish to copy the certificate from an old server you will need to back it up.
   File paths and the database name below are illustrative - substitute your own. */
--Backup the certificate from the 4.0 database server:

USE master
GO
BACKUP CERTIFICATE TDECRM TO FILE = 'C:\TDECRM.cer'
WITH PRIVATE KEY ( FILE = 'C:\TDECRM.pvk',
ENCRYPTION BY PASSWORD = 'EncryptionPassword' )
GO

/* On a new server you will need to create a certificate using the backed up one from the old server.
   This procedure assumes you copied the backed up certificate to C:\ root on the new server */

-- Create a master key. NB: Record the password for future reference.
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'MasterKeyPassword'
GO

-- Backup the master key. NB: Store the backup file and record the password for future reference.
USE master;
GO
BACKUP MASTER KEY TO FILE = 'C:\MasterKey.bak'
ENCRYPTION BY PASSWORD = 'MasterKeyBackupFilePassword'
GO

-- Create a certificate by importing the certificate backed up in the first step.
USE master
GO
CREATE CERTIFICATE TDECRM FROM FILE = 'C:\TDECRM.cer'
WITH PRIVATE KEY ( FILE = 'C:\TDECRM.pvk',
    DECRYPTION BY PASSWORD = 'EncryptionPassword')
GO

-- Backup the certificate. If you're using the imported certificate this step is optional since the file
-- you used to create the new certificate is already the backup file so you'll just be recreating this file.
-- NB: Store the backup file and record the password for future reference.
BACKUP CERTIFICATE TDECRM TO FILE = 'C:\TDECRM.cer'
WITH PRIVATE KEY ( FILE = 'C:\TDECRM.pvk',
ENCRYPTION BY PASSWORD = 'EncryptionPassword' )
GO

-- For each database you wish to restore to from a TDE enabled database you will need to apply the
-- certificate and enable encryption.
USE [YourCrmDatabase_MSCRM]
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECRM
GO
ALTER DATABASE [YourCrmDatabase_MSCRM] SET ENCRYPTION ON
GO