It appears that the javascript getServerUrl() function will become deprecated with the release of Update Rollup 12 for Microsoft Dynamics CRM 2011 (imminent). As far as I can tell, from that point forward there will be a new function called getClientUrl() that should be used instead.
No doubt the reason for the deprecation is that the getServerUrl() function cannot always be relied upon to return the correct server context, as described in an earlier post (and therefore the workaround described in that post should no longer be necessary).
We'll have to get back to this to confirm once the rollup has been officially released.
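In the meantime, a small transition wrapper is one way to bridge the change. This is just a sketch under the assumption above that getClientUrl() appears in UR12; getCrmUrl is an illustrative name of my own, not part of the CRM API:

```javascript
// Sketch: prefer getClientUrl() when it exists (UR12 and later),
// otherwise fall back to the older getServerUrl().
function getCrmUrl(context) {
    if (typeof context.getClientUrl === "function") {
        return context.getClientUrl();
    }
    return context.getServerUrl();
}
```

On a form you would call it as `getCrmUrl(Xrm.Page.context)`, so the same script works before and after the rollup is applied.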
The intention of this blog is to focus on the business application of Microsoft CRM and its surrounding ecosystem. In doing so, whenever discussing a topic I will endeavor to avoid presenting dry facts and instead relate the topic to its practical application and/or the impact it might have on the business: the pros, cons, best practices, etc. The correct way of thinking is paramount when confronting a business challenge, and this is what I hope to bring to the table.
Wednesday, December 26, 2012
Form Query String Parameter Tool
The Dynamics team recently released a new utility called the Form Query String Parameter Tool. Essentially it's a useful little utility that can be used to generate the syntax of a create form URL passing in parameters to default field values when the new form is opened.
Having said that, I believe that in most cases using the URL parameter passing approach for defaulting values on the create form should not be necessary. The rest of this post will explain this point of view.
First of all, the out of the box "field mapping" approach - whereby field values from the currently open parent entity are mapped to the child entity form being opened - should be used wherever possible. This covers scenarios where the default values are static regardless of the form "context" and the field being mapped is in fact "mappable".
If the default values are dependent on the form "context" (e.g. new accounts where type = "customer" should have different defaults than those where type = "vendor"), or if the field being mapped is not "mappable", then your next best bet is using javascript on the form load event to set the defaults. Using javascript you can set all the appropriate fields, as long as you have a field populated on the form on which to base the necessary branching logic (the form "context" field). The form context field may be set using standard field mapping, a web service API call (as explained below), or URL parameters. Once the context is obtained, the javascript default logic can take over.
Using the URL parameters approach would therefore only seem to be necessary when you don't have a form "context" field on the form being opened. However, this is not necessarily the case either, since you can also leverage web service calls (RESTful or SOAP) to retrieve the "context" from the parent record and perform the necessary branching logic.
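To make the branching idea concrete, here is a minimal sketch of context-driven default logic. The field names, option values, and defaults are purely illustrative assumptions, not taken from any real configuration:

```javascript
// Hypothetical sketch: given the value of a "type" context field,
// return the default values to apply when the form loads.
function getDefaultsForType(type) {
    if (type === "customer") {
        return { paymentterms: "Net 30", creditonhold: false };
    }
    if (type === "vendor") {
        return { paymentterms: "Net 60", creditonhold: true };
    }
    // no recognized context: default nothing
    return {};
}

// On the form load event you would read the context field and apply,
// e.g. (assuming a hypothetical new_type attribute):
// var defaults = getDefaultsForType(Xrm.Page.getAttribute("new_type").getValue());
// for (var field in defaults) {
//     Xrm.Page.getAttribute(field).setValue(defaults[field]);
// }
```

Keeping the branching in a plain function like this also makes the default logic easy to test outside the form.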
Therefore it would seem that using form parameters to set default values is necessary in only a fairly limited set of scenarios, namely where all 3 conditions listed below are true:
- The fields cannot be defaulted via standard parent/child field mapping
- The form does not contain a form "context" field on which to base javascript branching logic
- The form is a standalone form that is not linked to a parent record from which the "context" can be retrieved using web services
Finally, even in the remaining few scenarios that do require the URL parameter approach, it is really only necessary to pass through a single URL parameter to set the context; javascript can subsequently take over for the remaining default logic.
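As a sketch of what that single-parameter create-form URL might look like (the entity, parameter name, and value here are hypothetical, and the extraqs value must be URL-encoded):

```javascript
// Sketch: build a create-form URL that passes one "context" parameter
// via the extraqs query string argument.
function buildCreateFormUrl(serverUrl, entityName, paramName, paramValue) {
    // extraqs carries the field assignment and must itself be encoded
    var extraqs = encodeURIComponent(paramName + "=" + paramValue);
    return serverUrl + "/main.aspx?etn=" + entityName +
        "&pagetype=entityrecord&extraqs=" + extraqs;
}
```

For example, `buildCreateFormUrl("http://crm/org", "account", "new_type", "vendor")` yields a URL opening a new account form with the new_type context field pre-set; the form javascript then handles the rest of the defaulting.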
Therefore, when all is said and done, while it's always nice to see new tools being developed to facilitate the customization effort, I do not see myself using this particular one all too frequently.
Perhaps there are scenarios that I'm not considering? If I encounter them I'll be sure to dish.
One scenario encountered is with differentiating between multiple 1:N field maps for the same two entities as described in this post.
Thursday, December 20, 2012
Simplifying Navigation: Add to Favorites
This post is another in the series on Simplifying Navigation - it is somewhat of an "oldie" but definitely a "goodie". And I have also verified that the same issue and workaround exists using Outlook 2013.
In short, the issue is that the CRM area within the Outlook Client is somewhat buried. For example, if you wish to navigate to the Accounts, it will involve at least 3 clicks (if this area was not previously opened):
In addition, if there is a particular entity in your CRM data that you are likely to access frequently, it would be helpful if you could separate it from the rest of the group.
Fortunately you can, by simply dragging and dropping the CRM folder to your Outlook "Favorites" folder, which will give you single click access to your CRM data. Much better.
As an aside, it appears that in Outlook 2013 the UI has been simplified and the icons for all the folders have been removed... including the CRM icons. Not sure if this is a long term feature or something that will be tweaked in future Office patches (I personally haven't yet decided whether I prefer it this way or not).
The issue that you may encounter is that the aforementioned drag and drop capability might not work. If that is the case, you need to enable this feature by running through the steps in this KB article. Or you can just download this file, unzip it, and double click to deploy it in your environment (this will add a single registry setting called DisabledSolutionsModule under your MSCRMClient key).
Another option that allows for a little more organization of your CRM links is to use the Outlook "shortcuts" option. To make this effective, the first thing you should do is go to the Navigation Options and move the shortcuts up in the list so that they appear in plain sight as shown.
You can then add new shortcuts to the CRM folders of your preference:
And now you can click on the shortcut to navigate to your CRM data:
Thursday, December 6, 2012
Data Import Options
When discussing data imports to CRM there are 2 distinct scenarios:
Data Migration
Data Migration generally refers to a large scale conversion, typically performed as part of a CRM Go Live, where data is migrated from a legacy system; it is usually quite an involved effort. This exercise is technical in nature, typically performed by IT professionals, and requires great attention to detail. Correspondingly, the migration logic is also usually quite involved, meaning that there can be quite a bit of data transformation as part of the migration exercise. And if migrating from another system, you typically want to connect directly to the legacy database rather than extracting the data to an intermediate CSV file. So as a general rule of thumb, when doing this type of data migration you typically want to be using a tool like Scribe to make it happen.
Importing Data
Importing Data generally refers to a more specific one-time or ongoing requirement to import data into a live CRM environment - for example, the need to import new leads into the CRM database. Typically this is a more straightforward "one-to-one" exercise (i.e. no data transformation required), and if that is the case, the out of the box "Data Import Wizard" can be used for this function.
One thing to note is that while the "Data Import Wizard" has advanced by leaps and bounds, it still has one chief limitation - it can only import new records. If you're looking to use the wizard to update existing records, I'm afraid you're fresh out of luck. And if that is a requirement you'll once again need to look to a 3rd party tool such as Scribe.
For an excellent general overview of the features and functions of the Data Import Wizard please refer to this post. At the end of the post it lists a number of 3rd party tools that can be used to fill the gap in functionality of the Data Import Wizard should you encounter it, although none of these products has what I would term a "no-brainer" price. Below is a rough comparison of the pricing of the various tools on the market (as of this writing):
- Scribe:
- $3000: 15 user license
- $5500: 100 user license
- $1900: 60 day migration license
- Inaport:
- $1799: standard (should be sufficient for import function)
- $3499: professional
- $1195: 30 day migration "professional" version (an apples-to-apples comparison with the Scribe migration license)
- Import Manager (best option if end user interface is required)
- ~$2500
- Import Tool (attractive pricing)
- ~$1300: Full Version
- ~$325: 60 day migration license
The following tools don't seem to be realistic candidates based on their price point:
- Starfish
- $4800/year: Basic Integration
- $1495: 60 day migration license
- eOne SmartConnect
- $4500 (not sure if this is a one time or per year fee)
- QuickBix
- $7000: 100 user license ($70/user)
- Jitterbit
- $800/month: Standard Edition
- $2000/month: Professional Edition
- $4000/month: Enterprise Edition
The following tools do not seem to fulfill the requirement:
- CRM Migrate
- Tool specifically built for migrating SalesForce to CRM
- CRM Sync:
- Very little information and no pricing for this tool
Tuesday, November 6, 2012
object doesn't support property or method '$2b'
We encountered this strange error in a CRM online environment. The symptoms were as follows:
- Only occurred on the Outlook client (not in IE)
- Occurred even when form jscript was disabled, indicating the issue was not with the script
- Appeared when closing out the contact form
The error message looked as follows:
After a bit of scratching around, the solution presented in the following forum discussion seemed to work. In short:
- Close Outlook and IE
- Open IE and delete temporary files (Make sure to uncheck “preserve favorites website data”)
- Now if you open Outlook, you might see that the ribbon on the grid is loading... WAIT for it to load
- Open an Account, Lead, or Appointment record
Cannot add more picklist or bit fields
I came across this issue today in a 4.0 environment that still uses SQL 2005. Time to upgrade, huh? Happily that's soon to be the case - so hopefully this is just for the record books.
Anyway, the symptom was that while the user was able to add nvarchar fields to the account entity, adding a new bit field (or, for that matter, a picklist) would fail with an error message.
Using the trace tool it was pretty quick to identify the underlying cause. The following exception appeared in the trace file:
Exception: System.Data.SqlClient.SqlException: Too many table names in the query. The maximum allowable is 256.
So what is causing the "too many table names" in query?
Simple. Every time an attribute is added to an entity, the entity views (regular entity view and filtered view) are updated. The filtered view in particular joins to the StringMap view for picklists and bit fields to obtain the corresponding friendly "name" field. For example, for an account field called "new_flag" it will join to the StringMap view to create a new virtual field in FilteredAccount called "new_flagname".
One only needed to look a little higher in the trace file to see the view being constructed with many joins for the bit and picklist fields. If the combined number of these two field types exceeds 256 (or thereabouts, given other joins that may already exist), this join limitation occurs. This is only a limitation on SQL 2005; it is no longer a limitation from SQL 2008 and up.
The options for resolving this issue are therefore:
- Upgrade. Really. The technology you are using is around 7 years old (at least) and there does come a point where the compelling reason to upgrade is just the combined benefits of all the various enhancements that have been introduced over time (that is, if you cannot find a single compelling reason).
- Clean up your system and review whether you actually do need all those fields in your environment. This actually is relevant whether you upgrade or not. I'm a big proponent of keeping the environment as clean as possible, as my first post on this blog will attest (disclosure: the above environment is extended directly by the client, as we like to encourage our clients to do so that they are not reliant on us for every little change required).
Friday, November 2, 2012
Passing Execution Context to Onchange Events
In a previous posting I provided some jscript that can be used to validate phone number formats. In order to invoke the validation for a phone number field, I mentioned that you need to create an "on change" event that passes in the attribute name and attribute description, i.e.:
function Attribute_OnChange() {
    PhoneNumberValidation("attributeName", "attributeDescription");
}
I thought this would provide a good example for demonstrating the use of the execution context, because you can obtain the field name (and, by extension, its label) via the execution context. On the surface this is a good thing, since you avoid the hard-coding in the example above (and you can apply the same handler to all phone number fields in the system). In theory, therefore, you could simplify that example as follows:
function PhoneNumberValidation(context) {
    var phone = context.getEventSource().getName();
    var phoneDesc = Xrm.Page.getControl(phone).getLabel();
    var ret = true;
    var phone1 = Xrm.Page.getAttribute(phone).getValue();
    var phone2 = phone1;
    if (phone1 == null)
        return true;

    // First strip the phone number down to its digits
    var stripPhone = phone1.replace(/[^0-9]/g, '');
    if (stripPhone.length < 10) {
        alert("The " + phoneDesc + " you entered must be at least 10 digits. Please correct the entry.");
        Xrm.Page.ui.controls.get(phone).setFocus();
        ret = false;
    } else {
        if (stripPhone.length == 10) {
            phone2 = "(" + stripPhone.substring(0, 3) + ") " + stripPhone.substring(3, 6) + "-" + stripPhone.substring(6, 10);
        } else {
            phone2 = stripPhone;
        }
    }
    Xrm.Page.getAttribute(phone).setValue(phone2);
    return ret;
}
The only difference is that instead of the "phone" and "phoneDesc" parameters being passed into the validation function, the execution context is instead passed in and the phone attribute and its corresponding phoneDesc label are obtained via the context as local variables. The rest stays the same.
In order for this to work, you would update the "on change" event to call the PhoneNumberValidation function directly and check off the "pass execution context as first parameter" as shown:
So that's the theory and I think it demonstrates quite nicely how the execution context can be used.
Having said that, in this particular example I prefer using the explicit technique referenced in the original posting. The reason is that the on change event in this validation example (and probably in most data validation cases) has a dual function:
The first is to provide the necessary validation as part of the field on change event as the example above will accomplish quite well.
The second is to be called from the on save event, to make sure that even if users ignore the message from the on change event, they will not be able to save the form (the PhoneNumberValidation function returns a true or false value to indicate whether validation passed). And when the function is called from the on save event, the specific field context is not going to be there anyway, making it necessary to put in some additional logic to handle it correctly. Therefore, what you gain from using the execution context in this example is likely to be offset by the special handling required by the on save event.
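To illustrate the on save side, here is a minimal sketch of a save handler that blocks the save when validation fails. "Form_OnSave" and "validateForm" are illustrative names of my own; validateForm stands in for whatever field validations (such as PhoneNumberValidation) you wire together, and the handler assumes "pass execution context as first parameter" is checked on the save event:

```javascript
// Sketch: cancel the save when a validation function returns false.
// The validation function is passed in so the cancel logic is generic.
function Form_OnSave(executionContext, validateForm) {
    if (!validateForm()) {
        // preventDefault on the save event args stops the save
        executionContext.getEventArgs().preventDefault();
    }
}
```

In a real form you would hard-code the call to your validation routine(s) inside the handler rather than pass them as a parameter; the parameter here just keeps the sketch self-contained.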