Customer Experience Improvement Program dialog

If you create a script that runs the AX client, e.g. to compile CIL, you might find that it gets stuck immediately after starting AX. It’s typically because your build user is being asked to join the Customer Experience Improvement Program. One option is to log in as the build user and choose Yes or No. But if the account doesn’t have permissions for interactive login, or you’re simply looking for an easier way, you can set the value directly in the SysUserInfo.SqmEnabled field.
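For example, a small X++ job like the sketch below can set the field directly. The user ID and the value are my assumptions, so check the SqmEnabled field (and its enum values) in your application before relying on it:

static void setCeipForBuildUser(Args _args)
{
    SysUserInfo sysUserInfo;

    ttsBegin;

    select forUpdate sysUserInfo
        where sysUserInfo.Id == 'BuildUser';    // hypothetical build user ID

    if (sysUserInfo)
    {
        sysUserInfo.SqmEnabled = 1;             // assumption: a non-zero value counts as an answer, so the dialog stops appearing
        sysUserInfo.update();
    }

    ttsCommit;
}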

I remembered this problem, but I couldn’t remember at all where the option is saved. From now on, I can always find it here. :)

Test Data Transfer Tool: Getting errors from log

I imported data to AX with the Test Data Transfer Tool and it told me that some errors occurred. The log file is quite large, so I asked myself what’s the easiest way to find these errors. This is my approach, using a very simple PowerShell script:

[xml]$dplog = Get-Content C:\Windows\System32\dplog.xml
$dplog.root.item | ? Status -eq "Failed"

Note that this is for PowerShell 3; you would have to change it to something like this if you still use PowerShell 2:

[xml]$dplog = Get-Content C:\Windows\System32\dplog.xml
$dplog.root.item | ? {$_.Status -eq "Failed"}

This is what the output looked like in my case:

status      : Failed
message     : One or more indexes were disabled on table TableXYZ to allow the data to import.
              Use the following SQL to enable the indexes once you've fixed the data:
                  ALTER INDEX ALL ON [TableXYZ] REBUILD
              The original index violation message is:
              Cannot insert duplicate key row in object 'dbo.TableXYZ' with unique index
              'I_104274XYZIDX'. The duplicate key value is (5637144576, 196, , ).
              The statement has been terminated.
direction   : Import
action      : Overwrite
database    : TestAX
table       : TableXYZ
targetTable : TableXYZ
folder      : C:\TestData

Tips for CIL debugging: Collection classes

In some cases, you can make your debugging much easier if you debug CIL instead of the original X++ code. Working with collection classes (lists, maps, etc) is such a case.

For example, the following picture shows a map (mapping customer IDs to CustTable records, e.g. for caching) as displayed by the AX debugger:

[Image: MapInAX]

If you do the same thing in CIL / Visual Studio debugger, it looks almost the same:

[Image: MapInVS]

But there is one huge difference – unlike in AX, you can actually open the content of the collection. You can easily see the number of elements and you can open any of them and see the full object graph (such as fields of CustTable records, in our case).

[Image: MapContent]

The inability to see the content of collections in the AX debugger can be quite annoying. You can’t simply see what’s inside; you need some code that explicitly iterates the collection, such as the sketch below. Debugging the CIL code instead of X++ can be much easier.
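Just for illustration, this is roughly the throwaway code you otherwise need in X++ (the names are mine; custMap is assumed to map customer account numbers to CustTable records):

MapEnumerator mapEnum = custMap.getEnumerator();
CustTable     custTable;

while (mapEnum.moveNext())
{
    // Dump each key/value pair to the Infolog just to see what the map contains
    custTable = mapEnum.currentValue();
    info(strFmt("%1 -> %2", mapEnum.currentKey(), custTable.name()));
}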

By the way, you can dig into many other things that look atomic in X++. For example, utcdatetime is a primitive type in X++, but it’s a struct in CIL and you can see many properties unavailable in X++:

[Image: CreatedDateTime]

Tips for CIL debugging: No variables displayed

Because AX 2012 sometimes executes CIL generated from X++ instead of X++ itself, debugging must be done in a debugger that understands CIL, which almost always means Visual Studio. AX developers sometimes complain about the need to switch between two debuggers, about slow loading of symbols for debugging and so on. Although these concerns are valid for the time being (things will change in AX7), using Visual Studio for debugging AX code also offers many options that you don’t get with the X++ debugger. It’s sometimes even worth forcing code to run in CIL just to be able to use the VS debugger.

Before actually going to the fancy stuff I want to show, let’s first address a problem that you might run into.

Imagine that you’re debugging a class with an instance variable but you don’t see anything in the list of variables:

[Image: EmptyLocals]

It’s actually quite a common problem, but I’ve never tried to investigate why it happens, because there is a simple solution: add the variable to the Watch window.

If you see the variable in code, right-click it and choose Add Watch.

[Image: AddWatch]

You can also type variable names into the Watch window:

[Image: TypingToWatch]

This not only adds the variables to the Watch window, but they’ll finally appear among Locals as well.

[Image: Variables]

Objects ignored by Code Upgrade Tool

When testing my new rule for the Code Upgrade Tool, I found that certain objects are skipped and rules are not checked for them. That makes the tool significantly less useful, because you can’t be sure that it found everything.

This blog post explains why it happens, but unfortunately it doesn’t provide any reasonable workaround, except making the problem more visible. You should simply be aware of it until Microsoft provides a fix. (Please let me know if there already is a fix that I missed.)

I created a rule to check for a certain code pattern and I added the pattern to a few objects for testing. Although the rule seemed to work, some objects were not included (the SalesLine table, for instance). Soon I discovered that it happened due to an exception thrown when checking rules for these objects. You can see the exception in the debugger, but it’s not displayed anywhere in the UI, unless you make the following one-line change in SysUpgradeRuleRunner.processUtilElement():

if(Microsoft.Dynamics.AX.Framework.Tools.CodeUpgradeTool.Parser.Severity::Fatal == xppParserSeverity)
{
    error = diagnosticItem.ToString();
    error(error); // new code to display the error
    continue;
}

If you use the Code Upgrade Tool, I would recommend adding this modification to your application. It will at least let you know that an object was skipped.

This is the error message I got:

An error happened while executing PipelineTypeResolverPipelineEntryCould not load file or assembly 'Microsoft.Dynamics.Retail.StoreConnect.TransAutomClient, Version=0.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The given assembly name or codebase was invalid. (Exception from HRESULT: 0x80131047)

What’s going on? First let me explain what happens when you run the Code Upgrade Tool:

  1. It finds all code in the current layer.
  2. Each application object (such as a class) is loaded for analysis. Note that the whole object is loaded, not just the code in the current layer.
  3. In some cases, especially when a .NET type is used in code, AX loads all assemblies listed under AOT > References (so it can later look for types in these assemblies). It’s normally done just once and cached for subsequent calls.

The problem is that one of these references can never be loaded. It’s the reference called TransAutomClient_x64, which refers to the assembly Microsoft.Dynamics.Retail.StoreConnect.TransAutomClient for the AMD64 processor architecture. The Dynamics AX client is a 32-bit process and it can’t load this assembly, therefore such an attempt must fail. I believe that the Code Upgrade Tool would work without any problem if this reference were removed from the AOT (but I can’t prove it, because it requires deleting a SYS-layer object).

AX tries to load .NET assemblies in several cases. For instance, if a .NET type is used in a variable declaration or as a method return type. It also happens when a static method is called on a .NET type, or when AX suspects it might be such a call. I noticed that even calls to table map methods (such as inventItemPrice.InventPriceMap::pcsPrice()) trigger loading of assemblies, because the syntax is exactly the same as for .NET calls (compare with System.Environment::GetLogicalDrives(), for example).
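Just to make the ambiguity visible, here is a small sketch; the declarations and return types are my assumptions, only the two calls matter:

static void mapCallSyntaxDemo(Args _args)
{
    InventItemPrice inventItemPrice;    // assumed table buffer
    Price           pcsPrice;           // assumed return type
    System.String[] drives;

    // Table map method call: pure X++, no .NET types involved
    pcsPrice = inventItemPrice.InventPriceMap::pcsPrice();

    // .NET static method call: syntactically identical Type::method() form
    drives = System.Environment::GetLogicalDrives();
}

Both lines use the same Type::method() syntax, which is why the tool ends up loading the referenced assemblies even for the purely X++ call.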

What can be done about it? I don’t think we can do anything by ourselves. What Microsoft should do is simply skip assemblies that can’t be loaded. The process might fail later, if some code actually uses types defined in that assembly, but that’s inevitable. What happens now is much worse: it fails even if the library isn’t used by any code at all. There is still a potential issue that the tool loads assemblies even if no .NET types are needed (because of table map calls, as above), but it wouldn’t really cause any harm if assembly loading behaved reasonably.

I also logged this issue on Connect (link). I really hope that Microsoft will address it soon, because it significantly affects the usefulness of this great tool.