Summarized values in AX form

Users sometimes want to see both individual transactions in a grid of a Dynamics AX form and some summarized values, such as the total amount or the number of lines (often above or below the grid). Iterating through the whole datasource and getting values one by one isn’t efficient, especially if the query returns many rows. A much better solution is taking the query, modifying it to use an aggregation function (such as SUM() or COUNT()) and sending a single, efficient request to the database.

My example assumes that I have a form showing customer invoice lines and I want to calculate the total amount of all lines fulfilling current filters (and show it in a separate control).

public void updateTotal()
{
    // Copy the query, so the one used by the datasource isn't modified
    Query query = new Query(CustInvoiceTrans_ds.queryRun().query());
    QueryBuildDataSource qbds = query.dataSourceTable(tableNum(CustInvoiceTrans));
    QueryRun qr;
    CustInvoiceTrans summedTrans;

    // Sum LineAmountMst
    qbds.addSelectionField(fieldNum(CustInvoiceTrans, LineAmountMst), SelectionField::Sum);

    // Run the query
    qr = new QueryRun(query);
    qr.next();

    // Get the data
    summedTrans = qr.get(tableNum(CustInvoiceTrans));

    // Set the new sum to the control (TotalAmount stands for whatever your control is called)
    TotalAmount.realValue(summedTrans.LineAmountMst);
}

The first statement is extremely important, because it defines which query you want to use. I take CustInvoiceTrans_ds.queryRun().query(), because I want to respect filters defined by users. If that weren’t the case, I would use CustInvoiceTrans_ds.query(). Both scenarios are valid; the choice depends on your functional requirements.

It’s also worth noting that I modified a copy of the query. If I modified the query used by the datasource, I would actually get the summed result in my grid, which wouldn’t make sense.

Then I just have to call the method every time the datasource query executes.

public void executeQuery()
{
    super();

    // updateTotal() is the form method defined above
    element.updateTotal();
}

Refreshing form parts

When using form parts in AX 2012, you sometimes need to explicitly refresh their data based on an event in the main form. It may not be completely obvious how to do it, but it’s not too complicated in the end.

Form parts are actually forms by themselves, and if you know how to manipulate forms at runtime, you know how to work with parts too. The tricky part is getting a reference to a form part.

One possible solution is adding the following method to the SysSetupFormRun class (so it’s available to all forms):

public FormRun getFormPartByName(str _name)
{
    PartList partList = new PartList(this);
    FormRun part;
    int i;

    for (i = 1; i <= partList.partCount(); i++)
    {
        part = partList.getPartById(i);
        if ( == _name)
        {
            return part;
        }
    }
    return null;
}

As you see, it iterates over all parts in the form and returns the part with the given name.

Then you can call it from your form to get a reference to a particular part and do anything you like with it, such as refreshing the data:

public void refreshMyFactBox()
{
    SysSetupFormRun formRun = this as SysSetupFormRun;
    FormRun factBox = formRun.getFormPartByName('MyInfoPart');

    if (factBox)
    {
        // For example, re-read the data of the part's datasource
        factBox.dataSource().research();
    }
}

Note that if it’s a form part (rather than an info part), you have to provide the name of the underlying form, such as:

FormRun factBox = formRun.getFormPartByName(formStr(MyFormPartForm));

Configuration of LCS System Diagnostics

System Diagnostics from Dynamics Lifecycle Services is a really handy tool – it collects data about your Dynamics AX environments and warns you if your setup is not optimal from a performance perspective, if a number sequence is running out of available numbers, if batches are failing and so on. It allows you to act proactively, rather than waiting for something serious to happen.

The only problem with this tool is configuration, because you have to grant the service account permissions to quite a few things, but you typically don’t want to allow everything. The recommended configuration therefore cherry-picks individual items to set permissions for, such as individual registry keys. It’s well documented; unfortunately, it still consists of a large number of manual steps and it’s very easy to do something wrong, especially if you have many servers to configure.

Below you can find scripts automating a few tasks, such as adding the service account to the necessary user groups. It’s by no means exhaustive and you’ll still have to do many things manually, but it’s better than nothing. I didn’t mean it as an ambitious project; I merely implemented a few easy wins the last time I was configuring System Diagnostics – and now I’m sharing them with you.

Examples below expect that you’ve set variables with the domain and service account name:

$domain = 'MyDomain'
$accountName = 'LcsServiceAccount'

You’ll likely need to run the scripts “As administrator”.

# Adds system diagnostics service account to AX
Function Add-LcsAccountToAX
{
    Param(
        [Parameter(Mandatory=$true)][string]$User,
        [Parameter(Mandatory=$true)][string]$Domain,
        [string]$AxUserId = 'LcsDiag'
    )
    # Requires AX management module (e.g. running in AX management shell)
    New-AXUser -UserName $User -UserDomain $Domain -AXUserId $AxUserId -AccountType WindowsUser
    Add-AXSecurityRoleMember -AxUserID $AxUserId -AOTName SysBusinessConnectorRole
}

# Usage:
Add-LcsAccountToAX -User $accountName -Domain $domain
# Grants read access to a registry key
Function Set-RegistryReadPermissions
{
    Param(
        [Parameter(Mandatory=$true)][string]$Account,
        [Parameter(Mandatory=$true)][string]$RegKey
    )
    $rule = New-Object System.Security.AccessControl.RegistryAccessRule ($Account, 'ReadKey', 'ObjectInherit,ContainerInherit', 'None', 'Allow')
    $acl = Get-Acl $RegKey
    $acl.AddAccessRule($rule)
    $acl | Set-Acl
}

# Usage:
$domainAccount = "$domain\$accountName"

# Run on AOS server
Set-RegistryReadPermissions -Account $domainAccount -RegKey 'HKLM:\System\CurrentControlSet\services\Dynamics Server\6.0'

# Run on database server
Set-RegistryReadPermissions -Account $domainAccount -RegKey 'HKLM:\System\CurrentControlSet\Control\PriorityControl'
# Adds service account to Windows user groups
Function Add-DomainUserToLocalGroup
{
    Param(
        [Parameter(Mandatory=$true)][string[]]$Group,
        [Parameter(Mandatory=$true)][string]$Domain,
        [Parameter(Mandatory=$true)][string]$User,
        [string]$Computer = $Env:ComputerName
    )
    foreach ($g in $Group)
    {
        $adsi = [ADSI]"WinNT://$Computer/$g,group"
        $adsi.Add("WinNT://$Domain/$User,user")
    }
}

# Usage:
$groups = 'Event Log Readers','Distributed COM Users','Performance Monitor Users'
Add-DomainUserToLocalGroup -Group $groups -Domain $domain -User $accountName

Preview of Release Management

When you develop a new solution, it’s useless until you deploy it to a production environment where users can actually use it. Before you can do that, you typically need a few extra deployments to test environments. This takes time that people could spend working on new features, and it often requires many manual steps, which is always error-prone. Automation can both save resources and prevent people from doing something wrong. The need is getting more urgent as release cycles shorten to deliver new features to users as soon as possible.

Release Management for Visual Studio can help you to do that. It allows you to define the release process and approvers, execute deployments, track their history, send email notifications and so on. You still have to define what steps will run and what they’ll do; the value of Release Management is in handling all the infrastructure around.

I spent some time preparing releases of Dynamics AX with Release Management for Visual Studio 2013 and 2015. Unlike Kenny Saelen in his blog series, I used so-called vNext release templates – a more lightweight approach that doesn’t require deployment agents and uses either Powershell or Chef to handle all release tasks (here is the documentation, if you’re interested).

I was a bit disappointed, because it lacked quite a few features I needed, especially VSO support for on-premise environments. Diagnosing release tasks was also quite difficult, and a few other things just didn’t behave as I would like.

Therefore I was really keen to jump into a preview version of the new Release Management, which promised to address several of my problems, and I’m happy to announce that it does. It doesn’t do everything I would like (at least not yet), but it’s a great improvement.

Let me demonstrate the basics on a simplified release of Dynamics AX. The project uses Visual Studio Online as the version control and XAML builds running on an on-premise server.

You don’t need your own Release Management server for on-premise deployments anymore; it’s all hosted in Visual Studio Online. You can find it on the Release tab, but currently (October 2015) only if you’re participating in the preview program.


When you create a new release template, you have to define one or more environments. Then you can manage and track how far you’ve released your code, such as seeing that it’s currently in the FAT environment.


You need some tasks to do the actual work. As you can see, you have quite a few tasks to choose from:


Nevertheless, my AX release is implemented completely in Powershell (I just slightly modified my existing scripts, based on the DynamicsAxCommunity module), therefore I don’t use any other type of task:

To get files to deploy, link the release definition with an artifact source. In my case, I linked it with a team build, therefore my release will get all files produced by a particular build.


Here I ran into a limitation, because my release scripts are not in the same team project as my AX code and I can’t easily link artifacts from other projects. That’s why I’ve put the scripts in my environment and refer to them through absolute paths (\\Server1\RMScripts\…) accessible from all machines. If I could link artifacts from two projects, Release Management would download the release scripts for me.

When triggering a new release, I can choose which build contains application files to deploy. I could also configure it to run automatically after a build (such as deploying a test environment immediately after a night build of latest code).


You can track the progress of your release directly in your browser, including output from your tasks (such as Install-AXModel and axbuild in the picture below). It’s wise to keep tasks relatively small, so you can easily see what’s running (or what failed, in the worst case).


You need an agent installed in your environment to handle all communication with VSO and execute tasks. Tasks can connect to other machines and deploy things there, therefore you may have a single agent deploying to a dozen servers. On the other hand, you may want more agents, to run multiple releases in parallel or to have agents with different capabilities (such as different software installed).

The agent can run as a Windows service, but you can also start it as a normal command-line application and monitor what it does.


And of course, you can see history of all releases, their status, logs and so on.


Although it’s still in a preview stage, it already runs really well. And because it’s so lightweight and allows integrating Powershell scripts with just a few clicks, I think I’ll use it even for small things that didn’t require any sophisticated solution before, but that will benefit from being easily available to the whole team and from keeping a history of all runs.

The new Release Management is expected to be released this year; unfortunately, I don’t know anything more specific at the moment. In the meantime, you can learn more about it from documentation on MSDN.

Unit testing in AX – Dependencies

In my previous post on unit testing, I explained how to handle a few simple situations; nevertheless, things often get more complicated. One such complication occurs when the class that you want to test uses other objects. That may be a problem for several reasons – it’s code that we don’t want to test at the moment (because the test would become too complex), it depends on something that our test can’t control (e.g. values returned from an external system), it’s too slow and so on.

The solution is to remove tight coupling between units and allow substituting dependency objects with something provided by the test (= dependency injection). Following these design principles will also make your code more flexible and easier to maintain; designing code for testing will therefore force you to make your code better in several respects.
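To make the idea concrete, here is a minimal dependency-injection sketch in Java (chosen because a Java example appears later in this post; all class and method names are invented for illustration):

```java
// Hypothetical example: a service depending on an exchange-rate source.
interface RateSource {
    double rate(String currency);
}

class PriceService {
    private final RateSource rates;

    // The dependency is injected by the caller, not created inside the class.
    PriceService(RateSource rates) {
        this.rates = rates;
    }

    double priceIn(String currency, double basePrice) {
        return basePrice * rates.rate(currency);
    }
}

public class Demo {
    public static void main(String[] args) {
        // A test can inject any implementation, e.g. a fixed exchange rate,
        // so no database or external service is needed.
        PriceService service = new PriceService(currency -> 25.0);
        System.out.println(service.priceIn("CZK", 2.0)); // prints 50.0
    }
}
```

Because PriceService never constructs its RateSource itself, the test fully controls what the dependency returns.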

Let me show you a very simple example, where I have a single dependency on database:

public Amount availPhysical(ItemId _itemId, InventDimId _inventDimId)
{
    InventSum is = InventSum::find(_itemId, _inventDimId);

    return is.PostedQty
        + is.Received
        - is.Deducted
        + is.Registered
        - is.Picked
        - is.ReservPhysical;
}

If we want to test this method with different values, we would have to save new values to the database before running each test. That can be complicated due to references to other tables, and changing data in the database would influence everybody using the same data, including other running tests. There are ways to do it, but we’ll now try to avoid the database completely. Also note that operations in memory are much faster than accessing the database, so you can run a huge number of tests in a very short time.

If you look at the method, you’ll notice that it does two different things – it finds an InventSum record and it does the calculation. If we split it into two methods, each with its own responsibility, testing the calculation becomes trivial.

// The complicated calculation logic was extracted to this method
public Amount calcAvailPhysical(InventSum _is)
{
    return _is.PostedQty
        + _is.Received
        - _is.Deducted
        + _is.Registered
        - _is.Picked
        - _is.ReservPhysical;
}

// The remaining code is here
public Amount availPhysical(ItemId _itemId, InventDimId _inventDimId)
{
    return this.calcAvailPhysical(InventSum::find(_itemId, _inventDimId));
}

Now we can easily write tests like this:

InventSum is;

is.Received = 8;
is.ReservPhysical = 5;

this.assertEquals(3, new MyClass().calcAvailPhysical(is));

This covers calcAvailPhysical(), but what about availPhysical()? Honestly, I don’t think it’s worth investing time into. It calls only two methods, without any conditions or anything. The only thing that could go wrong is using wrong parameters for find(), and that may be covered by code review and other types of testing.

You might say that you want to cover all code by unit tests, and it would indeed be nice, but on the other hand, you have to think about the price and whether you really want to test the data access layer at all. This example demonstrates the approach I prefer – extracting complicated pieces of logic and testing them thoroughly. The parts that are difficult to test are kept trivial and unlikely to change very often.

There surely are cases when testing against the database is needed, such as when the most important logic to test is a database query. I’m going to discuss database access in a later post.

Let’s look at a few other cases involving dependencies.

Maybe you have quite a few values to pass to your method, and the number of parameters would grow quickly. You can always create a class functioning as a data container and use it instead of many parameters (think of a data contract – it’s the same design pattern).
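Such a data container might look like the following Java sketch (all names are invented for illustration):

```java
// Hypothetical container grouping values that would otherwise be
// passed as a long list of parameters (the same idea as a data contract).
class InventQuantities {
    double posted;
    double received;
    double deducted;
}

class AvailabilityCalculator {
    // One container parameter instead of many scalar parameters.
    double available(InventQuantities q) {
        return q.posted + q.received - q.deducted;
    }
}

public class Demo {
    public static void main(String[] args) {
        InventQuantities q = new InventQuantities();
        q.posted = 10;
        q.received = 5;
        q.deducted = 3;
        System.out.println(new AvailabilityCalculator().available(q)); // prints 12.0
    }
}
```

Adding another value later means adding a field to the container, not changing every method signature along the call chain.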

In the example above, the method got everything in parameters. But that’s often not the best design; another variant is setting dependencies to the object’s state.

Here I have a little more complex example. I’m going to test ClassToTest; unfortunately, it uses DependencyObject, which I don’t want to cover by my test.

class ClassToTest
{
    DependencyObject depObj;

    void new()
    {
        depObj = DependencyObject::construct(MyParameters::find().DependencyType);
    }
}

DependencyObject is instantiated inside ClassToTest, therefore my test can’t control it. Furthermore, it’s initialized in different ways depending on a parameter in the database, so my test would have to know about this implementation detail and change the parameter as needed (with all the troubles mentioned before).

Let’s just slightly refactor the class. We’ll allow setting depObj via a parm* method, while keeping the initialization logic in place in a factory method:

class ClassToTest
{
    DependencyObject depObj;

    // Caller code can set the dependency through this method
    public DependencyObject parmDependencyObject(DependencyObject _depObj = depObj)
    {
        depObj = _depObj;
        return depObj;
    }

    // The class still offers a method for initialization from the parameter,
    // but it's now just one of possible ways.
    public static ClassToTest newDefault()
    {
        ClassToTest c = new ClassToTest();
        c.parmDependencyObject(DependencyObject::construct(MyParameters::find().DependencyType));
        return c;
    }
}

Now tests can create DependencyObject in any way they need and they don’t have to use the parameter table at all.

ClassToTest c = new ClassToTest();
DependencyObject d = ...
c.parmDependencyObject(d);
... act and assert ...

This makes the solution more flexible too – if you have a new requirement that needs depObj created in some other way, you can reuse the class without any change. This is yet another example where designing for testability leads to more robust architecture.

Note that you don’t always want to expose all dependencies to caller code, because it would reveal implementation details that should stay hidden in the class. As with any other development, you have to weigh many aspects when designing the architecture of your program. By the way, if you write .NET components, you can hide such logic inside assemblies and access internal members from tests with the help of InternalsVisibleTo.

The technique shown above works great if all we need is to inject specific values into the tested unit, but sometimes we want to inject different behaviour. In many cases, we simply want to “turn off” some logic interfering with the test.

For example, doStuff() runs some logic I want to test and it prints a report.

void doStuff()
{
    ... some interesting logic ...

    // Prints the report; reportRun is the dependency we'll want to replace in tests
    reportRun.run();
}

To avoid all potential troubles with printing inside a test, we can replace the actual reportRun with a custom subclass overriding the run() method and doing nothing:

class MySRSReportRunDoingNothing extends SRSReportRun
{
    public void run()
    {
        // Nothing here
    }
}

The test will inject the dummy object (a so-called test stub) into the class under test:

ClassToTest c = new ClassToTest();
c.parmReportRun(new MySRSReportRunDoingNothing());
c.doStuff();
... assert ...

By the way, there may be better ways to design the code – for example, I could extract the logic to test into a separate method, or add a flag controlling whether the report should be printed. Therefore, if you find yourself writing many test stubs, maybe you should focus more on design for testability.

You can do much more with these test stubs. Instead of disabling all functionality as above, you can actually configure them to do whatever is useful for your test. For example, they can simulate access to a database or calls to web services and return exactly what you need:

class WebServiceTestStub
{
    public str getData(str _id)
    {
        if (_id == "Test1")
        {
            return "Data expected by a specific test case";
        }

        // IDs not used by any test simply get an empty string
        return "";
    }
}

You can also use these objects to verify whether the system under test did what it should have done. For example, you test a piece of logic that should call posting. You don’t want to run the posting itself, but you want to verify that it got called (and that it didn’t get called if a validation failed). You can create a so-called mock object to remember that post() was called and then write assertions for it.

ClassToTest c = ...
c.parmPosting(new MockPostingSubSystem());

This goes way beyond mere isolation – it’s a quite different type of testing.
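A hand-written mock like the one above can be sketched in Java as follows (all names are hypothetical; an X++ version would follow the same pattern):

```java
// Hypothetical posting interface the class under test depends on.
interface PostingSubsystem {
    void post(String documentId);
}

// Mock that records whether (and with what) post() was called.
class MockPosting implements PostingSubsystem {
    int calls = 0;
    String lastDocument;

    public void post(String documentId) {
        calls++;
        lastDocument = documentId;
    }
}

class InvoiceProcessor {
    private final PostingSubsystem posting;

    InvoiceProcessor(PostingSubsystem posting) {
        this.posting = posting;
    }

    // Posts only documents that pass validation.
    void process(String documentId, boolean valid) {
        if (valid) {
            posting.post(documentId);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        MockPosting mock = new MockPosting();
        InvoiceProcessor processor = new InvoiceProcessor(mock);

        processor.process("INV-001", true);
        processor.process("INV-002", false);

        // The test asserts on what the mock recorded.
        System.out.println(mock.calls);        // prints 1
        System.out.println(mock.lastDocument); // prints INV-001
    }
}
```

The mock never runs any real posting; it only remembers the calls so that the test can verify them afterwards.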

By the way, you can find sophisticated mock frameworks for other languages, which greatly simplify working with these objects. You don’t have to create them by yourself; you just configure what type you want to mock, what the expected behavior is and so on.

Just to give you an idea, the following test (written in Java with jMock) creates a mock of the Subscriber class and defines that its receive() method should be called with a given message. The framework creates the mock object for you at runtime and verifies the expectations. If receive() isn’t called exactly once or the parameter is wrong, the test fails.

public void testOneSubscriberReceivesAMessage()
{
    // set up
    Mock mockSubscriber = mock(Subscriber.class);
    Publisher publisher = new Publisher();
    publisher.add((Subscriber) mockSubscriber.proxy());

    final String message = "message";

    // expectations
    mockSubscriber.expects(once()).method("receive").with( eq(message) );

    // execute
    publisher.publish(message);
}
Some mocking frameworks allow things like mocking static methods; therefore, you could take a piece of code (written without testing in mind) and mock out inconvenient find() methods, for example. Unfortunately, we don’t have any such framework for X++ (Ax Dynamics Mocks never got far enough), but my hope is that we’ll get one (or build one) one day. AX 7 will give us new ways to achieve it.

Nevertheless, mocking frameworks are by no means necessary for unit testing – they just make things more convenient.

When you’re writing a class with a dependency, it’s worth considering interfaces instead of specific classes. First of all, it makes your code really flexible, without unnecessary restrictions. For example, if you need logging in your class, you just need some object that can accept a logging message, and you don’t really care what kind of object it is. Therefore, create an interface and use it as the type of your dependency:

interface ILogger
{
    void logMessage(Exception _ex, str _msg);
}

class MyClass
{
    ILogger logger;
    ...
}

That’s a nice, robust implementation – you can easily replace one way of logging with another, and you can even introduce things like composite loggers calling several other loggers – all without ever changing your class. But it also means that your test stub doesn’t have to inherit from an existing logger – it can be any class implementing the right interface. That’s much easier, because if your test stub inherits code from a parent class, you often have to be very careful to override all methods called by the class under test. With interfaces, the compiler tells you what to implement, and you can be sure you don’t inherit anything you don’t want.
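As an illustration, a composite logger built on such an interface could look like this Java sketch (the interface is simplified to a single string parameter; all names are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified logging interface; any class implementing it can be injected.
interface Logger {
    void logMessage(String message);
}

// Composite logger forwarding each message to several other loggers.
class CompositeLogger implements Logger {
    private final List<Logger> loggers = new ArrayList<>();

    void add(Logger logger) {
        loggers.add(logger);
    }

    public void logMessage(String message) {
        for (Logger logger : loggers) {
            logger.logMessage(message);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        CompositeLogger composite = new CompositeLogger();
        composite.add(m -> System.out.println("console: " + m));
        composite.add(m -> System.out.println("file: " + m));

        // The class under test only ever sees the Logger interface,
        // so it never notices it's talking to several loggers at once.
        composite.logMessage("it works");
    }
}
```

A test stub for Logger is equally trivial – a lambda or a tiny class that records the messages it received.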

You can see that it’s all about designing code in a way that makes testing easy. People new to unit testing often don’t realize this and try to test virtually untestable code. It’s no surprise they find it awkward and complicated – they should refactor the code first before testing it. This is an obvious problem with a lot of standard code in Dynamics AX, because you can’t test it and you also don’t want to change it (to avoid upgrade conflicts in the future). My advice may be surprising: just don’t do it. Test your code, not code developed by somebody else, such as Microsoft and ISVs. Yes, you often modify standard code, but if you keep things separated, you can test your code with unit tests and leave the integration with standard code to a different type of testing. It works really well for me – give it a try.

Although we’ve covered quite a lot today, there are whole books on this topic (even I would have much more to say) and a single blog post can only scratch the surface. I hope it makes some sense to you, because I think it’s what people often miss when starting with unit testing – and why they ultimately give it up. They quickly learn things like assertion methods, but never get to know how to design complex code that can still be decomposed into units for unit testing. I showed you a few basic techniques, but it’s mainly for demonstration. As with any other code, you have to use your own developer skills to design the best solution for your particular situation.