Electronic reporting: Method returning a list of records

I had a scenario in electronic reporting where I wanted to reuse existing X++ code that generates some temporary table records for reporting purposes. Therefore I wanted electronic reporting to call my X++ method (for a particular context, an invoice in this case) and use the list of records returned by this method.

Electronic reporting supports method calls, but all the information I found on the internet was about methods returning a single primitive value or a single record.

But it turned out it’s supported and actually quite easy to use.

The key is that the method must return one of the supported data types and be decorated with ERTableNameAttribute. Like this:

[ERTableName(tableStr(MyTable))]
public static RecordLinkList getData() { ... }

The supported data types are:

  • Query
  • RecordLinkList
  • RecordSortedList
  • any class implementing System.Collections.IEnumerable interface (.NET arrays, lists etc.)

If you wonder how I know this: I found the definition in ERDataContainerDescriptorBase.isRecordCollection().

One way of using such a method is defining a static method in a class and consuming it through the Class data source type in ER.

Let me also extend the example with a parameter, to give the method some instructions about what it should generate:

public class ERTableDemo
{
    [ERTableName(tableStr(MyTable))]
    public static RecordLinkList getData(str _param) { ... }
}
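To make the sketch more concrete, here is a minimal hypothetical implementation. MyTable, its Description field and the record-generation logic are placeholders for your own code:

```xpp
public class ERTableDemo
{
    [ERTableName(tableStr(MyTable))]
    public static RecordLinkList getData(str _param)
    {
        RecordLinkList records = new RecordLinkList();
        MyTable myTable;

        // Populate the buffer; real code would generate records
        // based on _param, e.g. by calling existing X++ logic.
        myTable.clear();
        myTable.Description = _param;

        // ins() copies the current buffer contents into the list.
        records.ins(myTable);

        return records;
    }
}
```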

In a model mapping, add a new Class data source and select your class. You’ll see a list of methods, and if you expand a method, you’ll see the table fields available for binding.

When binding the method to a record list, we can provide a value for the parameter. We can also bind field values as usual.

But I would rather use an instance method on a table, which would produce data related to the given record (such as an invoice).

I saw instance methods with ERTableNameAttribute in standard code, therefore I knew it could be done on tables, but I wasn’t sure whether ER takes table extensions into account.

I tried an extension like this:

[ExtensionOf(tableStr(CustInvoiceJour))]
public final class CustInvoiceJourMy_Extension
{
    [ERTableName(tableStr(MyTable))]
    public RecordLinkList getMyTableRecords()
    {
        ...
    }
}

and I am able to use it in an ER model mapping.

This is ideal.

Getting attributes in X++

In X++, you can decorate classes and methods with attributes. Attributes were added in AX 2012 (I believe), where the typical use case was a definition of data contracts. They’re much more common in F&O, because they’re also used for CoC and event handlers.

For most developers, attributes are something defined by Microsoft and used by standard frameworks, but it’s something we can utilize for our own (advanced) scenarios when suitable.

You can easily define your own attributes (an attribute is simply a class inheriting from SysAttribute) and also check whether a class or method has a particular attribute and get attribute parameter values (e.g. the country from CountryRegionAttribute).

To get the attributes, you can use either Dict* classes or the new metadata API.

DictClass and DictMethod classes offer getAttribute() and getAttributes() to get attributes of a specific type, and getAllAttributes() to get all. For example:

DictClass dictClass = new DictClass(classNum(AssetAcquisitionGERContract));
Array attributes = dictClass.getAllAttributes();
 
for (int i = 1; i <= attributes.lastIndex(); i++)
{
    SysAttribute attribute = attributes.value(i);
    info(classId2Name(classIdGet(attribute)));
}
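For a specific attribute type, getAttribute() is more convenient. A sketch, assuming the hypothetical ERTableDemo class from the previous post (with its getData() method decorated with ERTableNameAttribute):

```xpp
DictMethod dictMethod = new DictMethod(UtilElementType::ClassStaticMethod,
                                       classNum(ERTableDemo),
                                       staticMethodStr(ERTableDemo, getData));
SysAttribute attribute = dictMethod.getAttribute(classStr(ERTableNameAttribute));

if (attribute)
{
    info("The method is decorated with ERTableNameAttribute.");
}
```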

And here is the same thing with the metadata API:

using Microsoft.Dynamics.AX.Metadata.MetaModel;
...
AxClass axClass = Microsoft.Dynamics.Ax.Xpp.MetadataSupport::GetClass(classStr(AssetAcquisitionGERContract));
var enumerator = (axClass.Attributes as System.Collections.IList).GetEnumerator();
 
while (enumerator.MoveNext())
{
    AxAttribute attribute = enumerator.Current;
    info(attribute.Name);
}

Run settings for SysTest

When you execute automated tests of X++ code with SysTest, the test service (the SysTestService class) gets called with some parameters, defined in SysTestRunnerSettings.

You could, for example, set granularity to execute just unit tests and skip integration tests, or produce a trace file for diagnostics.

You may want to use such parameters in automatic processes (e.g. running certain types of tests on gated builds) or directly in Visual Studio.

Visual Studio supports such configuration through .runsettings files (see Configure unit tests by using a .runsettings file). You can create a .runsettings file (a simple XML file), put it in your solution directory, and it’ll be used automatically when running tests from Test Explorer. You can also select a particular file in Test > Configure Run Settings.

I wondered if the same approach could be used for SysTest parameters, and it can indeed; you just need to put the parameters into a SysTest element (and you must know the correct property names).

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <SysTest>
      <RunInNewPartition>false</RunInNewPartition>
      <TraceFile>c:\temp\TestTrace.txt</TraceFile>
  </SysTest>
</RunSettings>

Detection of code upgrade conflicts in F&O

When you overlayered an application element (e.g. a method or a form) in Dynamics AX, a copy was saved in a higher layer. You modified the object there, therefore you ended up with two copies of the same element – the original one in a lower layer (such as SYS) and your modified one in a layer like CUS.

A problem occurred when Microsoft updated the element, either by directly changing its code or by introducing an additional copy in another layer, such as SYP or GLS. Your copy based on the older version effectively hid those changes, unless you upgraded your code and incorporated them. The upgrade could be relatively difficult, especially if developers didn’t think about upgrades in advance.

In F&O, you can’t use overlayering anymore, therefore your changes can’t hide standard code (in most cases). Microsoft gives you a new version, your extensions get applied and everything is fine. The difficult and expensive code upgrade process isn’t needed anymore.

Almost.

There are still breaking changes and new features that you need to take into account. For example, if you have extensions of the Sales order header V2 entity and Microsoft introduces V3, you need to add your extensions to the new version too.

But sometimes we introduce exactly the same problem that we used to have in Dynamics AX. We can’t create a copy by overlayering, but we can still duplicate an element manually (a method, a data entity etc.). For example, we want to use a Microsoft internal class, therefore we duplicate the class in our model and use it from our code. Then Microsoft changes their code, but we’re still using the old version, until we notice the problem and apply the same changes to our copy.

It’s the same problem as in Dynamics AX, but now we’re in an even worse position. There we used to have tools to detect code conflicts for us (and even fix some of them automatically), but we don’t have them in F&O.

With layers, we knew that our element was a copy of a standard one. For example, it was clear that our CustTable form in the CUS layer was related to the CustTable form in the SYP layer. There was a tool that could compare two versions of an application, notice that the CustTable form had changed and that we had overlayered it, and therefore that we had an upgrade conflict there. It could tell us how Microsoft changed the form and what changes we made to the older version. Then we had to merge these two sets of changes.

Without layers, duplicating an element means creating a new one with a different name. For example, I could duplicate CustTransEntity and create XYZCustTransEntity. There isn’t anything linking these two entities. Without a careful examination, we can’t say whether XYZCustTransEntity was created as a copy of CustTransEntity or not. Therefore even knowing which elements we should check is problematic.

Let’s assume for now that we’re able to do it and we have a list of standard elements and our copies.

When upgrading the application, we have to compare the old and the new version to identify changed elements. It’s not particularly difficult – it means comparing the files, and if they’re different, we can also check individual elements inside the XML files. We’re interested just in those objects that we’ve duplicated.

This would allow us to create a report showing which custom elements we need to upgrade (and possibly what has changed in the standard application).

Implementation

I’m not aware of any tools for this purpose; please let me know if you are. I don’t have any either, but let me consider how it could be done.

To be able to compare two versions of the application, we simply need to have both sets of files somewhere. For example, we can copy standard packages to a separate folder, install an application update and compare the folders. It would be nice to have such a repository already provided by Microsoft. A similar process is needed for ISV solutions as well.

Comparing the sets of files isn’t difficult. For example, PowerShell offers the Compare-Object cmdlet, which can be used to compare file contents or hashes.
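A sketch of such a comparison; the folder paths and version numbers are examples, and files differing between the two versions (or present in only one) get reported:

```powershell
# Hash every file in a version folder, keyed by its path relative to the root.
function Get-Hashes($root) {
    Get-ChildItem $root -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            RelativePath = $_.FullName.Substring($root.Length)
            Hash         = (Get-FileHash $_.FullName).Hash
        }
    }
}

$old = Get-Hashes 'C:\AppVersions\10.0.40'
$new = Get-Hashes 'C:\AppVersions\10.0.41'

# Report files whose content differs between the versions.
Compare-Object $old $new -Property RelativePath, Hash |
    Select-Object -ExpandProperty RelativePath -Unique
```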

Knowing that there is a change in a file is insufficient if we’ve duplicated just something like an individual method. But that can be addressed easily; we just need to extract the particular elements from XML files and compare the values.

The key problem I see is the identification of our copies of standard objects. There isn’t any reliable automatic way; the best we could get is information that some objects are very similar, but that doesn’t necessarily mean that one is a duplicate of the other and should be kept in sync. I believe we need to explicitly establish the relation when making a copy. Potentially, Visual Studio could help us with that.

There are many ways to store this information; one of the simplest is an XML file. For instance:

<Copies>
  <Entry
    Orig="dynamics://Table/SalesLine/Method/recalculateDatesForDirectDelivery"
    OrigVersion="10.0.41"
    Copy="dynamics://Class/SalesLineXYZ_Extension/Method/xyzRecalculateDatesForDirectDelivery" />
</Copies>

This is easy to process by tools; the disadvantage I see is that it’s not automatically maintained with the code. For example, if I decide to change the prefix of the method (and don’t update the file), the link will break. Of course, we can have a process that checks the file and reports invalid references (e.g. during CI builds).

Another approach could, for instance, involve a special attribute for code changes:

[CodeCopy('dynamics://Table/SalesLine/Method/recalculateDatesForDirectDelivery', '10.0.41')]
internal void xyzRecalculateDatesForDirectDelivery()
{ ... }
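Since an attribute is just a class derived from SysAttribute (as mentioned earlier), the hypothetical CodeCopy attribute could be defined like this:

```xpp
// Hypothetical attribute marking an element as a copy of a standard element.
public class CodeCopyAttribute extends SysAttribute
{
    private str origPath;
    private str origVersion;

    public void new(str _origPath, str _origVersion)
    {
        super();

        origPath    = _origPath;
        origVersion = _origVersion;
    }

    public str parmOrigPath()
    {
        return origPath;
    }

    public str parmOrigVersion()
    {
        return origVersion;
    }
}
```

A tool could then read these values through metadata (e.g. DictMethod.getAttribute()) instead of a separate XML file.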

This can’t be used everywhere; e.g. you can’t put an attribute on an SSRS report. We have tags there, although putting a lot of information in them would look ugly. If Microsoft got involved, they could create a new property for this purpose.

Let’s keep it simple and consider the XML file for now.

The process of upgrade conflict detection could look like this:

  1. Read an entry from the XML file.
  2. Identify the source element and the source file (e.g. the class file for a class method).
  3. Get both versions of the file: the original version and the current one.
  4. Compare files. If they’re identical, stop processing of this entry.
  5. If the source element is a method, compare just the method element in both files. If they’re identical, end processing.
  6. Report the upgrade conflict.
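The steps above could be sketched in PowerShell. Resolve-SourceFile is a hypothetical helper that would map an element path like dynamics://Table/SalesLine/… to the corresponding source file in a given application version folder:

```powershell
[xml]$copies = Get-Content 'C:\Repo\Copies.xml'   # the mapping file from the example

foreach ($entry in $copies.Copies.Entry)
{
    # Steps 2-3: locate both versions of the source file.
    $oldFile = Resolve-SourceFile $entry.Orig 'C:\AppVersions\old'
    $newFile = Resolve-SourceFile $entry.Orig 'C:\AppVersions\new'

    # Steps 4-5: a quick hash comparison first; a method-level XML
    # comparison could be added here for finer granularity.
    if ((Get-FileHash $oldFile).Hash -ne (Get-FileHash $newFile).Hash)
    {
        # Step 6: report the upgrade conflict.
        Write-Output "Possible upgrade conflict: $($entry.Copy)"
    }
}
```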

It doesn’t address the actual upgrade, but it at least tells us which elements must be upgraded. We would then compare the files to see the changes.

I don’t have such a process in place, but I’m thinking about building something like that. I rarely copy existing application elements by myself, but I see a lot of problems in codebases of my clients. There are many duplicated objects that no one maintains at all; they include bugs fixed by Microsoft years ago, they refer to obsolete objects and so on.

If we aren’t able to detect problems caused by our outdated code, the number of issues increases with every update. With a process like this, we could significantly reduce the degradation. I don’t expect to be able to identify all duplication in legacy codebases, but we could still make a big difference if we address at least some (the more important ones) and cover all copies introduced in new code.

Avoiding code duplication in F&O

In the previous post, I explained that duplicating application elements is expensive and we should avoid it whenever possible.

Let me mention a few techniques that you could use.

Obviously, you can create metadata extensions, use Chain of Command for methods, subscribe to events and so on; I’m assuming that you all know that.

What developers sometimes forget is spending enough effort on finding the right place for an extension. For example, they want to extend a private method (which isn’t allowed) and don’t notice that the method calls an extensible method, or that it itself gets called from an extensible method. Or they consider CoC only and disregard modeled events (such as OnUpdating), which are called at a different time than CoC. Don’t forget to look for delegates, SysPlugin endpoints and so on either.

It’s easy to focus on CoC and forget the basics of object-oriented programming. For instance, maybe you need inheritance rather than an extension. Maybe you need a combination of both, e.g. you implement a child class and then extend a factory method (such as construct()) so your new class gets used instead of the standard one. Creativity may be needed to design an extension.

Sometimes you need an object sharing some, but not all, behaviour with an existing one. Then composition may be a better alternative to duplication. For example, let’s say we need a custom data entity similar to a standard entity, but with slightly different behaviour required by a particular integration scenario. Instead of duplicating the entity, you may create a new entity with the standard entity as a data source and override methods of your entity, use different mandatory fields and so on.

Sometimes no change is needed at all. For example, someone in the Community Forum wanted to duplicate an existing entity and remove some fields to exclude them from OData messages. But that doesn’t require a new entity; OData allows you to select the fields you want (e.g. $select=ID,Name). Make sure you explore your options before jumping into code.
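For instance, a hypothetical request against the standard customers entity (the environment URL is a placeholder), returning just two fields:

```
GET https://yourenvironment.operations.dynamics.com/data/CustomersV3?$select=CustomerAccount,OrganizationName
```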

There are surely situations when you can’t extend or reuse existing logic and you can solve the problem by duplicating an element (e.g. an internal method). If the duplication could be avoided by getting the existing element refactored, create an extensibility request and get it fixed. Devote some time to the description of your scenario and the design of the requested change. My experience is that extensibility requests are accepted promptly if you make the case clear. Unfortunately, it still takes a long time before the change gets delivered.

And if everything fails and you must duplicate an existing element, be aware of the risks (discussed in The cost of code duplication) and plan how to mitigate them.