Detection of code upgrade conflicts in F&O

When you overlayered an application element (e.g. a method or a form) in Dynamics AX, a copy was saved in a higher layer and you modified the object there. You therefore ended up with two copies of the same element: the original one in a lower layer (such as SYS) and your modified one in a layer like CUS.

A problem occurred when Microsoft updated the element, either by directly changing its code or by introducing an additional copy in another layer, such as SYP or GLS. Your copy, based on the older version, effectively hid those changes unless you upgraded your code and incorporated them. The upgrade could be quite difficult, especially if developers hadn’t thought about upgrades in advance.

In F&O, you can’t use overlayering anymore, so your changes can’t hide standard code (in most cases). Microsoft gives you a new version, your extensions get applied and everything is fine. The difficult and expensive code upgrade process isn’t needed anymore.

Almost.

There are still breaking changes and new features that you need to take into account. For example, if you have extensions of the Sales order header V2 entity and Microsoft introduces V3, you need to add your extensions to the new version too.

But sometimes we introduce exactly the same problem that we used to have in Dynamics AX. We can’t create a copy by overlayering, but we can still duplicate an element manually (a method, a data entity etc.). For example, we want to use an internal Microsoft class, therefore we duplicate the class in our model and use it from our code. Then Microsoft changes the code, but we’re still using the old version, until we notice the problem and apply the same changes to our copy.

It’s the same problem as in Dynamics AX, but now we’re in an even worse position. In AX, we had tools to detect code conflicts for us (and even to fix some of them automatically), but we don’t have them in F&O.

With layers, we knew that our element was a copy of a standard one. For example, it was clear that our CustTable form in the CUS layer was related to the CustTable form in the SYP layer. There was a tool that could compare two versions of an application, notice that the CustTable form had changed and that we had overlayered it, and therefore that we had an upgrade conflict there. It could tell us how Microsoft changed the form and what changes we made to the older version. Then we had to merge these two sets of changes.

Without layers, duplicating an element means creating a new one with a different name. For example, I could duplicate CustTransEntity and create XYZCustTransEntity. There isn’t anything linking these two entities. Without a careful examination, we can’t say whether XYZCustTransEntity was created as a copy of CustTransEntity or not. Therefore, even knowing which elements we should check is problematic.

Let’s assume for now that we’re able to do it and we have a list of standard elements and our copies.

When upgrading the application, we have to compare the old and the new version to identify changed elements. That’s not particularly difficult: we compare the files, and if they differ, we can also check individual elements inside the XML files. We’re interested just in those objects that we’ve duplicated.

This would allow us to create a report showing which custom elements we need to upgrade (and possibly what has changed in the standard application).

Implementation

I’m not aware of any tools for this purpose; please let me know if you know of any. I don’t have one either, but let me consider how it could be done.

To be able to compare two versions of the application, we simply need to have both sets of files somewhere. For example, we can copy the standard packages to a separate folder, install an application update and compare the folders. It would be nice if Microsoft already provided such a repository. A similar process is needed for ISV solutions as well.
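For instance, a minimal PowerShell sketch (assuming the usual package location on a development VM; both paths are just examples):

# Snapshot the standard packages before installing an application update.
Copy-Item -Path 'K:\AosService\PackagesLocalDirectory' `
          -Destination 'D:\Snapshots\10.0.41' -Recurse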

Comparing the sets of files isn’t difficult. For example, PowerShell offers the Compare-Object cmdlet, which can be used to compare file contents or hashes.
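A rough sketch, building on the snapshot folders above:

# Hash every file in both versions, keyed by its path relative to the root.
function Get-Hashes([string] $root)
{
    Get-ChildItem $root -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            RelativePath = $_.FullName.Substring($root.Length)
            Hash         = (Get-FileHash $_.FullName).Hash
        }
    }
}

$old = Get-Hashes 'D:\Snapshots\10.0.41'
$new = Get-Hashes 'K:\AosService\PackagesLocalDirectory'

# Report files that differ or exist in only one of the versions.
Compare-Object $old $new -Property RelativePath, Hash |
    Sort-Object RelativePath |
    Select-Object RelativePath, SideIndicator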

Knowing that a file has changed is insufficient if we’ve duplicated just something like an individual method. But that can be addressed easily; we just need to extract the particular elements from the XML files and compare their values.
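For example, like this. The sketch assumes the usual structure of class and table metadata files, where each method is a Method element with Name and Source children; the file path is purely illustrative:

# Extract the source code of a single method from a metadata XML file.
function Get-MethodSource([string] $file, [string] $method)
{
    $xml = [xml](Get-Content $file -Raw)
    $xml.SelectSingleNode("//Method[Name='$method']/Source").InnerText
}

$file = 'ApplicationSuite\Foundation\AxTable\SalesLine.xml'  # illustrative path
$oldSource = Get-MethodSource "D:\Snapshots\10.0.41\$file" 'recalculateDatesForDirectDelivery'
$newSource = Get-MethodSource "K:\AosService\PackagesLocalDirectory\$file" 'recalculateDatesForDirectDelivery'

if ($oldSource -cne $newSource)
{
    Write-Output 'The method has changed.'
}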

The key problem I see is the identification of our copies of standard objects. There isn’t any reliable automatic way; the best we could get is information that some objects are very similar, but that doesn’t necessarily mean that one is a duplicate of the other and should be kept in sync. I believe we need to explicitly establish the relation when making a copy. Potentially, Visual Studio could help us with that.

There are many ways to store this information; one of the simplest is an XML file. For instance:

<Copies>
  <Entry
    Orig="dynamics://Table/SalesLine/Method/recalculateDatesForDirectDelivery"
    OrigVersion="10.0.41"
    Copy="dynamics://Class/SalesLineXYZ_Extension/Method/xyzRecalculateDatesForDirectDelivery" />
</Copies>

This is easy to process by tools; the disadvantage I see is that it isn’t automatically maintained with the code. For example, if I decide to change the prefix of the method (and don’t update the file), the link will break. Of course, we can have a process that checks the file and reports invalid references (e.g. during CI builds).
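Such a check could look roughly like this; it assumes the Copies.xml file above, and the XYZ package name is just an example:

# Report entries in Copies.xml whose Copy reference no longer matches
# any method in our own package.
[xml]$copies = Get-Content 'Copies.xml'
$files = Get-ChildItem 'K:\AosService\PackagesLocalDirectory\XYZ' -Recurse -Filter '*.xml'

foreach ($entry in $copies.Copies.Entry)
{
    # The last segment of the dynamics:// URI is the method name.
    $method = ($entry.Copy -split '/')[-1]

    if (-not ($files | Select-String -Pattern "<Name>$method</Name>" -List))
    {
        Write-Warning "Broken reference: $($entry.Copy)"
    }
}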

Another approach could, for instance, involve a special attribute for code changes:

[CodeCopy('dynamics://Table/SalesLine/Method/recalculateDatesForDirectDelivery', '10.0.41')]
internal void xyzRecalculateDatesForDirectDelivery()
{ ... }

This can’t be used everywhere; e.g. you can’t put an attribute on an SSRS report. We have tags there, although putting a lot of information into them would look ugly. If Microsoft got involved, they could create a new property for this purpose.

Let’s keep it simple and consider the XML file for now.

The process of upgrade conflict detection could look like this (a sketch follows the list):

  1. Read an entry from the XML file.
  2. Identify the source element and the source file (e.g. the class file for a class method).
  3. Get both versions of the file: the original version and the current one.
  4. Compare files. If they’re identical, stop processing of this entry.
  5. If the source element is a method, compare just the method element in both files. If they’re identical, end processing.
  6. Report the upgrade conflict.
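
Putting the pieces together, a rough PowerShell sketch of this loop could look as follows. It builds on the assumptions from the earlier snippets (the Copies.xml format and the two snapshot folders); the mapping from a dynamics:// URI to a metadata file is simplified and a real tool would have to make it more robust:

$oldRoot = 'D:\Snapshots\10.0.41'
$newRoot = 'K:\AosService\PackagesLocalDirectory'

# 1. Read the entries from the XML file.
[xml]$copies = Get-Content 'Copies.xml'

foreach ($entry in $copies.Copies.Entry)
{
    # 2. Identify the source element,
    # e.g. dynamics://Table/SalesLine/Method/recalculateDatesForDirectDelivery.
    $parts = ($entry.Orig -replace '^dynamics://') -split '/'
    $type, $element, $method = $parts[0], $parts[1], $parts[3]

    # 3. Get both versions of the source file (a simplified lookup).
    $oldFile = Get-ChildItem $oldRoot -Recurse -Filter "$element.xml" |
        Where-Object FullName -like "*\Ax$type\*" | Select-Object -First 1
    $newFile = Get-ChildItem $newRoot -Recurse -Filter "$element.xml" |
        Where-Object FullName -like "*\Ax$type\*" | Select-Object -First 1

    # 4. If the whole files are identical, stop processing this entry.
    if ((Get-FileHash $oldFile.FullName).Hash -eq (Get-FileHash $newFile.FullName).Hash)
    {
        continue
    }

    # 5. For a method, compare just the method element in both files.
    if ($method)
    {
        $xpath = "//Method[Name='$method']/Source"
        $oldSrc = ([xml](Get-Content $oldFile.FullName -Raw)).SelectSingleNode($xpath).InnerText
        $newSrc = ([xml](Get-Content $newFile.FullName -Raw)).SelectSingleNode($xpath).InnerText
        if ($oldSrc -ceq $newSrc)
        {
            continue
        }
    }

    # 6. Report the upgrade conflict.
    Write-Warning "Upgrade conflict: $($entry.Orig) changed since version $($entry.OrigVersion)"
}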

It doesn’t address the actual upgrade, but it at least tells us which elements must be upgraded. We would then compare the two versions of the file to see the changes.

I don’t have such a process in place, but I’m thinking about building something like that. I rarely copy existing application elements myself, but I see a lot of problems in my clients’ codebases. There are many duplicated objects that no one maintains at all; they include bugs fixed by Microsoft years ago, they refer to obsolete objects and so on.

If we aren’t able to detect problems caused by our outdated code, the number of issues increases with every update. With a process like this, we could significantly reduce the degradation. I don’t expect to be able to identify all duplication in legacy codebases, but we could still make a big difference if we address at least some of it (the more important cases) and cover all copies introduced in new code.

Avoiding code duplication in F&O

In the previous post, I explained that duplicating application elements is expensive and we should avoid it whenever possible.

Let me mention a few techniques that you could use.

Obviously, you can create metadata extensions, use Chain of Command for methods, subscribe to events and so on; I’m assuming that you all know that.

What developers sometimes forget is to spend enough effort on finding the right place for an extension. For example, they want to extend a private method (which isn’t allowed) and don’t notice that the method calls an extensible method, or that it’s itself called from an extensible method. Or they consider CoC only and disregard modeled events (such as OnUpdating), which are called at a different time than CoC. Also don’t forget to look for delegates, SysPlugin endpoints and so on.

It’s easy to focus on CoC and forget the basics of object-oriented programming. For instance, maybe you need inheritance and not an extension. Maybe you need a combination of both, e.g. you implement a child class and then create an extension of a factory method (such as construct()) to get your new class used instead of the standard one. Creativity may be needed to design an extension.

Sometimes you need an object that shares some, but not all, behaviour with an existing one. Then composition may be a better alternative to duplication. For example, let’s say we need a custom data entity similar to a standard entity, but with slightly different behaviour required by a particular integration scenario. Instead of duplicating the entity, you may create a new entity with the standard entity as a data source, override methods of your entity, use different mandatory fields and so on.

Sometimes no change is needed at all. For example, someone in the Community Forum wanted to duplicate an existing entity and remove some fields to exclude them from OData messages. But that doesn’t require a new entity; OData allows you to select the fields you want (e.g. $select=ID,Name). Make sure you explore your options before jumping into code.

There are surely situations when you can’t extend or reuse the existing logic and duplicating an element (e.g. an internal method) is the way to solve the problem. If the duplication could be avoided by getting the existing element refactored, create an extensibility request and get it fixed. Devote some time to describing your scenario and designing the requested change. My experience is that extensibility requests are accepted promptly if you make the case clear. Unfortunately, it still takes a long time before the change gets delivered.

And if everything fails and you must duplicate an existing element, be aware of the risks (discussed in The cost of code duplication) and plan how to mitigate them.

The cost of code duplication

What I want to talk about today is a special case of code duplication: when we take an F&O application element created by some other company (Microsoft, ISVs or so) and create a copy in our model.

Usually, people do it to deal with code that can’t be accessed or extended, e.g. when we want to use the code of an internal class. Duplicating the element and adjusting its code can be done in seconds; it’s much easier than carefully thinking about other solutions, creating extensibility requests and so on.

But it’s expensive in the long term. Everything is fine as long as the original element remains the same, but what if Microsoft fixes a bug, an ISV adds a new feature by extending the standard element or so? You have two options:

1. Identify changes and apply them to your copy

When you receive a new version (of the standard F&O application, of an ISV solution), you’ll find the changes to all elements that you’ve duplicated (and to their extensions) and implement them in your copies too.

Both the identification and the implementation require time and effort, and as with any other development, there is a risk of making a mistake. The lack of tools helping with this scenario is also a big problem.

2. Ignore the situation

Unfortunately, this is the usual approach. You keep using the old logic and hope that it won’t cause any problems. Note that this doesn’t just mean that you’re missing a particular feature. The behavior may be unpredictable, because you may be mixing the old and the new logic. For example, Microsoft changes two methods to fix a bug. One change gets applied, but the other one doesn’t, because you’re using a copy with the old code. What will happen is impossible to say; it may end up with corrupted production data or anything else. The problem gets bigger and bigger over time.

Both approaches are expensive. It takes time to proactively maintain all copies every time you get a new version of models from Microsoft or ISVs. The cost of ignoring the problem is hard to predict: it may be fine, it may cause you some trouble with missing features or old bugs (already fixed in standard code), it may ruin your application update schedule (because you find out too late that your code doesn’t work correctly on the new version) or it may cause something catastrophic. It’s not guaranteed to fail, but it’s a lot of risk.

When you consider duplicating an application element owned by another company, you should think about the cost of maintaining the copy. You should try to avoid the duplication, because then you don’t have to deal with these problems. If you have to duplicate, you should have a maintenance process that you run every time you apply a new version.

If the reason for duplication is that the object can’t be extended or reused in the way you need, create an extensibility request to get the actual problem fixed. If you can’t wait for the fix (it takes a lot of time indeed), duplicate the element but consider it a temporary solution. Add a reminder to re-implement it when it becomes possible.

And if you have to duplicate something, make sure you do it right. For example, a common mistake is duplicating a data entity and keeping the same label, despite getting compiler warnings about it. Only one of the two will then show its label in the list of entities in the GUI.

GitHub Copilot in Visual Studio

Last time, I mentioned using GitHub Copilot for X++ development, but I didn’t realize that not everyone is aware of the option of using GitHub Copilot in Visual Studio. A lot of examples on the internet show it in Visual Studio Code and don’t mention Visual Studio at all, which may be confusing.

To use GitHub Copilot in Visual Studio, you need Visual Studio 2022, ideally version 17.10 or later, because GitHub Copilot is integrated there as one of the optional components of Visual Studio. Because it’s optional, you may need to run the VS installer and add the component. To make it easier for you, VS offers a button to start the installation:

You can also add GitHub Copilot to older versions of VS 2022 as an extension, but simply using a recent version of VS makes more sense to me. You can learn more in Install GitHub Copilot in Visual Studio, if needed.

You’ll also need a GitHub Copilot subscription (there is a one-month trial available) and to be logged in to GitHub in Visual Studio.

Then you’ll start getting code completion suggestions, you can use the Alt+/ shortcut inside the code editor to interact with Copilot, chat with Copilot and so on.

X++ documentation comments by GitHub Copilot

I’ve finally started looking more closely at GitHub Copilot. What it can do with languages like C# is impressive; I also verified that it can help with things like PowerShell scripts or Excel formulas.

But because I’m still primarily an X++ guy, I’m keen to explore how it can help there. So far, I’ve tried asking GitHub Copilot to explain X++ code snippets, which was almost useless in most cases.

But I’ve found another use case where it can save me time: the creation of documentation comments. For example, this is pretty good:

Of course, good comments often contain information that isn’t present in the code itself, such as the business requirement or why we made a particular implementation decision. On the other hand, most methods are quite simple, but it’s a good practice to have documentation comments for them anyway. Writing them isn’t difficult, but it takes time and it’s boring. I don’t complain if GitHub Copilot helps me with that. 🙂