Public preview of the new F&O dev experience

Something that was talked about for a long time is getting closer to reality. F&O development will no longer require those huge VMs with SQL Server and everything. Instead, you’ll just install some VS extensions, connect to Dataverse, download F&O code and metadata, and start developing. The runtime (web server, database) will be in the cloud, which means that running and debugging your changes requires deploying them to an F&O environment in the cloud.

Microsoft calls it a unified experience because it’s going in the direction used by other Dynamics products, it utilizes Power Platform and it provides tighter and easier integration between F&O and Power Platform.

Here is a brief introduction by Peter Villadsen: The Public Preview for the Unified Experience is live! It contains links to the documentation with more details (the documentation is in preview as well).

I’m looking forward to the extra flexibility provided by the new solution. It should also have a positive impact on costs, because development teams won’t need so many powerful VMs (although the details also depend on things like Dataverse pricing). My recommendation is to try it and get familiar with the new approach, but not to hurry with real adoption, because changes are expected before it becomes generally available.

Monitoring and telemetry in F&O

Application Insights is an Azure service for monitoring applications. Many Azure services support it out of the box – you just connect, say, an Azure Function to Application Insights and it automatically starts collecting information about performance, failed requests and so on.

Using the Application Insights API from D365FO is possible and several people have shown custom solutions for it in the past. But now there is also a solution from Microsoft included in F&O out of the box – it’s called Monitoring and telemetry.

You can find quite a few blog posts about the setup, such as this one, therefore I’m not going to duplicate it here.

But if you aren’t familiar with Application Insights / Azure Monitor, let me give you one example of how it can be useful. Support personnel and developers are often interested in the details of exceptions thrown in the application.

If enabled in F&O, Application Insights automatically collects information about such exceptions. You can see an overview of exceptions in a certain period:

You can query the logs for particular exceptions, and you can see a lot of details of an individual exception, including the X++ call stack:

Notice also the actions available to you, such as the option to create a work item (in Azure DevOps or GitHub) or to see all available telemetry for the user session.
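If you prefer writing queries to clicking through the portal, you can get a similar overview yourself. Here is a sketch of such a query – I’m assuming a workspace-based Application Insights resource, where exceptions land in the AppExceptions table (table and column names differ in the classic schema):

AppExceptions
| where TimeGenerated > ago(7d)
| summarize ExceptionCount = count() by ProblemId, ExceptionType
| order by ExceptionCount desc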

Note that you can use it in all types of environments. It’s most important in production, because you can’t debug there, but collecting extra data is useful in development and test environments too.

Custom messages

Some information, such as exceptions and form access, is collected by F&O automatically (if enabled).

But you can also use code to log any information that is important to you – for example, to record that something interesting happened, to provide trace messages with extra context for debugging, or to log custom metrics.

I didn’t particularly like the solution I saw in 10.0.31 and 32, because it didn’t support custom properties, but it has changed. The support was added somewhere between 10.0.32 and 10.0.35.

Let me give you an example. Here I’m adding a trace message from X++, but instead of adding just the message itself, I’m also adding two custom properties:

Map properties = new Map(Types::String, Types::String);
properties.add('Feature', 'My feature 1');
properties.add('MyCorrelationId', '123456');

SysGlobalTelemetry::logTraceWithCustomProperties('Custom message from X++', properties);

When looking into logs, I can see not just the textual message, but also my properties as structured data.

And what is even more important, I can easily use properties in filters and graphs. For example:

AppTraces | where Properties.Feature == 'My feature 1'
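Building a graph on top of a custom property is equally easy. As a sketch (the aggregation and the chart type are just one example of what you might want):

AppTraces
| where TimeGenerated > ago(7d)
| summarize TraceCount = count() by Feature = tostring(Properties.Feature)
| render columnchart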

F&O development with multiple version control workspaces

The usual setup (documented by Microsoft) of version control for F&O is using Team Foundation Version Control and configuring the workspace mapping like this:

$/MyProject/Trunk/Main/Metadata     k:\AosService\PackagesLocalDirectory
$/MyProject/Trunk/Main/Projects     c:\Users\Admin123456789\Documents\Visual Studio 2019\Projects

This means that PackagesLocalDirectory contains both standard packages (such as ApplicationSuite) and custom packages, i.e. your development and ISV solutions you’ve installed.

This works fine if you always want to work with just a single version of your application, but the need to switch to a different branch or a different version is quite common. For example:

  • You merged code to a test branch and want to compile it and test it.
  • You found a bug in production or a test environment and you want to debug code of that version, or even make a fix.
  • As a VAR, you want to develop code for several clients on the same DEV box.

It’s possible to change the workspace to use a different version of code, but it’s a lot of work. For example, let’s say you want to switch from Dev branch to Test to debug something. You have to:

  1. Suspend pending changes
  2. Delete workspace mapping
  3. Ideally, delete all custom files from PackagesLocalDirectory (otherwise you have to later deal with extra files that exist in Dev branch but not Test)
  4. Create a new workspace mapping
  5. Get latest code
  6. Compile custom packages (because source control contains just source code, not runnable binaries)

When you’re done and want to continue with the Dev branch, you need to repeat all the steps again to switch back.

This is very inflexible, time-consuming, and it doesn’t allow you to have multiple sets of pending changes. Therefore I use a different approach – I utilize multiple workspaces and switch between them.

I never put any custom packages into PackagesLocalDirectory. Instead, I create a separate folder for my workspace, e.g. k:\Repo\Dev. Then PackagesLocalDirectory contains only code from Microsoft, the k:\Repo\Dev folder contains code from the Dev branch, k:\Repo\Test code from the Test branch, and so on.

Each workspace has not just its version of the application, but also a list of pending changes. It also contains binaries – I need to compile code once when creating the workspace, but not again when switching between workspaces.

F&O doesn’t have to always use PackagesLocalDirectory and Visual Studio doesn’t have to have projects in the default location (such as Documents\Visual Studio 2019\Projects). The paths can be changed in configuration files. Therefore if I want to use, say, the Test workspace, I tell F&O to take packages from k:\Repo\Test\Metadata and Visual Studio to use k:\Repo\Test\Projects for projects. Doing it manually would be time-consuming and error-prone, therefore I do it with a script (see more about the scripts at the end).
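Just to illustrate what such a script has to do on the F&O side: the AOS reads its metadata location from configuration, so switching essentially means pointing a few configuration values to a different folder. A rough sketch – the config path and the appSettings key names are my assumptions and may differ between versions; Switch-FOWorkspace (see the Scripts section) is the real implementation:

# Rough sketch only: point the AOS metadata/package directories to another workspace.
# The path and key names below are assumptions; use the scripts mentioned at the end
# of this article for a proper implementation.
$webConfig = 'K:\AosService\WebRoot\web.config'
$newMetadataPath = 'K:\Repo\Test\Metadata'

$xml = [xml](Get-Content $webConfig)
foreach ($key in 'Aos.MetadataDirectory', 'Aos.PackageDirectory')
{
    $setting = $xml.configuration.appSettings.add | Where-Object { $_.key -eq $key }
    if ($setting) { $setting.value = $newMetadataPath }
}
$xml.Save($webConfig)

Visual Studio’s default projects location needs a similar change in its own configuration, which is exactly why doing all of this by hand gets old very quickly.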

If my workspace folder contains just custom packages, I can use it for things like code merges, but I can’t run the application or even open Application Explorer, because custom code depends on standard code, which isn’t there. I could copy the standard packages to each workspace folder, but I like the separation; it would also require a lot of time and disk space, and I would have to update each folder after updating the standard application.

For example, I could take k:\AosService\PackagesLocalDirectory\ApplicationPlatform and copy it to k:\Repo\Dev\Metadata\ApplicationPlatform. But I can achieve the same goal by creating a symbolic link to the folder in PackagesLocalDirectory. Of course, no one wants to add 150 symbolic links manually – a simple script can iterate the folders and create a symbolic link for each of them.
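As a minimal sketch of what such a script can do (the real implementation is Add-FOPackageSymLinks, described in the Scripts section below; the paths simply match the example folders used in this article):

$source = 'K:\AosService\PackagesLocalDirectory'
$target = 'K:\Repo\Dev\Metadata'

# Create a symbolic link for every standard package that isn't already present
# in the workspace folder (run from an elevated PowerShell prompt)
Get-ChildItem -Path $source -Directory | ForEach-Object {
    $link = Join-Path $target $_.Name
    if (-not (Test-Path $link))
    {
        New-Item -ItemType SymbolicLink -Path $link -Target $_.FullName | Out-Null
    }
}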

A few more remarks

  • We’re usually interested in the single workspace used by F&O, but note that some actions can be done without switching workspaces. For example, we can use Get Latest to download the latest code from source control (it’s even better if the folder isn’t used by F&O/VS, because no files are locked), merge code and commit changes, or even build the application from the command line.
  • If you use Git, branch switching is easier, but you’ll still likely want to keep standard and custom packages in separate folders.
  • Before applying an application update from Microsoft, it’s better to tell F&O to use PackagesLocalDirectory. If you don’t, and the update brings a new package, it’ll be created in the active workspace folder and the other workspaces won’t see it. You’d then have to identify the problem and move the new package to PackagesLocalDirectory.
  • If a new package is added, you’ll also need to regenerate symbolic links for your workspaces.
  • You can have multiple workspaces for a single branch. For example, I use a separate workspace for code reviews, so I don’t have to suspend the development I’m working on.

Scripts

The scripts are available on GitHub: github.com/goshoom/d365fo-workspace. Use them as a base or inspiration for your own scripts; don’t expect them to cover all possible ways of working.

When you create a new workspace folder, open it in PowerShell (with elevated permissions) and run Add-FOPackageSymLinks. Then you can tell F&O to use this folder by running Switch-FOWorkspace. If you want to see which folder is currently in use, call Get-FOWorkspace.
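Put together, preparing and activating a new workspace might look like this (the paths are just an example; check the module itself for the exact behaviour and parameters):

# In an elevated PowerShell session, with the module from the repository imported
cd K:\Repo\Test            # the new workspace folder
Add-FOPackageSymLinks      # create symbolic links to the standard packages
Switch-FOWorkspace         # tell F&O to use this workspace
Get-FOWorkspace            # check which workspace is currently active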

You can also see documentation comments, and the actual implementation, inside D365FOWorkspace.psm1.

Acceptance Test Library

Acceptance Test Library (ATL) in F&O isn’t a new feature, but many people aren’t aware of it, therefore let me try to raise awareness a bit.

ATL is used in automated tests written by developers and its purpose is to easily create test data and verify results.

Here is an example of such a test:

// Create the data root node
var data = AtlDataRootNode::construct();

// Navigation shortcuts used in the rest of the example; I assume they come
// from the data root node (see "Navigation concepts" in the ATL documentation)
var invent = data.invent();
var items = invent.items();
var onHand = invent.onHand();

// Get a reference to a well-known warehouse
var warehouse = data.invent().warehouses().default();

// Create a new item with the "default" setup using the item creator class. Adjust the default warehouse before saving the item.
var item = items.defaultBuilder().setDefaultWarehouse(warehouse).create();

// Add on-hand (information about availability of the item in the warehouse) by using the on-hand adjustment command.
onHand.adjust().forItem(item).forInventDims([warehouse]).setQty(100).execute();

// Create a sales order with one line using the sales order entity
var salesOrder = data.sales().salesOrders().createDefault();
var salesLine = salesOrder.addLine().setItem(item).setQuantity(10).save();

// Reserve 3 units of the item using the reserve() command that is exposed directly on the sales line entity
salesLine.reserve().setQty(3).execute();

// Verify inventory transactions that are associated with the sales line using the inventoryTransactions query and specifications
salesLine.inventoryTransactions().assertExpectedLines(
    invent.trans().spec().withStatusIssue(StatusIssue::OnOrder).withInventDims([warehouse]).withQty(-7),
    invent.trans().spec().withStatusIssue(StatusIssue::ReservPhysical).withInventDims([warehouse]).withQty(-3));

These few lines do a lot of things – they create an item and ensure that it has quantity on hand, create a sales order, run quantity reservation and so on. At the end, they verify that the expected set of inventory transactions has been created – the test will fail if more or fewer lines are created, or if they don’t have the expected field values. Writing code for that without ATL would require a lot of work.

AX/F&O has a framework for unit tests (SysTest) and that’s where you’ll use the Acceptance Test Library – you’ll just create acceptance tests rather than unit tests. Unit tests should test just a single code unit, be very fast and so on, which isn’t the case with ATL, but ATL has other benefits. It allows you to test complete processes and it may be used for testing code that wasn’t written with unit testing in mind (which is basically all X++ code…). The disadvantages are slower execution, more things (unrelated to what you’re testing) that can break, more difficult identification of the cause of a test failure, and so on.

If you’ve never seen the SysTest framework, a simple test class may look like this:

public class MyTest extends SysTestCase
{
    [SysTestMethod]
    public void demo()
    {
        int calculationResult = 1 + 2;
        this.assertEquals(3, calculationResult);
    }
}

ATL adds special assertion methods such as assertExpectedLines(), but you can utilize the usual assertions of the SysTest framework (such as assertEquals()) as well.

You write the code in test classes and then execute them in Test Explorer, where you can see the results and easily navigate to a particular test or start debugging it.

You can learn more about ATL in the documentation, but let me share my real-world experience and a few tips.

Development time

These tests surely require time to write, especially if you’re new to them. Usually the first test for a given use case takes a lot of time, while adding more tests is much easier, because they’re just variations of the same thing.

It’s not just about what the test does, but you also need to set up the system correctly, which typically isn’t trivial.

Like any other code, test code may contain bugs, and debugging them will take time.

Isolation and performance

A great feature of SysTest is data isolation. When you run a test, a new partition is created and your tests run there, therefore they can’t be broken by wrong existing data (including data from previous tests), nor can they destroy any data you use for manual testing.

But it means that there is no data at all (unless you give up this isolation) and you must prepare everything inside your test case. Of course, the Acceptance Test Library is there to help you. On the other hand, it’s easy to forget some important setup.

Creating the partition and setting up test data takes time, therefore running these tests takes a few minutes. That’s a bit annoying when you have a single test, but the more tests you have, the more time you save.

Number sequences

One of the things you typically need to set up is number sequences. Fortunately, there is a surprisingly easy solution: decorate your test case class with the SysTestCaseAutomaticNumberSequences attribute and the system will create number sequences as needed.
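For illustration, here is a minimal sketch of what that looks like (the class and method names are made up; the sales order creation is the same call used in the ATL example above):

[SysTestCaseAutomaticNumberSequences]
public class MySalesOrderAtlTest extends SysTestCase
{
    [SysTestMethod]
    public void createsSalesOrder()
    {
        var data = AtlDataRootNode::construct();

        // Thanks to the attribute, the number sequence behind the sales order ID
        // is created on demand instead of the test failing in the empty partition
        data.sales().salesOrders().createDefault();
    }
}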

Code samples

F&O comes with a model called Acceptance Test Library – Sample Tests, where you’ll find a few tests that you can review and execute. Seeing what complete test cases may look like is very useful for learning.

Documentation

Documentation exists: Acceptance test library resources.

You don’t need to read the whole thing to use ATL, but it’s very beneficial if you familiarize yourself with things like Navigation concepts.

You’ll need to go a bit deeper if you decide to create ATL classes for your own entities, or for those in the standard application that aren’t covered well by Microsoft. For example, I added ATL classes for trade agreements, because we made significant changes to pricing and utilizing ATL there was beneficial.

From another point of view, tests also work as a kind of living documentation. Not only do I document my own code by showing others how it’s supposed to be called and what behaviour we expect, but I also sometimes look at ATL to see how Microsoft performs certain actions that I need in my real code.

Models and pipelines

You can’t put tests into the same module as your normal code. You’ll need references to ATL modules (Acceptance Test Library Foundation, at least), which aren’t available in Tier 2+ environments, therefore you’ll have to configure your build pipeline not to add your test module to deployable packages.

Feeling safe

It’s not specific to tests with ATL, but a great thing about automated tests in general is the level of certainty that my recent changes didn’t break anything. Without automated tests, you either have to spend a lot of time on manual testing (and hope that all tests were executed and interpreted correctly), or you just hope for the best…

DynamicsMinds conference speakers

I’ve just checked the list of sessions proposed for the DynamicsMinds conference (22–24 May 2023, Slovenia), where I’ll also have a few, and recognized many familiar names. It’ll be great not only to listen to their sessions, but also to finally meet them again in person.

The list is long, but to mention at least some names, there’re going to be several fellow MVPs (such as André Arnaud de Calavon, Paul Heisterkamp or Adrià Ariste Santacreu), ex-MVPs now working for Microsoft (but we still love them :)) like Rachel Profitt, Ludwig Reinhard and Tommy Skaue, the author of d365fo.tools Mötz Jensen, my former colleague Laze Janev and many more.

This is gonna be big.