Happy New Year 2024

Happy New Year! (To everyone who follows the Gregorian calendar :)).

Let me look a bit at what I did in 2023 and what I expect in 2024.

I keep working for a large end-user company. I do a lot of different things there (X++, Azure Functions, code reviews, PowerShell, DevOps processes and so on), but most of it is focused on that single company and isn't something to share with the community.

A new Dynamics community site was launched. It’s developed in a different way, which should make its development more agile. On the other hand, there are still quite a few bugs and missing features and their resolution has been slow so far. But progress is being made, so hopefully we’ll have a much better site at the end of 2024.

This year, the unified development experience for F&O will become generally available. I still haven't had a chance to play with it much, which is a pity; I'm keen to, and hopefully that will change soon. Also, changes in storage pricing are expected in the near future, which is an important topic for the new development/admin experience.

We can expect more and more convergence and integration of F&O and Power Platform. I made just a few Power Apps and flows in 2023; let's see whether there will be more in 2024.

In March, I'm going to attend the Microsoft MVP Summit in Redmond. It's a conference for MVPs and Microsoft employees, where MVPs have a chance to meet each other and the Microsoft product teams, learn about upcoming features, provide feedback and so on. The social part is important; there are people I would never have met in person if I hadn't attended MVP Summits. There were no summits during the Covid years and I skipped last year's, so it'll be my first summit since 2019.

In May, I'm going to the DynamicsMinds conference in Slovenia. I gave a few talks there last year and it was a great event, so I'm happy to come again. I hope to see some familiar faces there!

Query/QueryRun with temporary tables (AX/F&O)

I noticed that some developers believe that Query* classes can’t be used to query temporary tables. It is possible; it just requires an extra step that isn’t needed with regular tables.

When working with temporary tables, each buffer (variable) of the same table can refer to a different set of temporary data. Therefore using the right buffer (reference to a particular data set) is crucial. This is true for select statements in code, form data sources, and for Query* classes as well.

If you want to query temporary tables with Query* classes, you define the query (with classes like Query and QueryBuildDataSource) in exactly the same way as for regular tables. The place where you must pass references to the temporary data sets is an instance of the QueryRun class, namely its setCursor() (or setRecord()) method.

If the query uses several temporary tables, simply call setCursor() several times – the system will find the data source for the given table. The method also has an extra parameter (_occurrence) for the case when you have multiple data sources for the same table.
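For instance, if a query had two data sources over the same temporary table, the calls might look like this (just a sketch – map1 and map2 are hypothetical buffers holding two different data sets, and I'm assuming _occurrence is the 1-based position of the data source for that table):

qr.setCursor(map1, 1); // buffer for the first data source
qr.setCursor(map2, 2); // buffer for the second data source over the same table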

Here is a complete example using standard tables, so anyone can simply copy and run it. It shows all the steps – inserting data into temporary tables, creating a query, passing the temporary buffers to a QueryRun object, running the query and showing the returned records. It uses F&O syntax, but the overall approach is the same in Dynamics AX too.

// Fill two temporary buffers with sample data
TmpTableName name;
 
name.RefTableId = 1;
name.TableName = 'TableA';
name.insert();
 
TmpTableIdMap map;
map.MainTableId = 1;
map.MainFieldId = 42;
map.insert();
 
// Build the query exactly as for regular tables
Query query = new Query();
QueryBuildDataSource nameDs = query.addDataSource(tableNum(TmpTableName));
 
QueryBuildDataSource mapDs = nameDs.addDataSource(tableNum(TmpTableIdMap));
mapDs.addLink(fieldNum(TmpTableName, RefTableId), fieldNum(TmpTableIdMap, MainTableId));
 
// Pass the temporary buffers (data sets) to the QueryRun object
QueryRun qr = new QueryRun(query);
qr.setCursor(name);
qr.setCursor(map);
 
// Run the query and show the returned records
while (qr.next())
{
    TmpTableName nameFetched = qr.get(tableNum(TmpTableName));
    TmpTableIdMap mapFetched = qr.get(tableNum(TmpTableIdMap));
 
    info(strFmt('%1 - %2', nameFetched.TableName, mapFetched.MainFieldId));
}

Public preview of the new F&O dev experience

Something that was talked about for a long time is getting closer to reality. F&O development will no longer require those huge VMs with SQL Server and everything. Instead, you'll just install some VS extensions, connect to Dataverse, download F&O code and metadata and start developing. The runtime (web server, database) will be in the cloud, which means that running and debugging your changes requires deploying them to an F&O environment in the cloud.

Microsoft calls it a unified experience because it’s going in the direction used by other Dynamics products, it utilizes Power Platform and it provides tighter and easier integration between F&O and Power Platform.

Here is a brief introduction by Peter Villadsen: The Public Preview for the Unified Experience is live!. It contains links to the documentation with more details (which is also in preview).

I'm looking forward to the extra flexibility provided by the new solution. It should also have a positive impact on costs, because development teams won't need so many powerful VMs (although the details also depend on things like Dataverse pricing). My recommendation is to try it and get familiar with the new approach, but not to hurry with real adoption, because changes are expected before it becomes generally available.

Monitoring and telemetry in F&O

Application Insights is an Azure service for monitoring applications. Many Azure services support it out of the box – you just connect, say, an Azure Function to Application Insights and it'll automatically start collecting information about performance, failed requests and so on.

Using the Application Insights API from D365FO is possible, and several people have shown custom solutions for it in the past. But now there is also a solution from Microsoft included in F&O out of the box – it's called Monitoring and telemetry.

You can find quite a few blog posts about the setup, such as this one, therefore I’m not going to duplicate it here.

But if you aren't familiar with Application Insights / Azure Monitor, let me give you one example of how it can be useful. Support personnel and developers are often interested in the details of exceptions thrown in the application.

If enabled in F&O, Application Insights automatically collects information about such exceptions. You can see an overview of exceptions in a certain period:

You can query the logs for particular exceptions, and you can see a lot of details of an individual exception, including the X++ call stack:

Notice also the actions available to you, such as the option to create a work item (in Azure DevOps or GitHub) or to see all available telemetry for the user session.
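If you prefer writing queries to clicking through the portal, the same exception data can be pulled with a short log query. Here is a sketch – I'm assuming the workspace-based Application Insights schema (the AppExceptions table and its ProblemId column), so verify the names against your own workspace:

// X++ exceptions from the last 24 hours, grouped by problem
AppExceptions
| where TimeGenerated > ago(24h)
| summarize ExceptionCount = count() by ProblemId
| order by ExceptionCount desc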

Note that you can use it in all types of environments. It’s most important in production, because you can’t debug there, but collecting extra data is useful in development and test environments too.

Custom messages

Some information, such as exceptions and form access, is collected by F&O automatically (if enabled).

But you can also use code to log any information that is important to you – for example, to record that something interesting happened, to provide trace messages with extra context for debugging, or to log custom metrics.

I didn't particularly like the solution I saw in 10.0.31 and 10.0.32, because it didn't support custom properties, but that has changed. The support was added somewhere between 10.0.32 and 10.0.35.

Let me give you an example. Here I’m adding a trace message from X++, but instead of adding just the message itself, I’m also adding two custom properties:

// Custom properties are passed as a map of string keys and string values
Map properties = new Map(Types::String, Types::String);
properties.insert('Feature', 'My feature 1');
properties.insert('MyCorrelationId', '123456');
 
SysGlobalTelemetry::logTraceWithCustomProperties('Custom message from X++', properties);

When looking into logs, I can see not just the textual message, but also my properties as structured data.

And what is even more important, I can easily use properties in filters and graphs. For example:

AppTraces | where Properties.Feature == 'My feature 1'
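And a sketch of the "graphs" part – counting traces per feature per day and rendering the result as a chart (with the same assumption about the workspace-based AppTraces schema):

AppTraces
| where TimeGenerated > ago(7d)
| summarize Traces = count() by Feature = tostring(Properties.Feature), bin(TimeGenerated, 1d)
| render columnchart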

F&O development with multiple version control workspaces

The usual setup (documented by Microsoft) of version control for F&O is using Team Foundation Version Control and configuring the workspace mapping like this:

$/MyProject/Trunk/Main/Metadata    k:\AosService\PackagesLocalDirectory
$/MyProject/Trunk/Main/Projects    c:\Users\Admin123456789\Documents\Visual Studio 2019\Projects

This means that PackagesLocalDirectory contains both standard packages (such as ApplicationSuite) and custom packages, i.e. your own development and any ISV solutions you've installed.

This works fine if you always want to work with just a single version of your application, but the need to switch to a different branch or a different version is quite common. For example:

  • You merged code to a test branch and want to compile it and test it.
  • You found a bug in production or a test environment and you want to debug code of that version, or even make a fix.
  • As a VAR, you want to develop code for several clients on the same DEV box.

It’s possible to change the workspace to use a different version of code, but it’s a lot of work. For example, let’s say you want to switch from Dev branch to Test to debug something. You have to:

  1. Suspend pending changes
  2. Delete workspace mapping
  3. Ideally, delete all custom files from PackagesLocalDirectory (otherwise you later have to deal with extra files that exist in the Dev branch but not in Test)
  4. Create a new workspace mapping
  5. Get latest code
  6. Compile custom packages (because source control contains just source code, not runnable binaries)

When you’re done and want to continue with the Dev branch, you need to repeat all the steps again to switch back.

This is very inflexible, time-consuming, and it doesn’t allow you to have multiple sets of pending changes. Therefore I use a different approach – I utilize multiple workspaces and switch between them.

I never put any custom packages into PackagesLocalDirectory. Instead, I create a separate folder for each workspace, e.g. k:\Repo\Dev. Then PackagesLocalDirectory contains only code from Microsoft, the k:\Repo\Dev folder contains code from the Dev branch, k:\Repo\Test the code from the Test branch and so on.
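The resulting layout looks roughly like this (the Metadata and Projects subfolders correspond to the workspace mapping shown earlier; the paths are just examples):

k:\AosService\PackagesLocalDirectory    <- standard Microsoft packages only
k:\Repo\Dev\Metadata                    <- custom packages, Dev branch
k:\Repo\Dev\Projects                    <- VS projects, Dev branch
k:\Repo\Test\Metadata                   <- custom packages, Test branch
k:\Repo\Test\Projects                   <- VS projects, Test branch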

Each workspace has not just its own version of the application, but also its own list of pending changes. It also contains binaries – I need to compile the code once when creating the workspace, but not again when switching between workspaces.

F&O doesn't always have to use PackagesLocalDirectory, and Visual Studio doesn't have to keep projects in the default location (such as Documents\Visual Studio 2019\Projects). The paths can be changed in configuration files. Therefore if I want to use, say, the Test workspace, I tell F&O to take packages from k:\Repo\Test\Metadata and Visual Studio to use k:\Repo\Test\Projects for projects. Doing it manually would be time-consuming and error-prone, so I do it with a script (see more about the scripts at the end).
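To illustrate what gets reconfigured: on the classic development VMs, the metadata location used by the AOS is defined in its web.config. Treat the file location and key names below as my assumption (they can differ between versions) and check your own environment before changing anything by hand:

<!-- K:\AosService\WebRoot\web.config (sketch) -->
<add key="Aos.MetadataDirectory" value="k:\Repo\Test\Metadata" />
<add key="Aos.PackageDirectory" value="k:\Repo\Test\Metadata" />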

If my workspace folder contains just custom packages, I can use it for things like code merges, but I can't run the application or even open Application Explorer, because custom code depends on standard code, which isn't there. I could copy the standard packages to each workspace folder, but I like the separation; it would also require a lot of time and disk space, and I would have to update each folder after every update of the standard application.

For example, I could take k:\AosService\PackagesLocalDirectory\ApplicationPlatform and copy it to k:\Repo\Dev\Metadata\ApplicationPlatform. But I can achieve the same goal by creating a symbolic link to the folder in PackagesLocalDirectory. Of course, no one wants to add 150 symbolic links manually – a simple script can iterate the folders and create a symbolic link for each of them.
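Just to illustrate the idea, here is a simplified sketch in PowerShell (not the actual implementation from the scripts linked below; the paths are examples):

# Create a symbolic link in the workspace for every standard package folder
$standard  = 'K:\AosService\PackagesLocalDirectory'
$workspace = 'K:\Repo\Dev\Metadata'

Get-ChildItem $standard -Directory | ForEach-Object {
    $link = Join-Path $workspace $_.Name
    if (!(Test-Path $link))
    {
        New-Item -ItemType SymbolicLink -Path $link -Target $_.FullName | Out-Null
    }
}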

A few more remarks

  • We're usually interested in the single workspace used by F&O, but note that some actions can be done without switching workspaces. For example, we can use Get Latest to download the latest code from source control (it's even better if the folder isn't used by F&O/VS, because no files are locked), merge code and commit changes, or even build the application from the command line.
  • If you use Git, branch switching is easier, but you’ll still likely want to keep standard and custom packages in separate folders.
  • Before applying an application update from Microsoft, it's better to tell F&O to use PackagesLocalDirectory. If you don't, and there is a new package, it'll be created in the active workspace and other workspaces won't see it. You'd have to identify the problem and move the new package to PackagesLocalDirectory.
  • If a new package is added, you’ll also need to regenerate symbolic links for your workspaces.
  • You can have multiple workspaces for a single branch. For example, I use a separate workspace for code reviews, so I don’t have to suspend the development I’m working on.

Scripts

The scripts are available on GitHub: github.com/goshoom/d365fo-workspace. Use them as a base or inspiration for your own scripts; don’t expect them to cover all possible ways of working.

When you create a new workspace folder, open it in PowerShell (with elevated permissions) and run Add-FOPackageSymLinks. Then you can tell F&O to use this folder by running Switch-FOWorkspace. If you want to see which folder is currently in use, call Get-FOWorkspace.
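In other words, a typical session might look something like this (a sketch – I'm assuming here that the functions operate on the current folder and need no parameters):

Import-Module k:\Scripts\D365FOWorkspace.psm1   # the module path is just an example
cd k:\Repo\Dev                                  # the new workspace folder
Add-FOPackageSymLinks                           # create symbolic links to the standard packages
Switch-FOWorkspace                              # point F&O and Visual Studio to this workspace
Get-FOWorkspace                                 # check which workspace is currently in use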

You can also see documentation comments, and the actual implementation, inside D365FOWorkspace.psm1.