Your license will expire in N days

You probably all have seen this when opening the classic/development environment.

And you probably all know how to fix it.

But the last time I got this, the message kept coming back. I almost tore my hair out (luckily I can't)!

I had uploaded the license and closed and reopened the development environment, but the message kept coming back. When I checked the license (Tools => License), I saw that it was the correct license, with an expiry date next year. What was going on?


And no, I wasn't getting confused by the American date format. I know the European date format is bad, but the American one is even worse. Couldn't we all agree on the more logical format of YYYY/MM/DD (I don't care about the separator)? After all, we write "one hundred twenty-three" as 123 and not as 321 (European date format) or, even worse, 231 (American date format).


After checking again and again, closing the development environment, stopping all NAV services on the database I was using and then restarting them, I still got that message…

So I started thinking: it seems NAV decides to show the message BEFORE getting the license from the master database.

But why and from where?

Hey, wait a minute! NAV wouldn't still take the license file from the subdirectory where finsql.exe resides? Would it?

I checked it out and found it still did!

Someone (don't ask me why!) had renamed the cronus.flf in the subdirectory and put our development license there as fin.flf. And that license was close to its expiry date. I renamed it back to cronus.flf and reopened the client (just checking…). Still the message. So when opening the development environment, NAV searches first for fin.flf and then for cronus.flf. Removing both files blocks the development environment from opening (cronus.flf not found!). Doh!

Putting the cronus.flf back fixed the problem completely.

Now, this logic was ok when the native database still existed. After all, you could open it without a database server.

For the clean team:

But now that the native database is dead, why should NAV still check if there is a license in the finsql.exe subdirectory? If there is none, just open the development environment.
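Reconstructing from this experiment, the startup check seems to boil down to something like this (pseudocode, my interpretation, not Microsoft's actual code):

```
// What finsql.exe appears to do at startup (my reconstruction):
IF FILE.EXISTS(ClientFolder + '\fin.flf') THEN
  StartupLicense := ClientFolder + '\fin.flf'
ELSE IF FILE.EXISTS(ClientFolder + '\cronus.flf') THEN
  StartupLicense := ClientFolder + '\cronus.flf'
ELSE
  ERROR('cronus.flf not found'); // no file at all: the client refuses to open

// The expiry warning is shown based on this file, BEFORE the license
// stored in the master database is loaded.
```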

Shadowing another user's session in W2012

If a user has a problem and wants to show you their screen, there are a lot of possibilities: Skype, TeamViewer, …

But if you are on the same server or domain, it is even easier: you can (or could…) use the Remote Desktop Services Manager. If the user's session is on another server (in the same domain!), first add that server to it. Then find the user's session in the list, right-click it and select "Remote Control". Define a key combination to stop viewing the session (my favourite is CTRL+Space).

BUT with Windows 2012 this is not possible anymore. So we can't do it at all? Yes, we still can, but it got more complicated: it has become a 2-step process using good old-style DOS commands instead of point-and-click.

First we need to know the user's session ID.

Open a DOS prompt and type one of these (the first if the user's session is on the same server, the second if the user is on another server):

query user

query user /server:[the servername]

Now note down the user's session ID. We need it for the next step.

Next, we run the remote desktop client (mstsc.exe) from DOS with some special parameters.

mstsc /v:[the servername] /shadow:[the session id] /control

This will open a remote desktop session, but NOT a new one: it will shadow the user's session after the user has confirmed he wants to share his screen.


Field “Version List” in the Object-Table

Starting with NAV2013, Microsoft increased the size of the "Version List" field, after having been hammered for years to do so. But that is not the subject of this topic.

I wanted to check how long the field was (bad memory!), so I checked the NAV virtual Field table and found something strange (actually 2 things).

First, the size was not 250 characters but only 248. Why? I don't know. Well, 2 characters out of 250 won't make the difference anyway.

BUT what drew my attention was the data type of the field: It wasn’t “Text” like I expected but “OemText”!

I had never seen this datatype before (ahem… remember the bad memory!). So I checked whether I could create a field in a new table using this datatype, but I couldn't: it doesn't show up in the list.

Good. Why do they use "OemText" and not "Text"? What is the difference? There must be one. So let's check the table in SSMS to see how it shows up there.

What are the differences between a “OemText” and “Text” in SQL Server?

Well, we know that starting with NAV2013, text-fields are Unicode. That means the nvarchar-datatype in SQL.

But "OemText" is NOT using nvarchar but varchar in SQL! So the version list is NOT Unicode!

Oh, and the fields "Company Name", "Name" and "Locked By" in the Object table are also "OemText".

Deleting a User record deletes "User Property" and "Access Control" without using C/AL triggers

… and inserting a 2000000120:"User" record will create a 2000000121:"User Property" record.

I started with the moral of the story. Now I will tell the story.

I had to write an export/import to export/import user info. Exporting went fine but I had problems importing the data.

The "User Security ID" as primary key instead of the "Windows Security ID" was NOT the problem! It is only annoying not being able to use the primary key to connect the tables; that just makes it more fun.

The problem was the error I got on the T2000000121:”User Property” table. The record already existed.

First I thought I was so 'lucky' to hit a non-unique GUID (1 chance in a few, so I was already thinking of playing the lotto [chances are way higher to win the lotto than to hit a non-unique GUID!]).

Checking that out, I noticed it was not the case, and I noticed that records WERE being created in 2000000121:"User Property"! But I wasn't even using INSERT(TRUE)… And there is no code in the triggers of table 2000000120:"User" either…

That means that some hidden and unreachable code behind the table triggers of table 2000000120:"User" keeps tables 2000000121:"User Property" and 2000000053:"Access Control" up to date.
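A small sketch to illustrate (variable names are mine; User, UserProperty and AccessControl are record variables for tables 2000000120, 2000000121 and 2000000053):

```
// Insert a User record WITHOUT running C/AL triggers...
User.INIT;
User."User Security ID" := CREATEGUID;
User."User Name" := 'TESTUSER';
User.INSERT(FALSE);

// ...yet a matching "User Property" record has appeared:
UserProperty.GET(User."User Security ID"); // succeeds

// Delete the User record, again WITHOUT C/AL triggers...
User.DELETE(FALSE);

// ...and the related records are gone too:
IF NOT UserProperty.GET(User."User Security ID") THEN
  MESSAGE('"User Property" was deleted behind the scenes');
```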

Parallel processing in Dynamics NAV

Parallel processing. When we hear those words, we think of C# to make that magic happen.

But can we use C/AL for that kind of magic?

No? Think again!

YES! And without .NET tricks! Using pure C/AL!

But do we need it? Well, most of the time we don't need it at all. After all, NAV is an ERP system, and ERP systems do short and (hopefully) fast transactions, so no, we don't need parallel processing for those.

So when would we need it in NAV?

Sometimes we need a report that needs to read a lot of data and does a lot of processing on it.

An example:

You have a report that:

  1. Needs to read all G/L Entries and do some calculations on it like totalising them.
  2. Also needs to read all Item Ledger Entries and do some calculations on it like totalising them.
  3. Also needs to read all Value Entries and do some calculations on it like totalising them.

Let's suppose that each of them takes 10 minutes to read and process. That makes 30 minutes of processing, while the service tier is doing almost nothing because only 1 session does the work. This means that session is CPU-core-bound. And SQL Server is also doing almost nothing, because it is waiting for the NAV service tier to consume the data it wants to send.

So wouldn’t it be nice if we could run those 3 parts in parallel?

It might not be a good idea to do this during the busiest hours, because you might shift the slow-factor from purely CPU-core-bound to NAV-server-memory-bound, SQL Server-capabilities-bound or plain CPU-bound (having all cores of all CPUs of the NAV server at 100%). That would slow down all other operations.

How does it work?

We can do that with the STARTSESSION-command. But won’t we lose time saving the calculated data to disk so the master session can read it? Yes, we would. So we need a better way.

So how can we pass the data from the background session to the master session (or the other way around)?

When you run STARTSESSION, you can pass a record, just like when you call any codeunit and pass a record to it.

But did you know that the record can be a temptable? And that you can pass data from the master session to the background session and also back?

You can run as many background sessions as you want (well, I did a small test starting sessions that looped and slept most of the time: I started 200 without any problems, but probably only because they weren't doing anything).

So basically what it does is this:

Start all your sessions and save the session ID that NAV gives back in a temptable.

Your main session needs to wait until all other sessions have finished. You do that by checking the "Active Session" table until none of the started sessions exist in it anymore.

Now your main session will have all data in the temporary tables. So it can do its final processing like putting all the data together.
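Put together, the pattern looks like this (a sketch; the codeunit names, the SessionBuffer temptable and the SLEEP interval are mine):

```
// Start one background session per workload, each filling its own temptable.
// The temptables are passed by reference, so the background sessions write
// directly into the master session's temporary data.
STARTSESSION(SessionId, CODEUNIT::"Sum G/L Entries", COMPANYNAME, TempGLEntryBuf);
SessionBuffer."Session ID" := SessionId;
SessionBuffer.INSERT;

STARTSESSION(SessionId, CODEUNIT::"Sum Item Ledger Entries", COMPANYNAME, TempItemLedgBuf);
SessionBuffer."Session ID" := SessionId;
SessionBuffer.INSERT;

STARTSESSION(SessionId, CODEUNIT::"Sum Value Entries", COMPANYNAME, TempValueEntryBuf);
SessionBuffer."Session ID" := SessionId;
SessionBuffer.INSERT;

// Wait until none of the started sessions appear in "Active Session" anymore.
REPEAT
  SLEEP(1000);
  AllFinished := TRUE;
  IF SessionBuffer.FINDSET THEN
    REPEAT
      IF ActiveSession.GET(SERVICEINSTANCEID, SessionBuffer."Session ID") THEN
        AllFinished := FALSE;
    UNTIL SessionBuffer.NEXT = 0;
UNTIL AllFinished;

// All temptables are now filled: do the final processing here.
```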

A small warning:

I tried having only 1 temptable and sharing it across all background sessions, and it effectively IS the same temptable in all of them. But when I started testing the performance, I noticed that temporary records got lost somewhere in cyberspace, and I could also crash the NAV service tier quite easily.

Probably this is because of the implementation of temptables in NAV: it does not support different processes writing to the same temptable structure. Maybe Microsoft will fix that in a future version, but I doubt it.

PARALLEL PROCESSING 20160410.txt (had to rename it to .doc; WordPress doesn't seem to support .txt…)

Record parameter in a codeunit is NOT mandatory

I was busy writing some code for a blogpost (not this one) and while testing it, I ran the wrong codeunit and got an error I didn’t expect. Then I noticed that the codeunit I ran was the wrong one.

Immediately after that it hit me that the error was really weird, because it was generated by a statement in my C/AL code that expected a value; that value should have been passed via the record parameter of the codeunit.

Running it directly from the development environment, I don't pass any parameter, so I should have received this error (which I have seen so many times in my life; I generated this one on NAV2009R2 classic with a codeunit named "Do I need a record-parameter?" 🙂 ):


Microsoft Dynamics NAV Classic

The codeunit Do I need a record-parameter? must be called with a Item Ledger Entry record.

So I tried the debugger, just to be sure, and indeed the code ran until it hit the error caused by the missing parameter.

So back to NAV2009R2 to be sure of the error it should generate. Indeed I got the error.

Back to NAV2013R2. I ran the codeunit that receives a parameter and it didn't give an error but just ran the code (only a MESSAGE('Ok'); in it…). But it ran!

Tried also on NAV2016 and also there it ran without error.

Tried also on NAV2013 and also there it ran without error.

Didn’t test NAV2015 but I suppose it will work there too.


Last test to be sure: checking my bedroom to see if I wasn't still sleeping and dreaming. Nope. I am awake!



Starting with NAV2013, a codeunit that has a parameter in the OnRun-method can also be run WITHOUT giving the parameter! The parameter-record will be empty when entering the OnRun-method.
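A minimal sketch (the codeunit and its messages are mine): since the record arrives empty, the OnRun trigger can simply test for it.

```
// OnRun trigger of a codeunit with TableNo = "Item Ledger Entry".
// From NAV2013 on, running the codeunit without a parameter no longer
// errors out; the record is simply empty.
OnRun(VAR Rec : Record "Item Ledger Entry")
BEGIN
  IF Rec."Entry No." = 0 THEN
    MESSAGE('Run without a parameter: the record is empty.')
  ELSE
    MESSAGE('Run with entry no. %1.', Rec."Entry No.");
END;
```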

Is it still useful to buffer records in a temptable?

Maybe some of you remember my how-to about increasing performance by reducing database reads.

I wrote it in 2007, when we still had the native DB next to the SQL DB, but NOT the service tier with all its performance extras.

Basically it improves the reading speed of this kind of code:






IF SalesLine.FINDSET THEN
  REPEAT
    ItemREC.GET(SalesLine."No.");
    // ... do something with the item ...
  UNTIL SalesLine.NEXT = 0;

Meaning: for each SalesLine record you go back to the DB and ask it again for the item, even if you have already received that same item thousands of times. You should NOT harass the database like that with those puny requests. It has better things to do!

But now, in 2016, with NAV2016 out in the wild (I should have tested this when NAV2013 came out, but… no comment please…), I was wondering whether my trick was still valid.

Considering that all code now runs on the service tier instead of on the client, and the service tier has its own cache shared between all its connected clients, I thought the trick might have become useless. Maybe it even hinders performance because of the extra overhead. So (finally) time to put it to the test.

So I created a codeunit to run different tests and ran it multiple times. The following is a typical output (for my machine and data [not a CRONUS DB]):

Reread worst-case:0.651 seconds

Reread random-case:0.634 seconds

Reread best-case:0.589 seconds

Construct-TT worst-case:4.857 seconds

Construct-TT random-case:3.108 seconds

Construct-TT best-case:0.317 seconds

- "Reread" means I always do an ItemREC.GET(), even if the record variable already contains the correct item record.

- "Construct-TT" means I have a GetItem function that constructs a temptable on the fly with the items it had to get from the database, so that on the next read I can just take it from the temptable (or, in the best case, it is already in the variable).

- "Worst-case" means that the item record I need is always different from the one I last read.

- "Best-case" means that most of the time the item record I last read is the same as the one I need now.

- "Random-case" means I do some pseudo-random access with a high number of cases where the last read item record is the same as the one I need now.
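For reference, my Construct-TT GetItem looks roughly like this (a sketch; Item is a normal record variable, TempItem a temporary one used as buffer):

```
PROCEDURE GetItem(ItemNo : Code[20]) : Boolean
BEGIN
  // Best case: the buffer variable already holds the right item.
  IF TempItem."No." = ItemNo THEN
    EXIT(TRUE);
  // Second best: it is already in the temptable.
  IF TempItem.GET(ItemNo) THEN
    EXIT(TRUE);
  // Otherwise: one real database read, then buffer it for next time.
  IF NOT Item.GET(ItemNo) THEN
    EXIT(FALSE);
  TempItem := Item;
  TempItem.INSERT;
  EXIT(TRUE);
END;
```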

So what do these numbers tell us?

In the reread scenarios, performance doesn't seem to change a lot. This is what I expected: once a record has been read, a reread does not result in a query to SQL Server; the record is taken from the service tier cache (or maybe the C# code behind it sees that it is already in the variable and does not even go to the cache; if not, it might be an extra optimization for Microsoft to implement). So this is good!

In the Construct-TT scenarios we see big differences: the worst-case and the random-case perform (a lot) worse than the reread scenarios, so there the trick now costs more than it saves. Only the best-case beats the reread scenarios, and then only by about half.


CONCLUSION: While looping over your sales lines, just do a GET of your item for each sales line. The only improvement you might introduce is { IF ItemREC."No." <> SalesLineREC."No." THEN ItemREC.GET(SalesLineREC."No."); }. You might get somewhat better performance than with just { ItemREC.GET(SalesLineREC."No."); }. I didn't check it, but I don't think the difference will be big.