Your license will expire in N days

You probably all have seen this when opening the classic/development environment.

And you probably all know how to fix it.

But last time I got this, I kept having the message. Almost tore my hair out (luckily I can’t)!

I had uploaded the license and closed and reopened the development environment, but the message kept coming back. When I checked the license (Tools=>License), I saw that it was the correct license, with an expiry date next year. What was going on?


And no, I wasn’t getting confused by the American date format. I know the European date format is bad, but the American one is even worse. Couldn’t we all agree on the more logical format of YYYY/MM/DD (I don’t care about the separator)? After all, we write “one hundred twenty-three” as 123 and not as 321 (European date format) or, even worse, 231 (American date format).


After checking again and again, closing the development environment, stopping all NAV services on the database I was using and restarting them, I still got that message…

So I was thinking: it seems NAV decides to show the message BEFORE getting the license from the master database.

But why and from where?

Hey, wait a minute! NAV wouldn’t still take the license file from the subdirectory where finsql.exe resides? Would it?

I checked it out and found it still did!

Someone (don’t ask me why!) had renamed the cronus.flf in the subdirectory and had put our development license there as fin.flf. And that license was close to its expiry date. I renamed the renamed file back to cronus.flf and reopened the client (just checking…). Still the message. So when opening the development environment, NAV searches first for fin.flf and then for cronus.flf. Removing both blocks the development environment from opening (“cronus.flf not found!”). Doh!

Putting the cronus.flf back (and leaving fin.flf out) fixed the problem completely.

Now, this logic was ok when the native database still existed. After all, you could open it without a database server.

For the clean team:

But now that the native database is dead, why should NAV still check whether there is a license in the finsql.exe subdirectory? If there is none, just open the development environment.


Shadowing another user’s session in W2012

If a user has a problem and wants to show you their screen, there are a lot of possibilities: Skype, TeamViewer, …

But if you are on the same server or domain, it is even easier: you can (or could…) use Remote Desktop Services Manager for that. Add the server to it if the user’s session is on another server (same domain!), then find the user’s session in the list, right-click it and select “Remote Control”. Define a key combination to stop viewing the session (my favourite is CTRL+Space).

BUT with Windows 2012 this is not possible anymore. So we can’t do it at all anymore? Yes, we still can, but it has become more complicated: a 2-step process using good old-style DOS commands instead of point-and-click.

First we need to find the user’s session ID:

Open a DOS prompt and type one of these (the first if the user’s session is on the same server, the second if the user is on another server):

query user

query user /server:[the server name]

Now note down the user’s session ID. We need it for the next step.

We will run the remote desktop client (mstsc.exe) from the DOS prompt with some different parameters.

mstsc /v:[the servername] /shadow:[the session id] /control

This will open a remote desktop session, but it will NOT be a new one: it will shadow the user’s session after the user has confirmed that he wants to share his screen.


Field “Version List” in the Object-Table

Starting with NAV2013, Microsoft increased the size of the “Version List” field after having been hammered for years to do so, but that is not the subject of this topic.

I wanted to check how long the field was (bad memory!), so I checked the NAV virtual Field table and found something strange (actually 2 things).

First, the size was not 250 characters but only 248. Why? I don’t know. Well, 2 characters out of 250 won’t make a difference anyway.

BUT what drew my attention was the data type of the field: It wasn’t “Text” like I expected but “OemText”!

I had never seen this datatype (ahem… remember the bad memory!). So I checked whether I could create a field in a new table using this datatype, but I couldn’t. It didn’t show up in the list.

Good. Why do they use “OemText” and not “Text”? What is the difference? There must be one. So let’s check the table in SSMS to see how it shows up.

What are the differences between a “OemText” and “Text” in SQL Server?

Well, we know that starting with NAV2013, text-fields are Unicode. That means the nvarchar-datatype in SQL.

But “OemText” does NOT use nvarchar but varchar in SQL! So the version list is NOT Unicode!

Oh, and the fields “Company Name”, “Name” and “Locked By” in the Object table are also “OemText”.

Deleting a User record deletes “User Property” and “Access Control” without using C/AL triggers

… and inserting a 2000000120:”User” record will create a 2000000121:”User Property” record.

I started with the moral of the story. Now I will tell the story.

I had to write an export/import of user information. Exporting went fine, but I had problems importing the data.

The “User Security ID” as primary key instead of the “Windows Security ID” was NOT the problem! It is only annoying not being able to use the primary key to connect the tables; that just makes it more fun.

The problem was the error I got on the 2000000121:”User Property” table: the record already existed.

First I thought I had been so ‘lucky’ to hit a non-unique GUID (1 chance in a few, so I was already thinking of playing the lotto [your chances of winning the lotto are way higher than of hitting a non-unique GUID!]).

Checking that out, I noticed it was not the case, and that records were being created in 2000000121:”User Property”! But I wasn’t even using INSERT(TRUE)… And there is no code in the triggers of table 2000000120:”User” either…

That means that some hidden and unreachable code behind the table triggers of table 2000000120:”User” keeps tables 2000000121:”User Property” and 2000000053:”Access Control” up to date.

Parallel processing in Dynamics NAV

Parallel processing. When we hear those words, we think of C# to make that magic happen.

But can we use C/AL for that kind of magic?

No? Think again!

YES! And without .NET tricks! Using pure C/AL!

But do we need it? Well, most of the time we don’t need it at all. After all, NAV is an ERP, and an ERP does short and (hopefully) fast transactions, so no, we don’t need parallel processing for those.

So when would we need it in NAV?

Sometimes we need a report that needs to read a lot of data and does a lot of processing on it.

An example:

You have a report that:

  1. Needs to read all G/L Entries and do some calculations on it like totalising them.
  2. Also needs to read all Item Ledger Entries and do some calculations on it like totalising them.
  3. Also needs to read all Value Entries and do some calculations on it like totalising them.

Let’s suppose that each of them takes 10 minutes to read and process. That makes 30 minutes of processing. And the service tier is doing almost nothing, because only 1 session does the work; that session is CPU-core-bound. SQL Server is also doing almost nothing, because it is waiting for the NAV service tier to consume the data it wants to send.

So wouldn’t it be nice if we could run those 3 parts in parallel?

It might not be a good idea to do that during the busiest hours, because you might shift the bottleneck from purely CPU-core-bound to NAV-server-memory-bound, SQL-Server-capability-bound or plain CPU-bound (all cores of all CPUs of the NAV server at 100%). This would slow down all other operations.

How does it work?

We can do that with the STARTSESSION command. But won’t we lose time saving the calculated data to disk so that the master session can read it? Yes, we would. So we need a better way.

So how can we pass the data from the background session to the master session (or the other way around)?

When you run STARTSESSION, you can pass a record, just like calling any codeunit and passing a record to it.

But did you know that the record can be a temptable? And that this way you can pass data from the master session to the background session and also back?

You can run as many background sessions as you want (well, I did a small test starting sessions that ran loops and slept most of the time, and I started 200 without any problems; but probably only because they weren’t doing anything).

So basically what it does is this:

Start all your sessions and save the session IDs that NAV gives back in a temptable.

Your main session needs to wait until all the other sessions have finished. You do that by polling the “Active Session” table until none of the started sessions exists in it anymore.

Now your main session will have all data in the temporary tables. So it can do its final processing like putting all the data together.
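The pattern above, as a minimal C/AL sketch (the codeunit name, the “Session Buffer” temptable and all variable names are my own assumptions, not standard objects):

```
// Master session: start one background session per ledger, passing a
// temporary record that the background codeunit fills with its totals.
STARTSESSION(NewSessionId,CODEUNIT::"Totalise G/L Entries",COMPANYNAME,TmpGLTotals);
TmpSessionBuf."Session ID" := NewSessionId;
TmpSessionBuf.INSERT;
// ... same for the Item Ledger Entry and Value Entry codeunits ...

// Wait until none of the started sessions is active anymore.
REPEAT
  SLEEP(1000);
  AllDone := TRUE;
  IF TmpSessionBuf.FINDSET THEN
    REPEAT
      IF ActiveSession.GET(SERVICEINSTANCEID,TmpSessionBuf."Session ID") THEN
        AllDone := FALSE;
    UNTIL TmpSessionBuf.NEXT = 0;
UNTIL AllDone;
// TmpGLTotals (and the other temptables) now contain the calculated data.
```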

A small warning:

I did try having only 1 temptable and using the same temptable in all background sessions, and it effectively is the same temptable in all of them. But when I started testing the performance, I noticed that temporary records got lost somewhere in cyberspace, and I could also crash the NAV service tier quite easily.

Probably this is because of the implementation of temptables in NAV: it does not support different processes writing to the same temptable structure. Maybe Microsoft will fix that in a future version, but I doubt it.

PARALLEL PROCESSING 20160410.txt (had to rename it to .doc. WordPress doesn’t seem to support .txt……..)

Record parameter in a codeunit is NOT mandatory

I was busy writing some code for a blog post (not this one), and while testing it I ran the wrong codeunit and got an error I didn’t expect. Only then did I notice that the codeunit I had run was the wrong one.

Immediately after that, it hit me that the error was really weird, because it was generated by a statement in my C/AL code that expected a value. That value should have been passed in via the record parameter of the codeunit.

Running it directly from the development environment, I don’t pass any parameter, so I should have received this error (which I have seen so many times in my life; I generated this one on NAV2009R2 classic with a codeunit named “Do I need a record-parameter?” 🙂):


Microsoft Dynamics NAV Classic


The codeunit Do I need a record-parameter? must be called with a Item Ledger Entry record.





So I tried the debugger just to be sure, and indeed the code was running until it hit the error caused by the missing parameter.

So back to NAV2009R2 to be sure of the error it should generate. Indeed I got the error.

Back to NAV2013R2. I ran the codeunit that expects a parameter, and it didn’t give an error: it just ran the code (only a MESSAGE(‘Ok’); in it…). But it ran!

I also tried it on NAV2016, and there too it ran without error.

The same on NAV2013: no error.

I didn’t test NAV2015, but I suppose it works there too.


Last test to be sure: checking my bedroom to see if I wasn’t still sleeping and dreaming. Nope. I am awake!



Starting with NAV2013, a codeunit that has a parameter in the OnRun-method can also be run WITHOUT giving the parameter! The parameter-record will be empty when entering the OnRun-method.
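A sketch of the kind of codeunit I used to test this (the OnRun signature and message are just my example):

```
// Codeunit "Do I need a record-parameter?" with the OnRun signature
// OnRun(VAR Rec : Record "Item Ledger Entry")
OnRun(VAR Rec : Record "Item Ledger Entry")
BEGIN
  // NAV2009R2 classic: running this without a record throws the error above.
  // NAV2013+: Rec is simply empty and the code just runs.
  MESSAGE('Ok');
END;
```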

Is it still useful to buffer records in a temptable?

Maybe some of you remember my how-to on increasing performance by reducing DB reads.

I wrote it in 2007, when we still had the native DB alongside the SQL DB, but NOT the service tier with all its performance extras.

Basically, it improves the reading speed of this kind of code:

IF SalesLine.FINDSET THEN
  REPEAT
    // get the item for every single sales line,
    // even if it is the same item as on the previous line
    ItemREC.GET(SalesLine."No.");
    // ... do something with the item ...
  UNTIL SalesLine.NEXT = 0;

Meaning: for each SalesLine record, you go back to the DB and ask it again for the item, even if you have already received that same item thousands of times. You should NOT harass the database with those puny requests. It has better things to do!

But now, in 2016, with NAV2016 out in the wild (I should have tested this when NAV2013 came out, but… no comment please…), I was wondering whether my trick was still valid.

Considering that all code now runs on the service tier instead of on the client, and that the service tier has its own cache shared between all its connected clients, I thought the trick might have become useless. Maybe it even hinders performance because of the extra overhead. So (finally) time to put it to the test.

So I created a codeunit to run the different tests and ran it multiple times. The following is a typical output (for my machine and data [not a CRONUS DB]):

Reread worst-case:0.651 seconds

Reread random-case:0.634 seconds

Reread best-case:0.589 seconds

Construct-TT worst-case:4.857 seconds

Construct-TT random-case:3.108 seconds

Construct-TT best-case:0.317 seconds

-“Reread” means I always do an ItemREC.GET(), even if the record variable already contains the correct item record.

-“Construct-TT” means I have a GetItem function that constructs a temptable on the fly with the items it has had to get from the database, so that on the next read I can take the item from the temptable (or, in the best case, it is already in the variable).

-“Worst-case” means that the item record I last read is always different from the one I need now.

-“Best-case” means that most of the time the item record I last read is the same as the one I need now.

-“Random-case” means I do some pseudo-random access with a high number of cases where the last-read item record is the same as the one I need now.
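For reference, the Construct-TT GetItem function is roughly this (a sketch; the variable names are mine):

```
// TmpItem is a temporary Item record; Item is a normal Item record.
PROCEDURE GetItem(ItemNo : Code[20]);
BEGIN
  IF TmpItem."No." = ItemNo THEN
    EXIT; // best case: it is already in the variable
  IF NOT TmpItem.GET(ItemNo) THEN BEGIN
    Item.GET(ItemNo); // only now do we hit the database (or its cache)
    TmpItem := Item;
    TmpItem.INSERT;   // buffer it for the next call
  END;
END;
```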

So what do these numbers tell us?

In the reread scenarios, it seems the performance doesn’t change a lot. This is what I expected, because once a record has been read, a reread does not result in a query to SQL Server: the record is taken from the service-tier cache (or maybe the C# code behind it sees that it is already in the variable and does not even get it from the cache; if that is not the case, it might be an extra optimization for Microsoft to implement). So this is good!

In the Construct-TT scenarios we see big differences. The worst-case and the random-case now perform (a lot) worse than on the classic client, and a lot worse than the reread scenarios. The best-case is better than the reread scenarios, but only by about half.


CONCLUSION: While looping over your sales lines, just do a GET of your item for each sales line. The only improvement you might introduce is { IF ItemREC.”No.” <> SalesLineREC.”No.” THEN ItemREC.GET(SalesLineREC.”No.”); }. You might get somewhat better performance than with just { ItemREC.GET(SalesLineREC.”No.”); }. I didn’t check it, but I don’t think it will be much.

Debug a needle in a haystack

You probably know the problem: you have to find where some field is updated in the application, and you haven’t the slightest idea where the crime is happening. This can easily happen with big add-ons that you don’t know. And if things really go bad, there might be multiple places where the table (and the incriminating field) is updated, but you don’t know where; and there are also a lot of places where the table and field are merely referenced, so searching for “Record xxxx” takes long.

My old trick was to rename the field to something definitely unique in the database, like “Incriminating Field”, export all objects as text, and search for it. This could take quite long, especially if you have to filter out objects for which you don’t have read permissions: you get that error only when NAV tries to export the object, generally after exporting all the rest.

Starting with NAV2013, there is another option: using the debugger in a smart way. No, that doesn’t mean doing step-by-step debugging until you find the offending code.

This is the way I do it:

First, enable the change log and enable logging of insert/modify for the incriminating table in which the incriminating field is hiding, and do it ONLY FOR THAT TABLE!

Restart the client sessions to activate the logging. (NAV runs the function GetDatabaseTableTriggerSetup() in codeunit 1 only once per table to check whether it needs to run the other change-log triggers; this function is executed the first time an insert/modify/delete/rename is run.)

Put a breakpoint in codeunit 423:”Change Log Management”, function LogModification (or LogInsertion).

Let the debugger get to the function.

Put a watch on “RecRef.Fields.YourField” and “xRecRef.Fields.YourField”. When the debugger breaks there, you can easily see the old and new values of the field and decide whether it is the change you were looking for. In the call stack you can see which objects and functions have been called and where the value was changed.

You might also put some code in the function to put the record in a typed variable and then set a conditional breakpoint, but this means you have to change the code, which I prefer to avoid.
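If you do go the code route anyway, the temporary code could look something like this (the table, field and value are hypothetical; remove the code when you are done):

```
// Temporarily added at the top of COD423.LogModification:
IF RecRef.NUMBER = DATABASE::Customer THEN BEGIN
  RecRef.SETTABLE(Customer);
  IF Customer.Name = 'Suspect value' THEN
    MESSAGE('Gotcha'); // put your breakpoint on this line
END;
```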

Really big temptable in PRE-NAV2013 and NAV2013+

I have to admit I had some doubts about the title. I first thought about “An attempt was made to move the file pointer before the beginning of the file.”, but that would somewhat exclude NAV2013+ (I explain this in the second part of this blog post). And I was more interested in what would happen in NAV2013+, with a 64-bit service tier created from scratch.

So this blog post has two subtopics: one about PRE-NAV2013 and another about NAV2013+.

  • PRE-NAV2013 (tested with NAV2009R2)

If you have a really big temptable, at a certain point you might hit this error:


Microsoft Dynamics NAV Classic


The operating system returned the error (131):

An attempt was made to move the file pointer before the beginning of the file.


Why? What does this mean?

First of all: a temptable starts in memory, and when it gets a bit bigger, it is written to a file. How big is “a bit bigger”? I don’t know, but it is not important here.

PRE-NAV2013 uses a 4-byte signed integer to keep track of the size of that file. Why not a BigInteger? That is an inheritance from the old FAT file system and DOS: a file was limited to a maximum size of 2 GB, and a 4-byte signed integer goes from −2,147,483,648 to 2,147,483,647, so a perfect match.

But with the later FAT implementations and NTFS, we don’t have that 2 GB file limit anymore. Yet it is still there in PRE-NAV2013.

So when NAV is writing to the file and at a certain point gets over the 2 GB size, the integer used to track the size becomes negative, generating the above error (and crashing the native client, but that is not important here [well… kind of…]).

To reproduce the problem, I created a table with an integer as primary key and 4 Text250 fields. The following code, which I put in a codeunit, fills up the temptable until it hits the error. I used the same code in NAV2013+.


t99915.t1 := PADSTR('',250,'x');
t99915.t2 := t99915.t1;
t99915.t3 := t99915.t1;
t99915.t4 := t99915.t1;
FOR i := 1 TO 5000000 DO BEGIN
  IF i MOD 1000 = 0 THEN
    dia.UPDATE(1,i);
  t99915.id := i; // the integer primary key (field name assumed)
  t99915.INSERT;
END;





How can you avoid this error? There are a few things you can try:

-If you are using a standard table, you can create your own table tailored to your needs instead. It is not necessary to put it in the license if you use the table only as a temptable. Put only the fields you need in it, with only the sizes you need, and limit SIFT fields and extra keys as much as possible.

-Divide the records over multiple temptables (an array of a temptable does not help, because every element points to the same temptable, not to different ones). Each temptable has its own file in the operating system, so you get multiple times 2 GB of data.
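A sketch of that second workaround, spreading the inserts over (here) 3 separate temptable variables of the same table (the field name id is my assumption):

```
// TmpA, TmpB and TmpC are three separate temporary record variables
// of the same table; each one gets its own backing file, so each one
// has its own 2 GB limit.
CASE i MOD 3 OF
  0: BEGIN TmpA.id := i; TmpA.INSERT; END;
  1: BEGIN TmpB.id := i; TmpB.INSERT; END;
  2: BEGIN TmpC.id := i; TmpC.INSERT; END;
END;
```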

I tested this on a classic client and not on the 2009R2 RTC, but I think it has the same problem there.

  • NAV2013+

Does NAV2013+ still have this problem? The short answer is no. I could finish the blog post here, but I want to point out some details of what is going on.

-The first good surprise is that it is a lot faster than the older versions, because it does not write the data to disk but keeps it in memory.

-The next good thing is that, as expected, I don’t get the error anymore, because it is all in memory now.

-But because everything is in memory, the service tier is gobbling up your precious memory, so you might run into out-of-memory issues. On the positive side, I noticed that the memory structure of a NAV2013+ temptable is more memory-efficient than the file structure of a PRE-NAV2013 temptable, meaning that a 2 GB file structure in PRE-NAV2013 does NOT take 2 GB of memory in NAV2013+. I didn’t measure it scientifically, but my guess is that it takes around 600 MB to 1000 MB of memory instead of 2 GB.

One other interesting observation: running the codeunit multiple times, memory increases to a few GB, but then suddenly memory is released and usage drops below 1 GB. Looks like garbage collection kicking in.

Timestamp field in SQL and NAV2016

In SQL Server you have the timestamp field. Each NAV table has it, but it is not shown in NAV. In NAV2016 you have the possibility to expose it, as you probably already found out.

Important to know is that the timestamp field does NOT contain a time (talk about a bad name for a field…). It is just a versioning ID. Each time a record is inserted or updated, the value is increased, but not necessarily by 1 for the same record: I have noticed that a given value “x” is used only once in the whole database! This means there is at most 1 record in the whole database with a certain timestamp value (if you update the record, it gets a new timestamp value and the old value disappears from the database).

So when you need to know which records were changed, get the records with a timestamp larger than the largest one you found last time (you will need to save that value somewhere).
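In C/AL that could look something like this sketch (in NAV2016 you expose the SQL timestamp as a BigInteger field; I call that field “Record Version” here, and the name and setup are my assumptions):

```
// Get everything that changed since the last run.
Customer.SETFILTER("Record Version",'>%1',LastVersion);
IF Customer.FINDSET THEN
  REPEAT
    // process the changed record ...
    IF Customer."Record Version" > NewLastVersion THEN
      NewLastVersion := Customer."Record Version";
  UNTIL Customer.NEXT = 0;
// save NewLastVersion somewhere for the next run
```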

I did a small test:

I created a key on the timestamp field (I gave the field a different name, not “timestamp”), and that was not a problem.

Without a key on the field, SQL Server has to scan the whole table to find the records you are searching for.