Field “Version List” in the Object-Table

Starting with NAV2013, Microsoft increased the size of the “Version List” field, after having been hammered for years to do so, but that is not the point of this post.

I wanted to check how long the field is now (bad memory!), so I checked the NAV virtual Field table and found something strange (actually two things).

First, the size is not 250 characters but only 248. Why? I don’t know. Well, 2 characters out of 250 won’t make much of a difference anyway.

BUT what drew my attention was the data type of the field: It wasn’t “Text” like I expected but “OemText”!

I had never seen this datatype (ahem … remember the bad memory!). So I checked whether I could create a field in a new table using this datatype, but I couldn’t: it doesn’t show up in the list.

Good. So why do they use “OemText” and not “Text”? What is the difference? There must be one. So let’s check the table in SSMS to see how it shows up there.

What is the difference between “OemText” and “Text” in SQL Server?

Well, we know that starting with NAV2013, text fields are Unicode. That means the nvarchar datatype in SQL.

But “OemText” does NOT use nvarchar, it uses varchar in SQL! So the “Version List” is NOT Unicode!

Oh, and the fields “Company Name”, “Name” and “Locked By” in the Object table are also “OemText”.


Parallel processing in Dynamics NAV

Parallel processing. When we hear that term, we think of C# to make that magic happen.

But can we use C/AL for that kind of magic?

No? Think again!

YES! And without .NET tricks! Using pure C/AL!

But do we need it? Well, most of the time we don’t need it at all. After all, it is an ERP, and an ERP does short and (hopefully) fast transactions, so no, we don’t need parallel processing for those.

So when would we need it in NAV?

Sometimes we need a report that has to read a lot of data and do a lot of processing on it.

An example:

You have a report that:

  1. Needs to read all G/L Entries and do some calculations on them, like totalising them (a minimal sketch of such a part follows this list).
  2. Also needs to read all Item Ledger Entries and do some calculations on them, like totalising them.
  3. Also needs to read all Value Entries and do some calculations on them, like totalising them.
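
To make this concrete, here is a minimal sketch of what one such part could look like when it runs sequentially inside the report. The variable names are hypothetical and the “processing” is reduced to adding up the Amount field:

  // GLEntry : Record "G/L Entry"; TotalAmount : Decimal
  TotalAmount := 0;
  IF GLEntry.FINDSET THEN
    REPEAT
      // ... whatever per-entry processing the report really needs ...
      TotalAmount += GLEntry.Amount;
    UNTIL GLEntry.NEXT = 0;

The Item Ledger Entry and Value Entry parts look just the same, only on their own tables.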

Let’s suppose that each of them takes 10 minutes to read and process. That makes 30 minutes of processing in total. Meanwhile the service tier is doing almost nothing, because only 1 session does the work, and that session is bound to a single CPU core. And SQL Server is also doing almost nothing, because it is waiting for the NAV service tier to consume the data it wants to send.

So wouldn’t it be nice if we could run those 3 parts in parallel?

It might not be a good idea to do that during the busiest hours, because you might shift the bottleneck from a single CPU core to NAV server memory, to SQL Server capacity, or to overall CPU (driving all cores of all CPUs of the NAV server to 100%). This would slow down all other operations.

How does it work?

We can do that with the STARTSESSION-command. But won’t we lose time saving the calculated data to disk so the master session can read it? Yes, we would. So we need a better way.

So how can we pass the data from the background session to the master session (or the other way around)?

When you run STARTSESSION, you can pass a record to the codeunit it runs, just like when you call any other codeunit and pass a record to it.

But did you know that the record can be a temptable? And that you can pass data from the master session to the background session and also back?
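
Here is a sketch of how the background side of that could look. The codeunit number, the name “Totalise G/L Entries” and the buffer table “Totals Buffer” (with fields “Account No.” and Amount) are hypothetical; the real point is that the codeunit’s TableNo property points to the buffer table, so the Rec it receives is the temporary record handed over by STARTSESSION, and whatever it inserts there is what the master session reads back afterwards:

  // Hypothetical codeunit 50001 "Totalise G/L Entries",
  // TableNo property = "Totals Buffer" (a custom buffer table).
  OnRun()
  VAR
    GLEntry : Record "G/L Entry";
  BEGIN
    // Rec is the TEMPORARY buffer record passed in by the master session.
    IF GLEntry.FINDSET THEN
      REPEAT
        IF NOT Rec.GET(GLEntry."G/L Account No.") THEN BEGIN
          Rec.INIT;
          Rec."Account No." := GLEntry."G/L Account No.";
          Rec.Amount := 0;
          Rec.INSERT;
        END;
        Rec.Amount += GLEntry.Amount;
        Rec.MODIFY;
      UNTIL GLEntry.NEXT = 0;
  END;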

You can run as many background sessions as you want (well, I did a small test starting sessions that just looped and slept most of the time; I started 200 of them without any problems, but probably only because they weren’t doing anything).

So basically what it does is this:

Start all your sessions and save the session ID that NAV gives back in a temptable.
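
A sketch of that first step, reusing the hypothetical codeunit and buffer from above and using the virtual Integer table as a simple temptable to remember the session IDs:

  // SessionId : Integer;
  // TempSession : TEMPORARY Record Integer (virtual table 2000000026);
  // TempGLTotals : TEMPORARY Record "Totals Buffer" (hypothetical buffer).
  IF NOT STARTSESSION(SessionId, CODEUNIT::"Totalise G/L Entries", COMPANYNAME, TempGLTotals) THEN
    ERROR('Could not start the background session.');
  TempSession.Number := SessionId;
  TempSession.INSERT;
  // ... repeat for the Item Ledger Entry and Value Entry codeunits,
  // each with its own temporary buffer record ...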

Your main session needs to wait until all other sessions have finished. You do that by checking the “Active Session” table until none of the started sessions exists in it anymore.
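
A sketch of that waiting loop, with ActiveSession a Record “Active Session” variable, AllDone a Boolean and TempSession the temporary Integer buffer from the previous step:

  REPEAT
    SLEEP(1000);  // check once per second instead of hammering the server
    AllDone := TRUE;
    IF TempSession.FINDSET THEN
      REPEAT
        IF ActiveSession.GET(SERVICEINSTANCEID, TempSession.Number) THEN
          AllDone := FALSE;  // this background session is still running
      UNTIL TempSession.NEXT = 0;
  UNTIL AllDone;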

Now your main session will have all data in the temporary tables. So it can do its final processing like putting all the data together.

A small warning:

I did try using only 1 temptable and sharing that same temptable across all background sessions, and it effectively is the same temptable in all of them. But when I started testing the performance, I noticed that temporary records got lost somewhere in cyberspace, and I could also crash the NAV service tier quite easily.

This is probably because of the implementation of temptables in NAV: it does not support different sessions writing to the same temptable structure at the same time. Maybe Microsoft will fix that in a future version, but I doubt it.

PARALLEL PROCESSING 20160410.txt (had to rename it to .doc. WordPress doesn’t seem to support .txt……..)

Debug a needle in a haystack

You probably know the problem: you have to find where some field is updated in the application, and you haven’t the slightest idea where the crime is happening. This can easily happen with big add-ons that you don’t know. And if things really go bad, there might be multiple places where the table (and the incriminating field) is updated, but you don’t know where, while there are also a lot of places where the table and field are merely referenced, so searching for “Record xxxx” takes a long time.

My old trick was to rename the field to something that was definitely unique in the database, like “Incriminating Field”, export all objects as text and search for it. This could take quite long, especially if you have to filter out objects for which you don’t have read permissions: you only get that error when NAV tries to export them, generally after it has exported all the rest.

Now, starting with NAV2013, there is another option: using the debugger in a smart way. No, that doesn’t mean doing step-by-step debugging until you find the offending code.

This is the way I do it:

First, enable the change log and turn on logging of insert/modify for the incriminating table in which the incriminating field is hiding, and do it ONLY FOR THAT TABLE!

Restart the client sessions to enable the logging (NAV runs the codeunit 1 function GetDatabaseTableTriggerSetup only once per table to check whether it needs to run the other database triggers or not; this function is executed the first time an Insert/Modify/Delete/Rename is done on the table).

Put a breakpoint in codeunit 423 “Change Log Management”, in the function LogModification (or LogInsertion).

Let the debugger get to the function.

Put a watch on “RecRef.Fields.YourField” and “xRecRef.Fields.YourField”. When the debugger breaks there, you can easily see the old and new values of the field and decide whether it is the change you were looking for. In the call stack you can see which objects and functions have been called and where the value was changed.

You might also put some code in the function to copy the record into a variable and then put a conditional breakpoint on it, but that means you have to change the code, which I prefer to avoid.
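
For completeness, a sketch of what such a temporary change could look like, assuming (as the watches above suggest) that both RecRef and xRecRef are available in the function. The table and field placeholders, FldRef/xFldRef (FieldRef variables) and Dummy (an Integer) are all hypothetical:

  // Temporary code at the top of LogModification in codeunit 423.
  IF RecRef.NUMBER = DATABASE::"Incriminating Table" THEN BEGIN
    FldRef := RecRef.FIELD(IncriminatingFieldNo);
    xFldRef := xRecRef.FIELD(IncriminatingFieldNo);
    IF FORMAT(FldRef.VALUE) <> FORMAT(xFldRef.VALUE) THEN
      Dummy := Dummy;  // put the (conditional) breakpoint on this line
  END;

Remember to remove it again when you are done.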