Tag Archives: NAV

Your license will expire in N days

You probably all have seen this when opening the classic/development environment.

And you probably all know how to fix it.

But last time I got this, I kept having the message. Almost tore my hair out (luckily I can’t)!

I had uploaded the license and closed and reopened the development environment, but the message kept coming back. When I checked the license (Tools=>License), I saw that it was the correct license, with an expiry date next year. What was going on?

<RANTING>

And no, I wasn’t getting confused by the American date format. I know the European date format is bad. But the American one is even worse. Couldn’t we all agree on the more logical format of YYYY/MM/DD (I don’t care about the separator)? After all, we write “one hundred twenty three” as 123 and not as 321 (European date format) or even worse 231 (American date format).

</RANTING>

After checking again and again, closing the development environment, and stopping and restarting all NAV services on the database I was using, I still got that message…

So, I was thinking: it seems NAV decides to show the message BEFORE getting the license from the master-db.

But why and from where?

Hey. Wait a minute! NAV wouldn’t still take the license-file from the subdir where the finsql.exe resides? Would it?

Checking it out, I found it still did!

Someone (don’t ask me why!) had renamed the cronus.flf in the subdir and put our development license there as fin.flf. And that license was close to its expiry date. I renamed the file back to cronus.flf and reopened the client (just checking…). Still the message. So when opening the development environment, NAV first searches for fin.flf and then for cronus.flf. Removing them both blocks the development environment from opening (cronus.flf not found!). Doh!

Putting the cronus.flf back (with the fin.flf still removed) fixed the problem completely.

Now, this logic was ok when the native database still existed. After all, you could open it without a database server.

For the clean team:

But now that the native database is dead, why should NAV still check if there is a license in the finsql.exe subdirectory? If there is none, just open the development environment.

Field “Version List” in the Object-Table

Starting with NAV2013, Microsoft increased the size of the “Version List” field, after having been hammered for years to do so, but that is not the subject of this post.

I wanted to check how long the field was (bad memory!) so I checked the NAV virtual Field-table and found something strange (actually 2 things).

First, the size was not 250 characters but only 248. Why? I don’t know. Well, 2 characters out of 250 won’t make a difference anyway.

BUT what drew my attention was the data type of the field: it wasn’t “Text” as I expected but “OemText”!

I had never seen this datatype before (ahum … http://dynamicsuser.net/nav/b/mark_brummel/posts/nav-2013-unicode-oemcode-amp-oemtexT … remember the bad memory!). So I checked whether I could create a field in a new table using this datatype, but I couldn’t: it didn’t show up in the list.
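If you want to check this yourself, here is a minimal C/AL sketch (my own variable names; Fld is a record variable on the virtual Field table 2000000041, and the exact text in “Type Name” may differ per build):

Fld.SETRANGE(TableNo,2000000001); // 2000000001 = the Object table
Fld.SETRANGE(FieldName,'Version List');
IF Fld.FINDFIRST THEN
  MESSAGE('Len = %1, Type = %2',Fld.Len,Fld."Type Name");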

Good. So why do they use “OemText” and not “Text”? What is the difference? There must be one. So let’s check the table in SSMS to see how it shows up.

What is the difference between “OemText” and “Text” in SQL Server?

Well, we know that starting with NAV2013, text-fields are Unicode. That means the nvarchar-datatype in SQL.

But “OemText” does NOT use nvarchar but varchar in SQL! So the version list is NOT Unicode!

Oh, and the fields “Company Name”, “Name” and “Locked By” in the Object-table are also “OemText”.

Deleting a User-record deletes “User Property” and “Access Control” without using C/AL triggers

… and inserting a 2000000120:”User”-record will create a 2000000121:”User Property”-record.

I started with the moral of the story. Now I will tell the story.

I had to write an export/import to export/import user info. Exporting went fine but I had problems importing the data.

The “User Security ID” being the primary key instead of the “Windows Security ID” was NOT the problem! It is only annoying that you can’t use the primary key to connect the tables; it just makes it more ‘fun’.

The problem was the error I got on the 2000000121:”User Property” table: the record already existed.

First I thought I was ‘lucky’ enough to hit a non-unique GUID (a 1 in a few billion chance, so I was already thinking of playing the lotto [the chances of winning the lotto are way higher than of hitting a non-unique GUID!]).

Checking that out, I noticed it was not the case, but I did notice that records were created in 2000000121:”User Property”! And I wasn’t even using INSERT(TRUE)… And there is no code in the triggers of table 2000000120:”User” either…

That means that some hidden and unreachable code behind the table-triggers of table 2000000120:”User” keeps tables 2000000121:”User Property” and 2000000053:”Access Control” up to date.
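If you want to see it for yourself, here is a minimal sketch (my own variable names; I’m assuming the standard fields “User Security ID” and “User Name”; only run this in a test database!):

User.INIT;
User."User Security ID" := CREATEGUID;
User."User Name" := 'TESTUSER';
User.INSERT(FALSE); // no C/AL trigger...
IF UserProperty.GET(User."User Security ID") THEN
  MESSAGE('"User Property" was created anyway');
User.DELETE(FALSE); // again no C/AL trigger...
IF NOT UserProperty.GET(User."User Security ID") THEN
  MESSAGE('"User Property" was deleted anyway');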

Record parameter in a codeunit is NOT mandatory

I was busy writing some code for a blogpost (not this one) and while testing it, I ran the wrong codeunit and got an error I didn’t expect. Only then did I notice that the codeunit I had run was the wrong one.

Immediately after that it hit me that the error was really weird, because it was generated by a statement in my C/AL code that expected a value. That value should have been passed in by the record-parameter of the codeunit.

Running it directly from the development environment, I don’t pass any parameter, so I should have received this error (like I have seen so many times in my life; I generated this one on NAV2009R2 classic with a codeunit named “Do I need a record-parameter?” 🙂 ):

---------------------------
Microsoft Dynamics NAV Classic
---------------------------
The codeunit Do I need a record-parameter? must be called with a Item Ledger Entry record.
---------------------------
OK
---------------------------


So I tried the debugger just to be sure, and indeed the code was running until it hit the error caused by the missing parameter.

So back to NAV2009R2 to be sure of the error it should generate. Indeed I got the error.

Back to NAV2013R2. I ran the codeunit that expects a parameter, without giving one, and it didn’t give an error but just ran the code (just a MESSAGE(‘Ok’); in it…). But it ran!

I also tried NAV2016, and there too it ran without error.

I also tried NAV2013, and there too it ran without error.

I didn’t test NAV2015, but I suppose it will work there too.


Last test to be sure: checking my bedroom to see if I wasn’t still sleeping and dreaming. Nope. I am awake!


Conclusion:

Starting with NAV2013, a codeunit that has a parameter in the OnRun-method can also be run WITHOUT giving the parameter! The parameter-record will be empty when entering the OnRun-method.
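A minimal sketch of what I tested (hypothetical object ID; the codeunit has its TableNo property set to “Item Ledger Entry”):

// Code in the OnRun trigger of codeunit 50000 "Do I need a record-parameter?":
MESSAGE('Ok. "Entry No." = %1',Rec."Entry No."); // NAV2013+: Rec is simply empty

// Calling code somewhere else, WITHOUT passing a record:
CODEUNIT.RUN(50000);
// PRE-NAV2013: "The codeunit ... must be called with a Item Ledger Entry record."
// NAV2013+: no error; the OnRun code just runs with an empty Rec.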

Is it still useful to buffer records in a temptable?

Maybe some of you remember my how-to on increasing performance by reducing DB reads.

I wrote it in 2007, when we still had the native DB alongside the SQL DB, but NOT the service tier with all its extras for performance.

Basically it improves the reading speed of this kind of code:

SalesLine.SETRANGE(Type,SalesLine.Type::Item);
IF SalesLine.FINDSET THEN
  REPEAT
    Item.GET(SalesLine."No.");
    ...
  UNTIL SalesLine.NEXT = 0;

Meaning that for each SalesLine record you go back to the DB and ask it again for the item, even if you have already received that same item thousands of times. You should NOT harass the database like that with those puny requests. It has better things to do!

But now in 2016, with NAV2016 out in the wild (I should have tested this when NAV2013 came out, but… no comment please…), I was wondering whether my trick was still valid or not.

Considering that now all code is run on the service tier instead of on the client and the service tier has its own cache shared between all its connected clients, I thought that it might have become useless. Maybe it even hinders performance because of the extra overhead. So (finally) time to put it to the test.

So I created a codeunit to run different tests and ran it multiple times. The following is a typical output (for my machine and data [not a CRONUS DB]):

Reread worst-case:0.651 seconds

Reread random-case:0.634 seconds

Reread best-case:0.589 seconds

Construct-TT worst-case:4.857 seconds

Construct-TT random-case:3.108 seconds

Construct-TT best-case:0.317 seconds

- “Reread” means I always do an ItemREC.GET(), even if the record variable already contains the correct item record.

- “Construct-TT” means I have a function GetItem that constructs a temptable on the fly with the items it already had to get from the database, so that on the next read I can just take the record from the temptable (or, in the best case, it is already in the variable). A sketch of this function follows these descriptions.

- “Worst-case” means that the item record I last read is always different from the one I need now.

- “Best-case” means that most of the time the item record I last read is the same as the one I need now.

- “Random-case” means I do some pseudo-random access in which, in a high number of cases, the last-read item record is the same as the one I need now.
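For reference, this is roughly what the GetItem function of the Construct-TT scenarios looks like (a sketch; ItemREC is the global Item record variable and TmpItem a global Item record variable with the Temporary property set to Yes):

// GetItem(ItemNo : Code[20]) : fills ItemREC, asking the database only once per item.
IF ItemREC."No." = ItemNo THEN
  EXIT; // best case: it is already in the variable
IF TmpItem.GET(ItemNo) THEN
  ItemREC := TmpItem // already buffered in the temptable
ELSE BEGIN
  ItemREC.GET(ItemNo); // first time: read it from the database...
  TmpItem := ItemREC;
  TmpItem.INSERT(FALSE); // ...and buffer it in the temptable
END;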

So what do these numbers tell us?

In the reread scenarios, it seems the performance doesn’t change a lot. This is what I expected, because once a record has been read, a reread does not result in a query to SQL Server; the record is taken from the service tier cache (or maybe the C# code behind it sees that it is already in the variable and doesn’t even go to the cache; if that is not the case, it might be an extra optimization for Microsoft to implement). So this is good!

In the Construct-TT scenarios we see big differences. The worst-case and the random-case now perform (a lot) worse than on the classic client, and also a lot worse than the reread scenarios. The best-case is better than the reread scenarios, but only by about half.


CONCLUSION: While looping over your sales lines, just do a GET of the item for each sales line. The only improvement you might introduce is { IF ItemREC."No." <> SalesLineREC."No." THEN ItemREC.GET(SalesLineREC."No."); }. You might get somewhat better performance than with just { ItemREC.GET(SalesLineREC."No."); }. I didn’t check it, but I don’t think the difference will be big.

Debug a needle in a haystack

You probably know the problem: you have to find where some field is updated in the application and you haven’t the slightest idea where the crime is happening. This can easily happen with big add-ons that you don’t know. If things really go bad, there might be multiple places where the table (and the incriminating field) is updated, but you don’t know where, and there are also a lot of places where the table and field are merely referenced, so searching for “Record xxxx” takes a long time.

My old trick was to rename the field to something that was definitely unique in the database, like “Incriminating Field”, then export all objects as text and search for it. This could take quite long, especially if you have to filter out objects for which you don’t have read permissions: you only get the error when NAV tries to export such an object, generally after exporting all the rest.

Now starting with NAV2013, there is another option using the debugger in a smart way. No, it doesn’t mean doing step-by-step debugging until you find the offending code.

This is the way I do it:

First, enable the change log and turn on logging for insert/modify of the incriminating table in which the incriminating field is hiding, and do it ONLY FOR THAT TABLE!

Restart the client sessions to enable the logging (NAV runs the function C1.GetDatabaseTableTriggerSetup() only once per table to check whether it needs to run the other change log triggers or not; this function is executed the first time an Insert/Modify/Delete/Rename is run).

Put a breakpoint in codeunit 423:”Change Log Management”.LogModification (or LogInsertion).

Let the debugger get to the function.

Put a watch on “RecRef.Fields.YourField” and “xRecRef.Fields.YourField”. When the debugger stops there, you can easily see the old and new values of the field and decide whether it is the change you were looking for. In the call stack you can see which objects and functions have been called and where the value was changed.

You might also put some code in the function to put the record in a variable and then set a conditional break on it, but this means you have to change the code, which I prefer to avoid.
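If you do want to go that way, this is the kind of code I mean (a sketch, added temporarily at the top of LogModification; Customer and “Credit Limit (LCY)” are just an example; remove it afterwards!):

IF RecRef.NUMBER = DATABASE::Customer THEN BEGIN
  RecRef.SETTABLE(Customer); // Customer is a local Record 18 variable
  IF Customer."Credit Limit (LCY)" <> 0 THEN
    MESSAGE('break here'); // put the (conditional) breakpoint on this line
END;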

Really big temptable in PRE-NAV2013 and NAV2013+

I have to admit I had some doubts about the title. I was first thinking of “An attempt was made to move the file pointer before the beginning of the file.”, but that title would somewhat exclude the part about NAV2013+ (I explain this in the second part of this blogpost). And I was more interested in what would happen in NAV2013+, with its 64-bit service tier built from scratch.

So this blogpost has two subtopics in it: one about PRE-NAV2013 and another about NAV2013+.

  • PRE-NAV2013 (tested with NAV2009R2)

If you have a really big temptable, at a certain point you might hit this error:

---------------------------
Microsoft Dynamics NAV Classic
---------------------------
The operating system returned the error (131):
An attempt was made to move the file pointer before the beginning of the file.


Why? What does this mean?

First of all: a temptable starts in memory and when it gets a bit bigger, it is written to a file. How big does it have to get? I don’t know, but it is not important here.

PRE-NAV2013 uses a 4-byte signed integer to keep track of the size of that file. Why not a biginteger? That is an inheritance of the old FAT (https://en.wikipedia.org/wiki/File_Allocation_Table) and DOS (https://en.wikipedia.org/wiki/DOS). Basically, a file was limited to a maximum size of 2 GB. And a 4-byte integer goes from −2 147 483 648 to 2 147 483 647, so a perfect match.

But with the later FAT implementations or NTFS (https://en.wikipedia.org/wiki/NTFS), we don’t have that 2 GB file limit anymore. But it is still there in PRE-NAV2013.

So when NAV is writing to the file, at a certain point it gets over the 2 GB size and the integer used to track the size becomes negative, generating the above error (and crashing the native client, but that is not important here [well… kind of…]).

To reproduce the problem, I created a table with an integer as primary key and 4 Text250 fields. The following code, which I put in a codeunit, fills up the temptable until it hits the error. I used the same code in NAV2013+.

dia.OPEN('#1########');

t99915.t1 := PADSTR('',250,'x');
t99915.t2 := t99915.t1;
t99915.t3 := t99915.t1;
t99915.t4 := t99915.t1;

FOR i := 1 TO 5000000 DO BEGIN
  IF i MOD 1000 = 0 THEN
    dia.UPDATE(1,i);
  t99915.int := i;
  t99915.INSERT(FALSE);
END;

dia.CLOSE;


How to avoid this error? There are a few things you can try:

-If you are using a standard table, you can create your own table tailored to your needs. It is not necessary to put it in the license if you use the table only as a temptable. Put only the fields you need in it, with only the sizes you need, and limit SIFT-fields and extra keys as much as possible.

-Divide the records over multiple temptables (an array on a temptable record variable does not help, because every element points to the same temptable and not to different ones). Each temptable has its own file in the operating system, so you get multiple times 2 GB of data.
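A sketch of that second option, using the temptable-filling code from above (t99915A and t99915B are two separate temptable variables on the same table; my own names):

// Inside the FOR-loop, spread the inserts over the two temptables
// (the four text fields are filled on both variables before the loop, as above):
IF i MOD 2 = 0 THEN BEGIN
  t99915A.int := i;
  t99915A.INSERT(FALSE);
END ELSE BEGIN
  t99915B.int := i;
  t99915B.INSERT(FALSE);
END;
// When reading back, try t99915A.GET(SomeInt) first and fall back to t99915B.GET(SomeInt).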

The file-pointer error was tested on a classic client and not on the 2009R2 RTC, but I think the RTC has the same problem.

  • NAV2013+

Does NAV2013+ still have this problem? The short answer is no. And I could finish the blogpost now, but I want to point out some details on what is going on.

-The first good surprise is that it is a lot faster than the older versions. This is because it does not write the data to disk but keeps it in memory.

-The next good thing is that, as I expected, I don’t get the error anymore, because it is all in memory now.

-But because everything is in memory, the service tier is gobbling up your precious memory. So you might run into out-of-memory issues. But on the positive side I did notice that the memory structure of a NAV2013+ temptable is more memory efficient than the file structure of a PRE-NAV2013 temptable. Meaning that a 2 GB file-structure in PRE-NAV2013 does NOT take 2 GB of memory in NAV2013+. I didn’t measure it scientifically but my guess is that it takes around 600MB to 1000 MB of memory instead of 2GB.

One other interesting observation I found is this:

Running the codeunit multiple times, memory increases to a few GB, but suddenly it releases memory to get under 1 GB. Looks like garbage collection kicking in.