Tuesday, 27 November 2012

Outlook data file lock & Lync

In brief: Running Lync with administrator rights will lock the Outlook data file (the *.ost file that syncs with Exchange, not the *.pst).

In detail
On a rare occasion, I found that whenever I started up Outlook 2013, I hit the error message below and it prevented Outlook from starting up.

Outlook Data File
The file C:\Users\<UserId>\AppData\Local\Microsoft\Outlook\user.name@domain.com.ost is in use and cannot be accessed. Close any application that is using this file, and then try again. You might need to restart your computer.

It happened after I finished archiving email and closed Outlook to copy the archived PST file to a backup location. The last time I hit this error (on the *.pst file, not the *.ost file), it was because the Windows Search service was locking the *.pst file for indexing. Stopping that service resolves it; just start the service again after Outlook has started.

But this time the same method didn't work. I then recalled that Lync stores message history in Outlook, so it could be the source of the lock. I closed Lync and started Outlook again; the error didn't occur. Resolved.

Out of curiosity, I wondered why Lync would lock it that way. Then another thought came through: "Run as administrator". I run Lync with administrator rights because I need to screen-share another program that runs with administrator rights.

I did a few tests: running Lync normally won't lock the file, while running Lync as administrator will.

So if the machine has both Outlook and Lync, make sure:

  1. If Lync doesn't need administrator rights, the Outlook and Lync start-up sequence doesn't matter
  2. If Lync needs administrator rights, start Outlook before Lync

Monday, 26 November 2012

AX2009 report showing square block for double-byte language (Eg. Chinese, Korean, or Japanese characters)

When double-byte characters (Eg. Chinese, Korean, or Japanese) are used on an AX2009 report and show up as square blocks instead of the proper characters, the fix is to change the font under the user options.

1. Options > Font (tab)
2. Change the Font under the "Report" group to "Arial Unicode MS"

Friday, 16 November 2012

Misleading "The record already exists" message

The error message "The record already exists" is quite common in AX, especially during development.
Points 1 - 3 are quite common; the reason for this post is Point 4, which was new to me and which we hit just yesterday.

These are the areas/directions we usually start investigating:

  1. Unique index
    This is the most common reason: one of the unique index fields has a duplicated value.
    This can easily be checked by browsing the records with the table browser.
  2. Uncommitted record
    If the error happens during a process, it is possible that the record has been inserted once but not yet committed; if you browse the data with the table browser in another AX session, you won't see it.

    One method is to select the data from SQL with the read-uncommitted (NOLOCK) option

    This has been quite useful when debugging large processes. Eg. Inventory posting, movement, etc.
  3. Duplicated RecId
    This usually won't happen, but during development with a team of developers, you might not know whether someone has done something directly in SQL (Eg. moving records between environments, DTC from one place to another)

    After the AOS starts up, the first time it inserts a record into a table it first retrieves the value from SystemSequences.NextVal, then updates it by adding 250 to it. The AOS has thereby cached 250 RecIds for this table and uses them for subsequent inserts. When this cached block of RecIds is exhausted, it reads and updates again in the same way.

    A duplicated RecId happens when SystemSequences.NextVal is lower than the current max(RecId) of the table the record is going to be inserted into (the target table). If there are gaps between the RecIds, the error might not occur immediately, but eventually it will.

    SystemSequences.NextVal = 5678900001
    AOS cached the RecId between 5678900001 - 5678900250, then updated SystemSequences.NextVal = 5678900251
    The max(RecId) in the target table is 5678900747

    If the RecIds in the target table have no gaps, it will hit the error immediately on the first insert attempt (5678900001, 5678900002, 5678900003, and so on through the 250 allocated/cached RecIds).
    If there are gaps, then depending on the RecIds allocated by the AOS and the existing RecIds in the target table, it will eventually hit the error.

    But the error message shown in AX when this occurs is misleading.
    AX tends to report that the record already exists, showing the values from TitleField1 and TitleField2.

    To fix this, update SystemSequences.NextVal to the target table's max(RecId) + 1, then restart the AOS. Beware of multi-AOS environments: each AOS caches its own block of 250 RecIds, so you'll have to restart all AOS instances.

    *CAUTION: Only update the SystemSequences record where the field TABID = <Table Id>. Only that one record.
  4. Default value in table definition has been altered
    Unfortunately I can't replicate the error message, but I'll try to explain what we hit yesterday.

    We got a "The record already exists" error message, so the first thing was to check the three reasons above, but none of them was the cause.

    We then checked the event log and saw an error complaining that a NULL value cannot be inserted into the CREATEDDATETIME field. So the error message in AX was different from the one showing up in the event log.

    Object Server 02: The database reported (session 3 (PChan)): [Microsoft][SQL Native Client][SQL Server]Cannot insert the value NULL into column 'CREATEDDATETIME', table '<dbname>.dbo.<tablename>'; column does not allow nulls. INSERT fails.

    When the "CreatedDateTime" property of a table in AX is set to Yes, AX automatically creates the field in SQL with a default value definition. But somehow a mystery person had tampered with the table (maybe by copying the table between environments directly through SQL) and the default value definition went missing, which caused the error. In the screenshot below, on the left is the tampered table with the default definition missing; on the right is the correct version.
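The RecId caching behaviour in point 3 can be illustrated with a small, language-neutral sketch (Python here, not AX code; the numbers follow the example above):

```python
# Illustrative sketch of how an AOS reserves RecIds in blocks of 250
# and why a stale SystemSequences.NextVal causes "The record already exists".

BLOCK_SIZE = 250

def allocate_block(next_val):
    """Simulate the AOS reading NextVal and caching a block of 250 RecIds."""
    block = list(range(next_val, next_val + BLOCK_SIZE))
    return block, next_val + BLOCK_SIZE  # new NextVal after the +250 update

next_val = 5678900001
existing_max = 5678900747  # max(RecId) already in the target table

block, next_val = allocate_block(next_val)

# With no gaps in the existing RecIds, every cached RecId collides:
collisions = [rec_id for rec_id in block if rec_id <= existing_max]
print(len(collisions))  # 250 - all cached RecIds collide

# The fix described above: set NextVal past the table's max(RecId)
next_val = existing_max + 1
block, next_val = allocate_block(next_val)
collisions = [rec_id for rec_id in block if rec_id <= existing_max]
print(len(collisions))  # 0 - no more duplicates
```

The same arithmetic explains why gaps in the existing RecIds only delay the error: some cached values may be unused, but sooner or later an allocated block overlaps an existing RecId.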

Monday, 12 November 2012

Read unicode CSV file with thousand separator or comma in text value

There are a few classes available in Dynamics AX for reading text or CSV files.
When Unicode text and values containing commas are involved, choosing the correct class is important.

Below are some of the classes and their hierarchy.

Class name      Class declaration
==============  =================================
IO              class Io          extends Object
+- CommaIo      class CommaIo     extends Io
|  +- TextIo    class TextIo      extends CommaIo
|  +  Comma7Io  class Comma7Io    extends CommaIo
|  +- AsciiIo   class AsciiIo     extends CommaIo
+- CommaTextIo  class CommaTextIo extends Io

Given a CSV file with the following line:

It contains both Unicode text (Eg. 외환은행) and a value with a comma in it (Eg. 2,350).
The 2,350 needs to be read as 2350 instead of 2 and 350.
The screenshot below shows how each class reads the line.

CommaTextIo correctly reads both the Unicode text and the amount with the thousand separator

CommaIo correctly reads the amount with the thousand separator but not the Unicode text

TextIo correctly reads the Unicode text but not the amount with the thousand separator (the amount is supposed to be 2350, not 2 and 350)

AsciiIo incorrectly reads both the Unicode text and the amount with the thousand separator

Below is the sample code used to read the CSV file; the screenshots above were captured by running the job below once for each of the classes mentioned (CommaTextIo, CommaIo, TextIo, & AsciiIo).

static void TestReadCSV(Args _args)
{
    #File
    CommaTextIo        commaTextIo;
    container          lineCon;

    commaTextIo = new CommaTextIo(@'C:\UnicodeSample.csv', #io_read);

    lineCon = commaTextIo.read();
    while (lineCon && (commaTextIo.status() == IO_Status::Ok))
    {
        info(conPeek(lineCon, 4));
        lineCon = commaTextIo.read();
    }
}
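The requirement itself is not AX-specific: a quoted value containing a thousand separator must come back as one field. As a sketch outside AX, Python's csv module shows the expected behaviour (the line layout here is an assumption modelled on the sample above):

```python
# Sketch: a quoted "2,350" must be read as ONE field, not split at the comma.
# The exact file layout is hypothetical; only the 4th column mirrors the sample.
import csv
import io

line = '외환은행,ACC-001,2012-11-12,"2,350"\n'  # hypothetical line layout

row = next(csv.reader(io.StringIO(line)))
print(row)  # ['외환은행', 'ACC-001', '2012-11-12', '2,350'] - 4 fields
```

This is the behaviour CommaTextIo delivers in AX; CommaIo/TextIo/AsciiIo fail on one or both of the Unicode text and the quoted comma, as the screenshots show.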

Monday, 5 November 2012

Invalid use of the DataAreaId field in a cross-company query

A search online didn't turn up much info about this error - "Invalid use of the DataAreaId field in a cross-company query" - so it might be worth sharing what was found and the workaround.

During development today I hit this error. I'm not sure why the use of DataAreaId in the query criteria isn't allowed on a crossCompany query, but there's a quick workaround: join to the DataArea table.

The query above had a compile error at 'tableA.dataAreaId' with the error message "Invalid use of the DataAreaId field in a cross-company query". To work around this error, join to the DataArea table.

Instead of explaining in plain text, it is easier to understand by seeing the code and results.
Below are the sample data and the queries to elaborate further.

TableA sample data
DataAreaId Field1     Field2     Field3     Field4
---------- ---------- ---------- ---------- ----------
ceu        CEU_R1_F1  CEU_R1_F2  CEU_R1_F3  CEU_R1_F4
ceu        CEU_R2_F1  CEU_R2_F2  CEU_R2_F3  CEU_R2_F4
con        CON_R1_F1  CON_R1_F2  CON_R1_F3  CON_R1_F4
con        CON_R2_F1  CON_R2_F2  CON_R2_F3  CON_R2_F4
dat        DAT_R1_F1  DAT_R1_F2  DAT_R1_F3  DAT_R1_F4
dat        DAT_R2_F1  DAT_R2_F2  DAT_R2_F3  DAT_R2_F4

TableB sample data
DataAreaId Field1     Field2     Field3     Field4
---------- ---------- ---------- ---------- ----------
ceu        F1         F2         F3         F4
con        F1         F2         F3         F4
dat        F1         F2         F3         F4

Work around by joining to DataArea table
while select crossCompany tableA
        join dataArea
       where dataArea.id       == tableA.dataAreaId
        join tableB
       where tableB.dataAreaId == dataArea.id
{
    info(strFmt("%1 - %2 - %3", tableA.dataAreaId, tableA.Field1, tableB.dataAreaId));
}

ceu - CEU_R1_F1 - ceu
ceu - CEU_R2_F1 - ceu
con - CON_R1_F1 - con
con - CON_R2_F1 - con
dat - DAT_R1_F1 - dat
dat - DAT_R2_F1 - dat

Incorrect join
Although TableB has one record per company, joining with the crossCompany keyword will join without DataAreaId and produce an incorrect query result.

while select crossCompany tableA
        join tableB
{
    info(strFmt("%1 - %2 - %3", tableA.dataAreaId, tableA.Field1, tableB.dataAreaId));
}

ceu - CEU_R1_F1 - ceu
ceu - CEU_R2_F1 - ceu
con - CON_R1_F1 - ceu
con - CON_R2_F1 - ceu
dat - DAT_R1_F1 - ceu
dat - DAT_R2_F1 - ceu
ceu - CEU_R1_F1 - con
ceu - CEU_R2_F1 - con
con - CON_R1_F1 - con
con - CON_R2_F1 - con
dat - DAT_R1_F1 - con
dat - DAT_R2_F1 - con
ceu - CEU_R1_F1 - dat
ceu - CEU_R2_F1 - dat
con - CON_R1_F1 - dat
con - CON_R2_F1 - dat
dat - DAT_R1_F1 - dat
dat - DAT_R2_F1 - dat
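The difference between the two result sets above comes down to join cardinality, which can be sketched in a few lines (Python, not X++; field values simplified to match the sample data):

```python
# Sketch of why the unfiltered crossCompany join returns 18 rows instead of 6:
# without a DataAreaId condition, the join degenerates into a cartesian product.
table_a = [(c, "%s_R%d_F1" % (c.upper(), r))
           for c in ("ceu", "con", "dat") for r in (1, 2)]   # 6 rows
table_b = [("ceu",), ("con",), ("dat",)]                      # 1 row per company

# Incorrect: no company condition -> every TableA row pairs with every TableB row
incorrect = [(a_area, f1, b_area)
             for (a_area, f1) in table_a
             for (b_area,) in table_b]
print(len(incorrect))  # 18

# Correct: join through the company id (what the DataArea join achieves)
correct = [(a_area, f1, b_area)
           for (a_area, f1) in table_a
           for (b_area,) in table_b
           if a_area == b_area]
print(len(correct))  # 6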

Monday, 29 October 2012

Dynamics AX 4 report - Unable to load page

One very old question popped up again recently - the "Unable to load page" error on AX4 reports.
As far as I know, it needs a hotfix on the Dynamics AX client (Ax32.exe).

I did a quick check on a local instance with different client builds (4.0.2501.116 and 4.0.2503.998); one has the error and the other doesn't.

  • App: 4.0.2501.116 with Client: 4.0.2501.116 => "Unable to load page" error on report
  • App: 4.0.2501.116 with Client: 4.0.2503.998 => No error on report

Wednesday, 24 October 2012

Import with ID values will retain ID from other layer

More details on layer & ID

Any object created in VAR layer usually has an ID range between 30001 - 40000.
Eg. 30001, 30002, 30003 ... etc.

This is true if the object is CREATED from that layer, but not if you create it by IMPORT with ID.

Let's say you created a field in the USR layer, exported with ID, deleted it from the USR layer, then imported with ID into the VAR layer. When the new field is imported into the VAR layer with ID, it doesn't create a new ID; it does exactly what the option says - it imports the ID. So you'll get a USR ID in the VAR layer.

Eg (screenshots below):

  1. A new field 'ReadyToPost' is created in USR layer at CustInvoiceTable
  2. Export the table, then delete it from the USR layer
  3. Login to VAR layer and import with ID
  4. You'll then see the new field 'ReadyToPost' created in the VAR layer, but with a USR layer ID.

Option - Import with ID
USR layer ID retained - 50002

Dynamics AX services login & SQL access/security risk

The Dynamics AX AOS service login requires only a few permissions, and this prevents developers from doing funny things around the database. But when the DBA doesn't give it a thorough thought, it can cause a security issue.

Say the DBA is asked to look at an AOS start-up issue; rather than checking and fixing it properly, the sysadmin role is granted to the AOS service login as a quick fix. Under urgency or pressure this might seem reasonable, but it creates a security risk. Although the role is granted only to the AOS service login, any developer with AOT access can now issue a direct SQL statement to grant themselves the sysadmin role.

Running the job below would grant the sysadmin role.

static void UpdateSQL(Args _args)
{
    str                             sql;
    Connection                      connection = new Connection();
    SqlStatementExecutePermission   sqlPerm;
    Statement                       statement;
    int                             rowsAffected;

    sql     = "EXEC master..sp_addsrvrolemember @loginame = N'Domain\\Username', @rolename = N'sysadmin'";
    sqlPerm = new SqlStatementExecutePermission(sql);
    sqlPerm.assert();

    statement    = connection.createStatement();
    rowsAffected = statement.executeUpdate(sql);

    CodeAccessPermission::revertAssert();
}


Friday, 12 October 2012

Clone BitLocker partition on Windows 7 with Macrium Reflect

Received my SSD this afternoon and cloned the HDD partitions over. It took a few hours to complete the clone, but it was well worth it; my old laptop feels like a new one now.

Before purchasing the SSD, I had seen quite a number of posts about concerns and issues with cloning a BitLocker partition on Windows 7. It seems a lot of people hit errors or were unable to boot up after the cloning. I'm not getting into details about that, but just want to share my experience replacing my HDD with an SSD.

I bought the Samsung SSD 830 256GB (SATA III) from Amazon.co.uk; free shipping, and it arrived in just a few days. The package includes a SATA-to-USB adaptor, which is quite useful to me (I don't have a caddy for a secondary HDD in my laptop). It also comes with a full version of Norton Ghost 15, but I didn't use it. For simplicity, I used Macrium Reflect. Below are the steps taken.

*I'm using Windows 7 with BitLocker enabled for OS partition
  1. Boot up OS as usual, install Macrium Reflect
  2. Plug in the SSD to USB port (with the supplied SATA to USB cable)
  3. Run the Macrium Reflect software and clone the partitions from HDD to SSD
    Cloning all 3 partitions at one go:
    > the reserved partition (~180MB)
    > the recovery partition (~2GB)
    > the OS partition (the remaining of my drive size)
    *Just Google for the instructions or video in YouTube
  4. Let it run for a few hours
  5. After the cloning is completed, shut down the laptop
  6. Take out the battery, swap the HDD with the new SSD, put the battery back in
  7. Boot up the OS as usual
  8. Turn on BitLocker again
No issues during the whole process; it all went well.

Tuesday, 9 October 2012

Trick to add included column in AX2009

Ref. and more details at: http://daxdilip.blogspot.com/2011/05/tip-how-to-avoid-overriding-of-sql.html

In brief, included columns are not supported in AX2009 but are now supported in AX2012.
To work around that in AX2009, write the included-column SQL statement and fire it at the end of the AX synchronization (\Classes\Application\dbSynchronize).

You'll need to put some thought into the pros and cons before implementing it; a proper control procedure should be put in place because other developers on your project team might not be aware of this extra SQL script.

Pro:
- Included column available in AX2009

Cons:
- Maintenance of the index
- Overhead during synchronization (the index will first be dropped, then recreated)

Notes:
- A new index created outside of AX (directly on the table in SQL) will be dropped during synchronization
- An included column added directly in SQL to an existing AX table's index will be dropped only if you make changes to that index in the AX AOT node

Alternatively, Advanced Storage Management seems to offer this feature. I've never tried it, but thought I'd mention it here for anyone looking for options.
URL: http://www.aximprove.co.uk/product_AdvancedStorageManagement.html

Monday, 1 October 2012

try...catch... block not executed in transaction pair (ttsbegin/ttscommit)

A common habit when starting development on a class that extends RunBaseBatch is to duplicate the Tutorial_RunBaseBatch class. The run method has a ttsbegin & ttscommit pair, and developers usually put their code within that section; this normally works fine. But when the class calls a lot of other classes to do some job, and those classes have well-developed error handling, this outer ttsbegin & ttscommit can break it.

One case I saw recently involves SalesFormLetter.
The SalesFormLetter class has its own error handling: if it hits an error for the current record (Eg. a sales order), it handles the error and moves on to the next record. But this error handling was broken by another batch job development (an outer pair of ttsbegin & ttscommit).

Below are the details:

  • The developer duplicates the Tutorial_RunBaseBatch class
  • It is modified for a batch job by putting the code in the .update() method, which is wrapped in a pair of ttsbegin & ttscommit
  • This development performs some business logic and, at the end of the process, calls the SalesFormLetter class to post picking lists (Eg. picking lists for many different sales orders)
  • A few orders are processed and their picking lists printed
  • After several orders are posted, it reaches an order that causes an error
  • The error jumps out to the innermost try...catch block that is outside the ttsbegin & ttscommit; the try...catch block inside SalesFormLetter doesn't catch it, hence it doesn't move on to the next order and posting stops
  • All previously posted picking lists are rolled back
  • The warehouse guy tries to pick the items, but the picking lists are not found in the system (due to the rollback)

This issue happens because any error that occurs within a ttsbegin & ttscommit pair will not be caught by a try...catch block inside the transaction pair; it jumps out to the innermost try...catch block outside the transaction pair.

Refer to this link for more details: http://msdn.microsoft.com/en-us/library/aa893385(v=ax.50).aspx

The example below shows that an error occurring in SalesFormLetter will not be caught by its own try...catch block; instead, it jumps out to the try...catch block in the .run() method.

//Code modified to make this easier to read, but should
//roughly explain what this article is trying to achieve

public void run()
{
    try
    {
        ttsbegin;
        this.update();
        ttscommit;
    }
    catch
    {
        //Errors raised anywhere inside the transaction pair land here
        error("Error caught in run()");
    }
}

public void update()
{
    SalesFormLetter salesFormLetter;
    //Eg. Some business logic for batch job
    //Eg. Post a few picking lists at the end; errors here jump straight
    //out to run()'s try...catch, bypassing SalesFormLetter's own handling
    salesFormLetter = SalesFormLetter_PickingList::construct(true);
    salesFormLetter.updatePrinterSettingsFormLetter(..., ...);
}

When developing error handling, it is important to consider the other classes that will be called, investigate whether your transaction scope will break their error handling, and if so develop a mechanism to handle it properly.

Sunday, 30 September 2012

Query Active Directory without administration pack

If the server you are logged on to doesn't have the administration tool/snap-in for Active Directory, an alternative is the standard query window. Of course, it won't be as convenient as the administration tool itself, and the interface is quite limited.

Run this command: rundll32 dsquery,OpenQueryWindow

You can then try searching the AD to find the info you need.

Say you want to find all users belonging to a particular group but you're unsure of the group name; you only roughly remember it. Here's what you can do:

  1. List out the group name in the domain
    (Run this at the command prompt: 
  2. Find the group name you're looking for
  3. Use the query window above to find that group
  4. Double-click on the result and it will show you the list of members in the group

Wednesday, 22 August 2012

Dynamics AX "Find" with regular expression

Quite often the "Find" feature in Dynamics AX is used to find plain text.
Eg. enter 'salesLine' in the "Containing text" field and click "Find now".

But it can do more than that: it can also match with regular expressions (though the search is slower).

Eg. Find, in the 'SalesFormLetter' class 'run' method, any method call containing 'salesFormLetter' at the front and 'Thread' at the end.
?* indicates any characters occurring zero or more times

Eg. Find in the 'SalesFormLetter' class for the word '::After'

Eg. Find in the 'SalesFormLetter' class for the word 'proforma' followed by a space

Eg. Find in the 'SalesFormLetter' class for the word 'proforma' preceded by a space

Regular expression syntax in Dynamics AX: http://msdn.microsoft.com/en-us/library/aa886279(v=ax.50).aspx

Wednesday, 18 July 2012

Dynamics AX buf2Buf and buf2BufByName

One useful method in standard AX is buf2Buf(); it behaves like the table's .data() method with one difference: buf2Buf() doesn't copy system fields. Another reason for using buf2Buf() is copying records from one company to another. When using changeCompany(), the .data() method copies all fields including DataAreaId, and on insert() the DataAreaId field does not change, so the record is not inserted into the company indicated by changeCompany(). This is where buf2Buf() comes in handy: it copies all fields except system fields, and then during insert() the system assigns values to the system fields.

Sometimes functionality requires copying data from one table to another table with a similar structure (Eg. to a history or logging table); in this case, .data() and buf2Buf() cannot be used. But we can modify the buf2Buf() method to copy based on field name instead of field ID.

Below are the two methods:
> buf2Buf() - Standard AX method
> buf2BufByName() - modified method

//Standard AX method, copy data based on field Id

static void buf2Buf(Common _from, Common _to)
{
    DictTable   dictTable = new DictTable(_from.TableId);
    FieldId     fieldId   = dictTable.fieldNext(0);

    while (fieldId && ! isSysId(fieldId))
    {
        _to.(fieldId) = _from.(fieldId);
        fieldId       = dictTable.fieldNext(fieldId);
    }
}

//Modified method, copy data from one table to another table with similar structure

static void buf2BufByName(Common _from, Common _to)
{
    DictTable   dictTableFrom   = new DictTable(_from.TableId);
    DictTable   dictTableTo     = new DictTable(_to.TableId);
    DictField   dictFieldFrom;
    FieldId     fieldIdFrom     = dictTableFrom.fieldNext(0);
    FieldId     fieldIdTo;

    while (fieldIdFrom && ! isSysId(fieldIdFrom))
    {
        dictFieldFrom = new DictField(_from.TableId, fieldIdFrom);

        //Look up the field in the target table by name
        fieldIdTo = dictTableTo.fieldName2Id(dictFieldFrom.name());

        if (fieldIdTo)
        {
            _to.(fieldIdTo) = _from.(fieldIdFrom);
        }

        fieldIdFrom = dictTableFrom.fieldNext(fieldIdFrom);
    }
}

Friday, 13 July 2012


If you're looking for a way to view all the embedded image resources in Dynamics AX form, it is available at \Forms\SysImageResources.

Tuesday, 26 June 2012

Method to stop AOS when it stuck at "stopping" at Services

Sometimes when you stop the AOS, it gets stuck in the "Stopping" state (seemingly forever); you can get it stopped by killing Ax32Serv.exe.
This way, you don't need to restart the whole server.

1. Open Event Viewer
2. Filter with event Id “149”
3. Find the PID of the AOS you want to stop

4. Open Task Manager
5. Select the Ax32Serv.exe which has the same PID number as you found in Step #3

*If the “PID” column is not shown, just click on View > Select columns > check the PID > Click “OK”

6. End task for this program
7. Refresh the “Services” and you’ll see it stopped now

Sunday, 17 June 2012

Dynamics AX regular expression - match()

Regular expressions are a powerful tool for matching a text pattern, but they are not often seen in AX.

Today I saw a developer code a method to validate a text string: return true if the text contains digits only (0-9), return false if any character in the text is a non-digit. The method was ~6 lines long, looping through each character.

There's an easier way to achieve the same result - use a regular expression.
The method can be simplified to one line.

This change saved 0.31ms (from the trace captured). Although the number looks small, when this is executed millions of times it adds up.

In the sample file I received, around 20 million lines are imported; given the 0.31ms saving, that reduces the execution time by:

> Time saved in ms x 20m lines = 6,200,000ms

> Time saved in seconds = 6,200 seconds
> Time saved in minutes = 103 minutes
> Time saved in hours = 1.72 hours

Using regular expression ===============================
//Return TRUE if '_text' contains digits only, otherwise return FALSE
boolean validateTextMatch(str _text)
{
    return !match("[^0-9]", _text);
}

Using strFind() & subStr() in loop =========================
boolean validateTextLoop(str _text)
{
    int     counter, textLen = strLen(_text);
    Phone   validNumbers = '0123456789';

    for (counter = 1; counter <= textLen; counter++)
    {
        if (!strFind(subStr(_text, counter, 1), validNumbers, 1, 10))
        {
            return false;
        }
    }
    return true;
}
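The digit-only check translates directly to most regex libraries. For comparison, here is the equivalent pair in Python (a sketch, not the X++ code above):

```python
# Sketch: one-line regex check vs. character-by-character loop,
# mirroring the two X++ methods above.
import re

def validate_text_match(text):
    # True if text contains digits only: no match for any non-digit character
    return re.search(r"[^0-9]", text) is None

def validate_text_loop(text):
    valid_numbers = "0123456789"
    for ch in text:
        if ch not in valid_numbers:
            return False
    return True

for sample in ("0123456789", "12a34", ""):
    print(validate_text_match(sample), validate_text_loop(sample))
```

Note that both versions treat an empty string as valid; if that is not desired, a pattern like `^[0-9]+$` with a full match would be the stricter variant.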

Screenshot showing the trace time of both methods

Dynamics AX - skipDataMethods & set based operation

Sometimes when importing large number of records (Eg. millions of lines), the quickest way is to use:
  • insert_recordset
  • update_recordset
  • delete_from

But these set-based operations will fall back to row-by-row operation when any of the following conditions is true:
  • it is not an SQL table (Eg. a temporary table)
  • database log is enabled for the table
  • an alert is set up for the table
  • record level security is enabled for the table
  • an aosValidate method is overridden
  • when using insert_recordset
    • the .insert() method is overridden
  • when using update_recordset
    • the .update() method is overridden
  • when using delete_from
    • the .delete() method is overridden
    • a DeleteAction is defined

To prevent the fallback to row-by-row operation, the following methods can be used:
  • Delete actions are defined: use skipDeleteActions()
  • Database log is set up: use skipDatabaseLog()
  • An alert is set up: use skipEvents()
  • A data method is overridden (.insert(), .update(), .delete()): use skipDataMethods()

Non-SQL tables do not support set-based operations.

NOTE: Only use these if it is logically correct to skip the row-by-row triggered operations (Eg. database log, alerts, RLS, overridden methods, etc.)

Ref: http://msdn.microsoft.com/EN-US/library/aa849875

Tuesday, 29 May 2012

Execute SQL statement directly from Dynamics AX

This is not a recommended approach for running queries against AX, but sometimes there's a need for it (Eg. stored procedures). Below is sample code to run SQL statements directly against the AX database (you'll need to do manual security checks, because this approach skips AX security validation).

I usually have a class with 2 simple methods:
  • executeUpdate
    Used to execute INSERT, UPDATE, and DELETE statements
  • executeQuery
    Used to execute SELECT statements

//Class :: SQLUtility ==========================================
class SQLUtility
{
}

//Method :: executeUpdate ======================================
server static int executeUpdate(str sql, Connection connection = new Connection())
{
    SqlStatementExecutePermission   sqlPerm;
    Statement                       statement;
    int                             rowsAffected;

    sqlPerm = new SqlStatementExecutePermission(sql);
    sqlPerm.assert();

    statement    = connection.createStatement();
    rowsAffected = statement.executeUpdate(sql);

    CodeAccessPermission::revertAssert();

    return rowsAffected;
}

//Method :: executeQuery =======================================
server static ResultSet executeQuery(str sql, Connection connection = new Connection())
{
    SqlStatementExecutePermission   sqlPerm;
    Statement                       statement;
    ResultSet                       resultSet;

    sqlPerm = new SqlStatementExecutePermission(sql);
    sqlPerm.assert();

    statement = connection.createStatement();
    resultSet = statement.executeQuery(sql);

    CodeAccessPermission::revertAssert();

    return resultSet;
}

//Job :: Job_DirectSQLTest =====================================
static void Job_DirectSQLTest(Args _args)
{
    ResultSet   testRS;

    testRS = SQLUtility::executeQuery("select top 10 ItemId from InventTable");

    while (testRS.next())
    {
        info(testRS.getString(1));
    }
}

Result of running the job "Job_DirectSQLTest"

Wednesday, 16 May 2012

Joining permanent table with temp table

Temporary tables in AX4 and AX2009 are file-based and only live during the execution session (AX2012 has an additional option to make them SQL temp tables). There's one key difference between a query joining only permanent tables and a query joining a mix of permanent and temp tables: the actual SQL statement sent to SQL Server, and its performance.

Writing code with the assumption that they work the same could lead to badly performing code; understanding how it works helps avoid performance issues. These issues usually don't show up during development due to the small dataset, but after the system goes live and table data grows over time, they may start to appear.

Given there are 3 tables:
> TableA: Contains 6 fields, where Field1 is the primary key
> TableB: Contains 3 fields, where Field1 is the primary key and a foreign key to TableA
> TableC: Contains 1 field, where Field1 is the primary key and a foreign key to TableA

The code below shows the same query twice: the first joins all 3 tables as permanent tables; the second joins TableA & TableB as permanent tables and TableC as a temp table.

server static void testQuery()
{
    PerfTableA      perfTableA;
    PerfTableB      perfTableB;
    PerfTableC      perfTableC, perfTableCTemp;

    //All permanent table ======================================
    select perfTableA
      join perfTableB
     where perfTableB.Field1 == perfTableA.Field1
      join perfTableC
     where perfTableC.Field1 == perfTableA.Field1;

    //Mix of permanent & temp table ============================
    perfTableCTemp.setTmp();    //Make this buffer temporary (assumed; not shown in the original)

    insert_recordset perfTableCTemp(Field1)
    select Field1 from perfTableC;

    select perfTableA
      join perfTableB
     where perfTableB.Field1     == perfTableA.Field1
      join perfTableCTemp
     where perfTableCTemp.Field1 == perfTableA.Field1;

    info("Test completed");
}

The first query performs well because SQL Server first filters TableC, reducing the result set that joins to TableA and TableB (refer to Figure 5: Execution plan).

Actual SQL statement sent to SQL Server (excerpt)

   AND ((B.DATAAREAID='nl01') 
   AND  (B.FIELD1=A.FIELD1) ) 
   AND ((C.DATAAREAID='nl01') 

The second query performs badly because TableC is a temp table: it is not part of the query sent to SQL Server. SQL Server has to select from TableA & TableB and return a large dataset to AX, and AX then further joins with the temp table (TableC) to achieve the same result (refer to Figure 5: Execution plan).

Actual SQL statement sent to SQL Server (excerpt)

   AND ((B.DATAAREAID='nl01') 

Trace parser showing the actual query sent to SQL server (first query)

Trace parser showing the actual query sent to SQL server (second query)

The cost of first query (0.056144)

The cost of second query (4.48013)

Execution plan for both first and second query.
Notice the arrow thickness in the execution plan on the right (2nd query); the thickness indicates the amount of data returned - the thicker the arrow, the more rows in the dataset.
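The cost difference is essentially about where the filtering happens and how many rows travel to the client. A rough sketch (Python; row counts are hypothetical, loosely modelled on the test data below):

```python
# Sketch: server-side join vs. client-side temp-table join.
# When TableC is part of the server-side query, only matching rows leave the
# server; when TableC is a client-side temp table, the whole TableA-join-TableB
# result must be transferred first. All counts here are made up for illustration.
N = 100_000                                           # rows in TableA (1:1 with TableB)
table_c = set(str(i) for i in range(0, N, N // 10))   # ~10 keys, like the test data

# Strategy 1: server-side join - only rows matching TableC leave the server
rows_transferred_server_side = sum(1 for i in range(N) if str(i) in table_c)

# Strategy 2: temp table - the full A-join-B result comes back, then AX filters
rows_transferred_temp = N
rows_after_client_filter = sum(1 for i in range(N) if str(i) in table_c)

print(rows_transferred_server_side, rows_transferred_temp, rows_after_client_filter)
```

Both strategies end with the same final rows; the difference is that the temp-table route transfers the entire intermediate result, which is what the thick arrows in the second execution plan represent.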

Code used to populate test data

server static void populateData()
{
    #define.upperLimit(1000000)     //Number of records to populate (value assumed, not in the original post)
    #AviFiles

    PerfTableA              perfTableA;
    PerfTableB              perfTableB;
    PerfTableC              perfTableC;
    RecordInsertList        recordInsertListA, recordInsertListB, recordInsertListC;

    RandomGenerate          randomGen = new RandomGenerate();
    int                     counter;
    int                     modInt = real2int(#upperLimit/10);
    SysOperationProgress    progress = SysOperationProgress::newGeneral(#aviUpdate, 'Populate data', #upperLimit);

    delete_from perfTableA;
    delete_from perfTableB;
    delete_from perfTableC;

    recordInsertListA = new RecordInsertList(tableNum(PerfTableA));
    recordInsertListB = new RecordInsertList(tableNum(PerfTableB));
    recordInsertListC = new RecordInsertList(tableNum(PerfTableC));

    for (counter = 1; counter <= #upperLimit; counter++)
    {
        //TableA ==========================================
        perfTableA.Field1 = int2str(counter);
        perfTableA.Field2 = int2str(randomGen.randomInt());
        perfTableA.Field3 = int2str(randomGen.randomInt());
        perfTableA.Field4 = int2str(randomGen.randomInt());
        perfTableA.Field5 = int2str(randomGen.randomInt());
        perfTableA.Field6 = int2str(randomGen.randomInt());
        recordInsertListA.add(perfTableA);

        //TableB ==========================================
        perfTableB.Field1 = int2str(counter);
        perfTableB.Field2 = int2str(randomGen.randomInt());
        perfTableB.Field3 = int2str(randomGen.randomInt());
        recordInsertListB.add(perfTableB);

        //TableC ==========================================
        if ((counter mod modInt) == 0)
        {
            perfTableC.Field1 = perfTableA.Field1;
            recordInsertListC.add(perfTableC);
        }

        progress.setText(strFmt("Populating: %1 of %2", counter, #upperLimit));
        progress.incCount();
    }

    recordInsertListA.insertDatabase();
    recordInsertListB.insertDatabase();
    recordInsertListC.insertDatabase();
}

#Added 19/06/2012#
Post related to Trace Parser: