I'm Joris "Interface" de Gruyter. Welcome To My

Code Crib

All Blog Posts - page 3

Page: 3 of 16

Mar 25, 2019 - Repost: Pointing Build Definitions to Specific VMs (agents)

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the agent computer name post here AS-IS, until we can get better official documentation. Original post: October 20, 2017


We’ve recently collaborated with some customers who are upgrading from previous releases of Dynamics 365 to the recent July 2017 application. These customers typically have to support their existing live environment on the older application, but also produce builds on the newer application (with newer platform).

Currently the build agent is not aware of the application version available on the VM. As a result, Visual Studio Team Services (VSTS) will seemingly randomly pick one or the other VM (agent) to run the build on. Obviously this presents a challenge if VSTS compiles your code on the wrong VM - and thus against the wrong version of the application and platform. We are reviewing what would be the best way to support version selection, but in the meantime there is an easy way to tie a build definition to a specific VM.

First, in LCS go to your build environment and on the environment details page, find the VM Name of the build machine. In this particular example below, the VM Name is “DevStuffBld-1”.

Next, go to VSTS and find the build definition you wish to change. Note that if you have more than one version you’re building for, you will want more than one build definition - and point each to its respective VM. To tie a build definition to a specific VM, edit the build definition and find the Options tab. Under Options you will find a section of parameters called Demands. Demands are matched against agent capabilities: these are either specific values set up on the agent in VSTS (you can do this in the Agent Queues settings) or environment variables from the VM the agent runs on, which the agent picks up automatically. You will notice that all build definitions already check for a variable called DynamicsSDK to be present, to ensure the build definition runs only on agents where we have set this “flag”, if you will. Since each VM already has an environment variable called COMPUTERNAME, we can add a demand for COMPUTERNAME to equal the name of our build VM. So for the example of the build VM from above, we can edit our build definition to add the following demand by clicking +Add:
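
The original post showed a screenshot at this point; based on the example VM name above, the demands list would look something like this (the DynamicsSDK demand already exists, the COMPUTERNAME demand is the one you add):

DynamicsSDK     exists
COMPUTERNAME    equals    DevStuffBld-1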

Save your build definition and from now on your build will always run on the right VM/agent.


Feb 19, 2019 - Repost: Enabling X++ Code Coverage in Visual Studio and Automated Build

Filed under: #daxmusings #bizapps

Since the AXDEVALM blog has been removed from MSDN, I will repost the code coverage blog post here AS-IS (other than wrong capitalization in the XML code), until we can get better official documentation. Note that after this was published, I received a mixed response from developers. For many it worked, for others this did not work at all no matter what they tried… I have not been able to spend more time on investigating why for some people this doesn’t work. Original post: March 28, 2018


To enable code coverage for X++ code in your test automation, a few things have to be set up. Typically, more tweaking is needed since you will likely be using some platform/foundation/appsuite objects and code, and don’t want code coverage to show up for those. Additionally, the X++ compiler generates some extra IL to support certain features, which can be ignored. Unfortunately, there is one feature that may throw off your results; we’ll talk about this further down.

One important note: Code Coverage is a feature of Visual Studio Enterprise and is not available in lower SKUs. See this comparison chart under Testing Tools | Code Coverage.

To get started, you can download the sample RunSettings file here: CodeCoverage. You will need to update this file to include your own packages (=“modules” in IL terminology). At the top of the file, you will find the following XML:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackageName.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackageNameTest.*</ModulePath>
    </Exclude>
</ModulePaths>

You will need to replace the “MyPackageName” with the name of your package. You can add multiple lines here and use wildcards, of course. You could add Dynamics.AX.* but that would then include any and all packages under test (including Application Suite, for example). This example also shows how to exclude a package explicitly, for example in this case the test package itself. If you have multiple packages to exclude and include, you would enter it this way:

<ModulePaths>
    <Include>
        <ModulePath>.*MyPackage1.*</ModulePath>
        <ModulePath>.*MyPackage2.*</ModulePath>
    </Include>
    <Exclude>
        <ModulePath>.*MyPackage1Test.*</ModulePath>
        <ModulePath>.*MyPackage2Test.*</ModulePath>
    </Exclude>
</ModulePaths>
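
For orientation - this reflects the general layout of a Visual Studio code coverage runsettings file, not necessarily the exact sample file above - the ModulePaths element sits inside the code coverage data collector’s configuration:

<RunSettings>
    <DataCollectionRunSettings>
        <DataCollectors>
            <DataCollector friendlyName="Code Coverage" uri="datacollector://Microsoft/CodeCoverage/2.0">
                <Configuration>
                    <CodeCoverage>
                        <ModulePaths>
                            <!-- Include/Exclude entries as shown above -->
                        </ModulePaths>
                    </CodeCoverage>
                </Configuration>
            </DataCollector>
        </DataCollectors>
    </DataCollectionRunSettings>
</RunSettings>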

To enable code coverage in Visual Studio, open the Test menu, select Test Settings, and then Select Test Settings File; select your settings file. You can then run code coverage from the Test > Analyze Code Coverage menu by selecting All Tests or Selected Tests (the latter uses your selection in the Test Explorer window). You can open the code coverage results and double-click any of the lines, which will open the code and highlight the coverage.

To enable code coverage in the automated build, edit your build definition. Click on the Execute Tests task, and find the Run Settings File parameter. If you have a generic run settings file, you can place it in the C:\DynamicsSDK folder on the build VM, and point to it here (full path). Optionally, if you have a settings file specific for certain packages or build definitions, you can be more flexible here. For example, if the run settings file is in source control in the Metadata folder, you can point this argument to “$(Build.SourcesDirectory)\Metadata\MySettings.runsettings”.

The biggest issue with this is the extra IL code that our compiler generates, namely the pre- and post-handler code. This is placed inside every method, and is thus evaluated by code coverage even though your X++ source doesn’t contain that code. As such, most methods will never get 100% coverage. If a method has the [Hookable(false)] attribute (which makes the X++ compiler not add the extra IL code), or if the method actually has pre/post handlers, the coverage will be fine. Note that the Chain-of-Command logic the compiler generates is nicely filtered out.
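
To illustrate (my example, not from the original post), a method marked as non-hookable compiles to plain IL, so its coverage reflects only the X++ source you wrote:

// Minimal sketch: [Hookable(false)] suppresses the generated pre/post-handler
// IL for this method, so code coverage can actually reach 100% here.
// Trade-off: pre/post event handlers can no longer subscribe to this method.
[Hookable(false)]
public real discountedAmount(real _amount)
{
    // hypothetical business logic, for illustration only
    return _amount * 0.9;
}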


Jan 18, 2019 - Azure DevOps Release Pipeline

Filed under: #daxmusings #bizapps

Welcome to 2019, the year of the X++ developer!

Today marks a great day with the release of the first Azure DevOps task for D365 FinOps users. Since documentation is still underway, I wanted to supplement the official blog post with some additional info to help guide you through the setup. The extension can be installed from here: https://marketplace.visualstudio.com/items?itemName=Dyn365FinOps.dynamics365-finops-tools

The LCS Connection

  • If your LCS project is hosted in the EU, you will need to change the “Lifecycle Services API Endpoint”. By default it points to https://lcsapi.lcs.dynamics.com, but if you log into LCS and the URL for your project shows “https://eu.lcs.dynamics.com”, you will need to change this API URL to also include EU, like so: https://lcsapi.eu.lcs.dynamics.com
  • App registration: I encourage you to use the preview setup experience (“App registrations (Preview)”). Add a “new registration” for a native application; I selected “accounts in this organizational directory only (MYAAD)”. For the redirect URI you can put anything for a native application, typically http://localhost, and in the preview experience select “Public client (mobile & desktop)” to indicate this is a native application.

Thanks to Marco Scotoni for pointing out that to find the API to give permissions to, you just go to the “APIs my organization uses” tab.

The Task

  • Create the new connection using the app registration as described above
  • LCS Project Id is the “number” of your project. You can see this in the URL when you go to your project on the LCS website, for example https://lcs.dynamics.com/V2/ProjectDashboard/1234567. I’m hoping this can eventually be made into a dropdown selection.
  • File to upload… The build currently produces a ZIP file with a name that contains the actual build number, and that is not configurable there (you’d have to edit the PowerShell for that). Until that changes, there’s an easy way to deal with it. Since your release pipeline has the build pipeline’s output as an artifact, you can grab the build’s build number. So, use the BROWSE button to select the build drop artifact, but then replace the build number with the $(Build.BuildNumber) variable. For example, on my test project this resulted in the following file path: $(System.DefaultWorkingDirectory)/BuildDev/Packages/AXDeployableRuntime_7.0.4641.16233_$(Build.BuildNumber).zip If your AX build is not your primary artifact, you can use the artifact alias, like $(Build.MyAlias.BuildNumber). You can find this info in the release pipeline variables documentation.
  • LCS Asset Name and Description are optional, but I would recommend setting at least the name. For example, I set the following - LCS Asset Name: $(Release.ReleaseName); LCS Asset Description: Uploaded from Azure DevOps from build $(Build.BuildNumber)
  • If using a hosted agent, make sure to use the latest host (“Hosted VS2017”).

Happy uploading!!


Aug 14, 2018 - Query Functions: SysQueryRangeUtil Extensions

Filed under: #daxmusings #bizapps

Now that overlayering is a thing of the past, how does one add methods to SysQueryRangeUtil as explained in my old post from back in 2013?

Simple. Create your own class and name it whatever you want. Add a static method as described in the old post. The only difference is that you put an attribute on top of it indicating that it’s a query range utility method… So the method used in the old post would now just look like this:

[QueryRangeFunctionAttribute()]
public static str customerTest(int _choice = 1)
{
    AccountNum accountNum;
    
    switch(_choice)
    {
        case 1:
            accountNum = '1101';
            break;
        case 2:
            accountNum = '1102';
            break;
    }
    
    return accountNum;
}

If you’re looking for an example in the standard code, you can find the class “SysQueryRangeUtilDMF” in the AOT.
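
As a quick reminder of how such a range function is consumed (my sketch, not part of the original post), you wrap the call in parentheses in a query range value:

// Hedged usage sketch: the range expression is evaluated at runtime,
// so this range resolves to account '1102' via customerTest(2).
Query query = new Query();
QueryBuildDataSource custDs = query.addDataSource(tableNum(CustTable));
QueryBuildRange accountRange = custDs.addRange(fieldNum(CustTable, AccountNum));
accountRange.value('(customerTest(2))');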


Jun 20, 2018 - Moving to Extensions is Changing Your Mindset

Filed under: #daxmusings #bizapps

There’s a lot of change that has come with AX7 (Dynamics 365 for Finance and Operations), ranging from plentiful technical changes to the way implementations are being run. Any large change is usually disruptive to existing users of the product, and it takes time to learn the changes and get used to them. There have been plenty of changes in some major releases of AX, but one type of change has now come around that we haven’t had to deal with before: rethinking how we design customizations.

Now, I’m not talking about technology or code but rather the DESIGN of customizations. As I’ve stated before, the extension paradigms in AX7 weren’t invented for the sake of new technology, but as an enabler to get to quicker and easier updates and upgrades. What’s largely overlooked in discussions is that in many cases this requires not just using extensions in your code, but also designing your customization to allow for easy updates. As a result, some customizations cannot be ported from over-layering into extensions just like that. But that doesn’t mean you can’t give the user the easy button she was asking for. Now that version 8.0 of the Application is released, the era of NO over-layering is upon us. And to illustrate the points from above, I’d like to share an example of a customization I dealt with at a customer who came from AX2012 with plenty of existing code, much of it “intrusive” (as in, not easily upgradable because it changes existing code). When the platform packages were originally sealed in Platform Update 2, this particular customization posed a problem for this customer. Instead of treating this as just one customer asking for a specific hook point (delegates, etc.) needed for one customization, we went back to the drawing board to discuss the original requirement and see if we could reimplement it in a “softer” way.

Here’s the customization. The customer in question sometimes has large purchase orders, and department heads need to approve the specific lines that apply to their departments. They typically filter the lines to check what applies to them, and then just want to approve the whole lot with a click of a button as opposed to each line individually. They want to do this straight from the purchase order screen where they’re reviewing all the details. In AX2012, a customization was made to the workflow framework (red flag) that adds an option to the approval ribbon. Instead of just cancel/reject/approve, a new option was added: approve all lines. It would then approve all the lines in this P.O. assigned to the approving user. The framework is locked in platform, so this change could not stay and couldn’t be done with extensions… Although perhaps not as nice as the original, the solution was relatively simple. Instead of changing the framework itself, we made a customization on the approval step. When the user approves a line, we can prompt her that there are other lines and ask if she wants to approve them all. This has several clear advantages: no more framework change (which may have to be merged/changed on the next upgrade), plus we use the official public API to approve an item (as opposed to adding logic inside the approval framework itself). So when the framework changes its logic or adds new features, our customization will just take advantage of those automatically.

This goes back to a principle I always like to adhere to in AX customizations as much as possible: don’t change the process, but automate existing processes. When a request comes in, I like to ask the simple question: how would you do this manually, regardless of how convoluted that process would be? For example, the customer needs an extra financial transaction to happen somehow, somewhere in existing logic. Question: how would you do this manually? It makes them think about the process. Typically they would create some journal somewhere with references to the original transaction. OK, so can we now just automate creating/posting that journal and make sure it can be tied back? This as opposed to creating individual extra ledger transactions inside the original process, which would be very intrusive, error-prone, and would have to be reviewed with each upgrade to make sure we’re in line with the original code or any changes to the transaction frameworks.

I realize there will be examples where even this doesn’t apply. But I challenge you to ask the question whether that means the customization is really a good idea at all, and how it would be impacted if the Application Suite changed some of the details of the implementation. Customers upgrading from previous releases will face these dilemmas for sure, but now is the time to rethink the design, or perhaps even question the existence of the customization entirely. In other cases, some of these intrusive customizations are done to correct strange or incorrect behavior in the application. Most of us have been there: some XPO sitting on a local share which you can re-use at every customer. And therein lies another mindset change: please file support requests! Don’t ask for hookpoints so you can correct wrong behavior, rather ask Microsoft to fix the behavior! I realize and know from experience that the “by design” answer gets very tiring very quickly, but things are changing rapidly and this is one way everyone can improve the product and reduce friction. Your fellow AX users will thank you, and your organizations/customers will thank you on the next upgrade.


Jun 14, 2018 - Beating the Drum on Packages and Models

Filed under: #daxmusings #bizapps

Even though I work on the product side these days and am crazy busy, I keep a close eye on the community out there. I have RSS feeds going for many blogs, I scan LinkedIn, I watch our insider Yammer groups, and I even follow an RSS feed of the official Community forum showing me all posts for AX/365 (yeah this is a lot but I scan through the titles quickly for specific things).

It’s clear to me that more and more people are moving onto AX 7.x, and as partners and ISVs understand the new world of X++ v7.0, things run more smoothly, bug reports become more useful, etc. That said, there is obviously still an influx of new people (partners, ISVs and customers) who are just now starting to learn the new paradigms. So, it doesn’t hurt to go through some details again - this time I will focus on specific pain points I’ve seen people struggle with, especially when upgrading from AX2012.

#1 Rule of Thumb: A package is a mini-modelstore in and of itself, with its own set of layers and models!

I can’t stress this enough. The standard application code is split up into multiple packages. If you’re upgrading your over-layering code which is neatly contained in 1 model - it will get split up all over the place. And it will be split into models in existing packages, as it needs to over-layer something in that specific package. Currently the code upgrade will not create an entirely new package for you. Do not expect to see a package with your name on it, but rather expect a model in multiple packages with your name on it. If you have a 2012 model called “Joris” then you may end up with an “ApplicationSuite\Application Suite Joris” and a “GeneralLedger\General Ledger Joris” folder. You will NOT see a package “Joris” in the root. If you over-layer SalesTable, that over-layering can only be done in the package that contains SalesTable (Application Suite). (Note: you can EXTEND from any other package!) If you over-layer LedgerJournal, that over-layering can only be done in the package that contains LedgerJournal (GeneralLedger). Keep in mind over-layering of the standard code is completely disabled in application version 8.0 and upwards, so if you’re upgrading you may need to go to 7.3 first to “buy time” to move things to pure extensions.

#2 Without a successful compile of the FULL package, you have nothing

When you get a new development VM, there’s an application suite (package) present, and it has been compiled (each package is a unit of compilation - i.e. it translates to an assembly DLL for that package). Now suppose I add a new class with a compile error in it to that package:

  1. Build/rebuild from a project/solution is ALWAYS an incremental compile on the assembly. In the example, the application suite will be there but my new class won’t, as the incremental compile wasn’t able to compile and add my new code.
  2. Full build from the Dynamics 365 menu is ALWAYS a full assembly compile. In the example, the DLL will be completely recompiled. However, the Visual Studio tools keep a backup of the DLL and, in case of error, put it back. So in this case essentially nothing will happen - the DLL will be the exact same as before the compile, since the tooling will just put the backup copy back.
  3. Be mindful of removing objects, and understand that build/rebuild from a project is an incremental compile. When in doubt, do a full build of your package to ensure removed objects are gone and new objects are added!

When you’ve moved your code into its own package by extending instead of over-layering in existing packages, these compiles won’t take long. If you’re still over-layering, compiling something like Application Suite will obviously take quite a bit longer.

#3 Packages consume each other using references, and these are references to the BINARY (compiled) package

When Application Suite uses LedgerJournal, it can only do so because it has a reference to the GeneralLedger package where LedgerJournal is defined. But this is a reference like any normal .NET reference. Let’s say I add a completely new package and call it CodeCrib. I add a new class called BlogPost. Now, this class shows in the AOT in Visual Studio. But if I try to use this class in some Application Suite over-layering, it gives me a red squiggly line and it won’t compile. So, I need to add a reference… From the Dynamics 365 > Model Management > Model Parameters screen, select your model in the drop-down and click Next to go to the references page, then check the reference you need (to our CodeCrib package). This will add an official reference to the CodeCrib assembly (FYI, this is stored in the descriptor of the package needing the reference). Now, adding the reference resolves the squiggly line in the editor. But it still DOESN’T COMPILE! What gives? Well, when you compile, it’s all about the binaries! Since I haven’t compiled the CodeCrib package yet, the compiler can’t load the assembly when compiling AppSuite, which references it! So when upgrading and you run a full compile, you’ll get LOADS of errors. But keep in mind that given #2, a base package may not have compiled completely because of just 1 error. That could result in hundreds of errors in another package that depends on it! This is the way it works. Also note that these references have nothing to do with cross-references! Cross-references are used by the Visual Studio tooling, but the compiler NEVER USES cross-references for anything. In fact, it (optionally) creates the cross-reference, but it doesn’t consume it. So the best tactic when upgrading is to review package dependencies, and just start with the small packages that have little or no dependencies. Get those to compile, then work your way up. Application Suite is usually the LAST package you want to fix - since it won’t compile anyway until you get its dependencies to compile properly. Once you have your own extension package, that one will likely become the last package to fix/compile, since it will depend on all the other packages it’s trying to extend.
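
For the curious, this is roughly what that stored reference looks like in the model’s descriptor XML (a sketch from memory - element names are approximate, so verify against an actual descriptor file):

<ModuleReferences xmlns:d2p1="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
    <d2p1:string>ApplicationPlatform</d2p1:string>
    <d2p1:string>CodeCrib</d2p1:string>
</ModuleReferences>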

#4 Extensions and references - one-way traffic

The code upgrade does some work on moving obvious over-layering into extensions. This is ongoing work, and as LCS updates and platform updates roll out, you may notice it doing more (you’ll be using code upgrade moving between minor application upgrades - 7.1, 7.2, 7.3 - but not for platform updates). Moving to extensions ultimately implies the code should go into its own package. But moving something to an extension means you need a reference to use it, which may not be possible since references are one-way. Let’s say we have a new field on table SalesTable, and some over-layering code in the SalesTableType class that uses this field. Now, if we move our new field to a table extension of SalesTable and put that extension in a new standalone package we created, then the over-layering in SalesTableType won’t find it. To extend SalesTable in our package (let’s call the package CodeCrib), we have to reference AppSuite. For our over-layering in AppSuite to reference our table extension, we would need a reference to package CodeCrib. Unfortunately that would create a circular reference, which you can’t do. So there are two options: rework the over-layering in SalesTableType into an extension as well (which ultimately you need to do), or keep the SalesTable extension in the AppSuite package for now, and move it later when you’re ready to refactor the other over-layering. For this reason the code upgrade, when it does create extensions, will put them in the original package where the over-layering existed - just so that you don’t have reference issues to worry about while upgrading. You can then move a whole bunch of objects into their own package when you’re ready.

#5 Conflict resolution

The code upgrade creates a conflict project for over-layering code. Although this is handy, it’s important to remember that if you have code in MULTIPLE layers (let’s say you have SYS code over-layered in VAR, and that VAR code over-layered in CUS), you want to fix each layer independently and work your way up. As you rework a layer, you may move some code around. Note that doing so may cause NEW CONFLICTS in a higher layer. Because of this, I would encourage you to create your own conflict project for each layer once you’re done with a lower one, to make sure no new objects have conflicts due to some refactoring you may have done in a lower layer. You can do this from the Dynamics 365 menu, under Add-ins > Create project from conflicts. Bottom line: when refactoring code in a lower layer, YOU could be creating new conflicts in a higher layer. This is not a bad thing; just keep it in mind and make sure to check for any new conflicting objects when you have customizations in multiple layers. Note that if an object was already in a conflict project, any new conflicts in the same object will just show up when you open the designer for that object. You’re more interested in new objects. The add-in for creating a project can also be useful when you’re done upgrading, just to go through each custom model to make sure nothing was left behind and you’re really done.

#6 Binaries versus code

When you deploy a deployable package, it only contains binaries. There is no X++ code, only DLLs (and some other related artifacts). This means if you put a deployable package on a machine with Visual Studio, you will not see the code for the binaries you deployed. When you run the AOS, of course, you will run the binaries including your customizations. The larger issue I’ve seen is when you have over-layering code, let’s say on Application Suite. When you deploy a package, you are replacing the application suite’s compiled code (binaries). But the code on the machine is still standard application suite, without your custom code. So, after deploying, the AOS is running fine with your code. However, if you now open Visual Studio and run a compile on application suite, you are replacing the application suite binaries (i.e. replacing the binaries you deployed via the deployable package). Since you’re compiling the app suite without your custom code (since the code wasn’t deployed there), you are effectively removing your customizations.

For all the talk about layers here, keep in mind that the existing packages are sealed in application version 8.0 and above - meaning you won’t be allowed to over-layer any code in them and will be required to move to extensions. As such, when upgrading code you should weigh the effort of upgrading all the code as-is against going through ALL the code (conflicting or not) and moving it into extensions. You still have the option to upgrade to 7.3, which still has the application suite package overlayerable (I just invented a new word).

And finally, if you’re new to Dynamics 365 but have 2012 experience, I encourage you to read my Design, Compile, Run blog post series. It’s hard to believe those articles are already more than 2 years old at this point…


Jun 13, 2018 - Accidental Code Extensions

Filed under: #daxmusings #bizapps

Ok, I’ll preface this by saying I’m very much aware that the standard X++ code in platform and application has this issue too. Thanks for letting me know :-) But as the saying goes: do as I say - not as I do…

With that out of the way… Going back in time: the 7.0 X++ language supported extension methods like C# does. You create a new class with any name ending in _Extension. Then you can add a new public static method, where the first parameter is the type you’re extending. So for example:

static class MyExtensions_Extension
{
    public static void foo(PurchTable _purchTable)
    {
    }

    public static void bar(SalesLineType _salesLineType)
    {
    }
}

This class adds extension method foo() to the PurchTable table and method bar() to the SalesLineType class. This feature is now less used due to the [ExtensionOf()] “augmentation” class paradigm, where you can have instance methods, add member variables, access protected members, etc. However, the original extension method feature still exists, and in fact many people accidentally use it!

The issue happens when adding both extension methods and event handlers in the same class. In theory this sounds great - you have all your extensions in one place. For example:

static class MyPurchTableExtensions_Extension
{
    public static void foo(PurchTable _purchTable)
    {
    }

    [DataEventHandler(tableStr(PurchTable), DataEventType::Inserting)]
    public static void HandlePurchInserting(Common sender, DataEventArgs e)
    {
    }
}

This works as expected - a new method foo() is added to PurchTable, and you’re handling the inserting event. However, the unintended consequence is that you are ALSO adding a new method HandlePurchInserting(DataEventArgs e) to the Common type! The compiler doesn’t care that you have a handler attribute on that method. All it sees is that you’re adding a static method in an _Extension class, with one or more arguments.
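
To make the consequence concrete (my sketch, not from the original post), the class above makes the following compile on any table buffer:

class AccidentalExtensionDemo
{
    public static void demo(CustTable _custTable, DataEventArgs _e)
    {
        // Compiles fine: HandlePurchInserting became an extension method on
        // Common, so every table buffer now "has" it - almost certainly unintended.
        _custTable.HandlePurchInserting(_e);
    }
}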

So… How many methods have you accidentally added to Common, XppPrePostArgs, FormRun or FormControl? :-)


Nov 27, 2017 - Installing Hotfixes - Prepare vs Apply

Filed under: #daxmusings #bizapps

Quick note on installing hotfixes and common questions around the usage of the prepare option. As I usually like to do, let’s start with some background information.

I recommend a previous post I made on the topic of updates, upgrades and hotfixes.

When it comes to hotfixes, the platform is serviced as a whole. There aren’t many hotfixes for platform since it is updated monthly anyway, but when there is one, it’s cumulative and you install the whole thing at once as an update. Even though you get the source, you don’t worry about it going into source control or about building and compiling it: you already have the binaries in the update, ready to deploy to other non-dev environments, and you always get the whole source anyway - as opposed to deltas of just the changed objects. For application hotfixes, however, you can get individual fixes for a specific KB you want. This will change when the application sealing comes around next year, but for now you have the option. Since all VMs come with a specific version of the application, any hotfixes you install will have to be synchronized to any place where the application may be compiled - so all dev boxes, the build box, etc. And of course when I say “synchronized” I mean the code for these hotfixes should go into source control.

Now, this way of doing things is slightly awkward. You get a base version of the application on the VMs, and any changes are in source control. This is nice and efficient, because otherwise you’d have tens of thousands of source files in source control if you were to check in the whole application. But from a source control perspective, this is weird. When you apply a hotfix and check in the changed object, source control will consider this a “new” object (an “add”) since you just added it to source control. But of course you technically already had it to begin with. What this means is that if you were to roll back (undo) the add from source control - well, the opposite of an add is… delete. So if all dev boxes synchronized the hotfix (as an “add”) and you roll back, all dev boxes when synching will say: hmm, I added this file and now it’s gone, so let me delete it. As a result, you no longer have the object at all.

For this specific reason, the “prepare” option was added to the hotfix installation. Essentially, this will look at all objects in your hotfix, and add any of those objects to source control if they aren’t already there. That way, if you want to roll back the hotfix, you have at least checked in the original version of the object (from before the hotfix) and you can roll back to that. Otherwise, you’re rolling back to a delete (the opposite of the “add”). Now, a few additional pieces of information are important here:

  1. An object could already have been added to source control - because you did so manually, because you had a previous hotfix on it, etc. This means that the number of objects in a pending changeset for a prepare step could be LESS than the number of objects in the pending changeset of the hotfix itself. A hotfix could technically also be adding new objects, and new objects of course don’t exist before applying the hotfix, so they also wouldn’t be in the prepare step.
  2. After using the prepare option, you have to check in the pending changes. This is simple source control logic, but many people forget this. You want to check in the original versions of the objects prior to applying the hotfix. If you don’t check them in before hitting apply, you won’t have that base version! Simple as that. (In PU12 a dialog box was added after doing prepare that tells the developer to make sure to check in prior to hitting apply.)

Of course this also begs the question of how to recover if you did not prepare. Or if you did prepare but forgot to check-in before hitting apply. Truly, there is no easy way to recover from this. Ultimately, you need to get the original code from another VM that wasn’t affected. So, consider these points:

  • Did the hotfix go into source control and was it synchronized to other VMs as well? (i.e. this means removing the hotfix would affect multiple machines)
  • Was it just an issue on your dev box, for example you hit apply right after prepare but haven’t checked in anything (i.e. this means it’s just your VM that needs to be fixed).

If the scope is just your machine, consider just getting a new VM. If you did check in but no other machine has synchronized yet, you can roll back from source control (a synchronize on other machines will see the add+delete, which cancel out - VS will assume you didn’t have the file since you never synchronized the “add”, so it won’t try to delete it) and you just need to worry about your own machine. Options for recovering all come down to getting the code from an unaffected VM (either deploy one temporarily, get a VHD, or with some luck someone else may have a VM that hasn’t synced yet). The option then is to either manually copy the code, or potentially add these unaffected objects to source control from there (thus creating your own baseline copy, like prepare would have done).


Oct 31, 2017 - PSA: AX7 Build Failure on Generate Packages / Model Export

Filed under: #daxmusings #bizapps

More and more customers are seeing an error in the “Generate Packages” build step on their AX7 automated builds. The build shows as “Partially Succeeded” and the step that generates packages shows a problem. The following error is shown in the build summary:

Error generating deployable packages: Error: Unexpected exit code from model export: 1 At C:\DynamicsSDK\GeneratePackage.ps1:523 char:5

And going into the logs, the following details are shown:

  • Foundation Upgrade: Exporting model source…
  • Command: J:\AosService\PackagesLocalDirectory\Bin\ModelUtil.exe -export -metadatastorepath=”J:\AosService\PackagesLocalDirectory” -modelname=”Foundation Upgrade” -outputpath=”C:\DynamicsSDK\VSOAgent_work\1\Packages\Source” Model Foundation Upgrade was not found in the specified Metadata Store
  • Foundation Upgrade: Model export completed with exit code: 1
  • Exception thrown at C:\DynamicsSDK\GeneratePackage.ps1:216: throw “Error: Unexpected exit code from model export: $ModelUtilExitCode” System.Management.Automation.RuntimeException: Error: Unexpected exit code from model export: 1 Error generating deployable packages: Error: Unexpected exit code from model export: 1 At C:\DynamicsSDK\GeneratePackage.ps1:523 char:5

The automated build has an option (turned on by default) to not just create the deployable package but also produce an export of all the models it built. The way this works is that it looks at the descriptor files for any models in source control, and those models are the ones being exported (since those are also the models for which their containing packages are built). That said, it appears many customers add the whole Descriptor folder to source control when over-layering existing packages. In itself this doesn’t matter, since the only extra work this creates is the model exports; it doesn’t add any extra compile time, since any one of the descriptors would cause the whole package to recompile.

The problem lies in the “Foundation Upgrade” model. This is an old artifact and this model is actually “Disabled” (a flag in the descriptor file). As such, our metadata APIs ignore this model - you will notice even though the descriptor and model are in your packages folder, this model is not showing up in the AOT. So, when the build tries to export this model, the model utility asks the metadata API for the model, and the API says it doesn’t exist…

The fix of course is easy. The build scripts will be updated to double-check the flag and not try to export disabled models. For customers or partners running into this issue today, the answer is also easy - just remove the “foundation upgrade.xml” file from your source control. Note that this will trigger the deletion of that file on any dev boxes as well, but that is not an issue since this model isn’t loaded or used anyway. A conversation has also started to see if this model should be removed outright in the next application release.


Oct 21, 2017 - Cross-Post: Pointing Build Definitions To Specific VMs (agents)

Filed under: #daxmusings #bizapps

Dear readership,

Having a personal blog but also having opportunities to write on official blogs and documentation has its pros and cons. As I now have ownership of the AX Dev ALM blog on MSDN, I will have the challenge of deciding when to post where regarding ALM-related topics. I realize many people read this blog for some of the ALM-related topics, so I figured I would cross-post here. I encourage you to follow the AxDevALM blog directly, as it’s not my intention to keep cross-posting everything. Perhaps some PSA stuff on the official blog and in-depth or example stuff here - we’ll see.

For now, here’s the link: Pointing Build Definitions To Specific VMs (agents).

And yes, I realize we’re missing a lot of documentation on docs.microsoft.com regarding the build process. Posting there is a much lengthier and more stringent process, and blogging is just an easy, quick way to put something out there. The ALM blog is quick and painless, but it’s official. Here, I can say (almost) anything I want :-)

I am working on other blog posts (on both blogs) so keep an eye out for those!


 
