by Michael Ochs on June 20, 2017
In my previous post I left off with a couple of questions that hinted at this post. What I want to talk about here are the architectural decisions we made when developing our solution for migrating data between on-premises instances of Dynamics CRM/365 and the Microsoft cloud-hosted version.
First, let me tell you my 5 most important design characteristics of any solution built for Dynamics 365/CRM.
So, with that in mind, what we ended up building was a Dynamics solution that runs inside of Dynamics itself as a single-page HTML/JS application connected to Microsoft Azure, using plugins as a proxy for server-side functions.
Using plugins as a proxy isn't a new design pattern for developing on Dynamics. It's a tried-and-true way of getting around a lot of the limitations of client-side code. I presented a session at ExtremeCRM last year on that very topic. However, when developing Migration Dynamics, plugin-side code execution took on an even more important role, because running on the server is what lets us introduce transactions into the data migration.
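To make the proxy pattern concrete, here is a minimal sketch of the client side of that round trip. The action name `cob_ProxyRequest`, its parameters, and the Web API version are all hypothetical (and earlier on-premises versions such as 2011 UR12 would use the SOAP organization service rather than the Web API); the point is only that the client builds a request to a custom action, and the plugin registered on that action does the real server-side work.

```typescript
// Sketch of the plugin-as-proxy pattern: the client-side wizard never talks
// to Azure directly. It invokes a custom action on the organization service,
// and the plugin registered on that action forwards the call to the Azure
// web services. Names below are illustrative, not the product's actual API.

interface ProxyRequest {
  url: string;  // Web API endpoint for the unbound custom action
  body: string; // JSON payload the plugin will unpack and act on
}

function buildProxyRequest(orgUrl: string, operation: string, payload: object): ProxyRequest {
  return {
    // Unbound custom actions are invoked by POSTing to /api/data/v<ver>/<ActionName>
    url: `${orgUrl}/api/data/v8.2/cob_ProxyRequest`,
    body: JSON.stringify({ Operation: operation, Payload: JSON.stringify(payload) }),
  };
}

// The wizard would POST this with fetch/XMLHttpRequest and the plugin would
// return the Azure response, keeping credentials and logic off the client.
const req = buildProxyRequest("https://contoso.crm.dynamics.com", "QueueEntity", { entity: "invoice" });
```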
Below is a simple representation of the Migration Dynamics architecture.
The way Migration Dynamics works is to split the load of the migration process across 3 systems: the source Dynamics instance, Azure, and the destination instance.
Let's take a step back and consider a migration scenario. Say you're migrating invoices and their invoice details from a source Dynamics instance to Dynamics 365 online. The correct order is to migrate each invoice first and then its invoice details, to ensure the details are never orphaned. Now consider that any number of those invoices have been cancelled or completed. In Dynamics, a cancelled or completed invoice is considered read-only: it can't be updated, even through the APIs, without first being re-opened. When we migrate a closed invoice, it arrives in the destination marked read-only. Subsequently, when we migrate a child invoice detail, we have to re-open the invoice in order to associate the detail with it. If this process did not happen inside a single transaction, a failed invoice detail write would ultimately leave behind an open invoice with no details. Now imagine this happens a couple of hundred times across a bunch of different entities in the system. It would quickly become very difficult to say with any confidence that the data in the destination system reflected the source without rerunning the entire migration. For this and a number of other reasons, utilizing transactions in the target system when migrating data is ideal.
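The invoice scenario above can be sketched as a small planning function. This is not the product's actual code: in the Dynamics SDK the grouped writes would map to something like an `ExecuteTransactionRequest` (or an OData `$batch` changeset), and the state names here are illustrative. The sketch only models the ordered steps that must succeed or fail together.

```typescript
// Model of the destination-side write plan for one invoice detail whose
// parent invoice is closed (read-only) in the source. All steps are meant
// to execute inside one transaction: if the create fails, the re-open is
// rolled back and no open, detail-less invoice is left behind.

type Step =
  | { kind: "setState"; entity: "invoice"; id: string; state: "Active" | "Canceled" }
  | { kind: "create"; entity: "invoicedetail"; invoiceId: string };

function buildDetailTransaction(invoiceId: string, invoiceIsReadOnly: boolean): Step[] {
  const steps: Step[] = [];
  if (invoiceIsReadOnly) {
    // A cancelled/completed invoice must be re-opened before children can be added.
    steps.push({ kind: "setState", entity: "invoice", id: invoiceId, state: "Active" });
  }
  steps.push({ kind: "create", entity: "invoicedetail", invoiceId });
  if (invoiceIsReadOnly) {
    // Restore the original state so the destination matches the source.
    steps.push({ kind: "setState", entity: "invoice", id: invoiceId, state: "Canceled" });
  }
  return steps;
}
```

An open invoice produces a single-step plan, while a closed one produces the re-open / create / re-close sequence, which is exactly the case where transactional execution matters.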
But what about the queuing of the data to Azure? Is the intermediate step of moving all the data from the source to a holding area in Azure, and then on to the destination, really necessary? For a couple of reasons, it is. Here's why.
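One benefit of the holding area is decoupling: the source can push records out in batches at its own pace while the destination drains them transactionally at its own. A minimal sketch of that batching step, assuming a hypothetical batch size (each batch would become one message, or one blob referenced by a message, on an Azure queue, with source order preserved so parents arrive before children):

```typescript
// Split an ordered list of source records into fixed-size batches for
// queuing to the Azure holding area. Order is preserved across batches so
// that parent records are always queued before their children.

function chunkForQueue<T>(records: T[], batchSize = 100): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < records.length; i += batchSize) {
    batches.push(records.slice(i, i + batchSize));
  }
  return batches;
}
```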
At this point we've worked our way from the destination back to the source system and the process of pushing all of the data out to Azure. So, what does that look like, and how does it tie into our 5 important design characteristics? Well, this is the only piece the user actually sees, and it is therefore the most important when it comes to our design principles. The Migration Wizard, which runs inside of Dynamics as an HTML/JS web resource, consists of 5 steps.
The wizard includes step-by-step guidance and executes the majority of its functionality server-side, using web services in Azure to connect to the target system and pull information from it prior to migration. Since the application runs inside of Dynamics itself, you can not only monitor it from anywhere (including from your phone or tablet), it's also accessible to any Dynamics user who has permissions.
Of course, last but not least, it runs just fine on-premises in Dynamics 2011 UR12, 2013, 2015, and 2016, and in Dynamics 365 online, in the browser of your choosing.
If you are looking to download Migration Dynamics, you can find it on the Cobalt Website.