The two most common questions I’m asked at the beginning of projects I’m a part of are: Why doesn’t Cobalt automate the data migration process? And, can our staff migrate the data instead of Cobalt to save on the cost of the project? Fortunately, or unfortunately, the short answer to both of these questions is ‘no’.
Why doesn’t Cobalt give customers the flexibility to automate or run their data migration themselves? Generally, when an association changes customer relationship management (CRM) or association management software (AMS) systems and/or vendors, a large amount of data that is critically important to normal business operations has accumulated over years, or even decades, in the original system. If this business data is not brought into the new system in a coherent and functional way, the implementation will fail because the organization will not be able to resume normal business operations. This fact generally precludes handing off the data migration to a migration tool, or insourcing it to association staff, as part of our data migration best practices and procedures, which are explained below.
Extract. Transform. Load. These terms sound like they should be associated with mining operations or major industrial processes, but they are also the three major processes needed to perform a successful data migration. To back up a step, a data migration is the process of transferring data between any combination of data systems, data formats, or data storage types. It is an important part of any project in which an organization switches systems that hold business data.
The extraction portion of a data migration is the process that produces a copy of the original data in a format that is usable in the environment that will transform the data. Since Cobalt generally doesn’t receive access to the source data system, the extraction of the data is handled by the association or the IT staff of their hosting company. An important criterion of successful data extraction is that all shipped data is valid and readable in the system that will perform the transformation and loading processes. Since all of our CRM implementations run on Microsoft SQL Server, extracted data needs to be compatible with that database platform. Extraction from other Microsoft SQL Server based systems is as simple as taking a backup and uploading it to an FTP site. We’ve also performed migrations from data derived from Oracle database files, Access database files, or even from basic spreadsheet files. Cobalt is able to accommodate any uncorrupted source file configuration. A successful data extraction is crucial, as every data migration step that follows relies upon correct and functional source data.
Transformation, in the context of a data migration, is the application of a series of programmatic rules to the extracted source data to arrange it into a form suitable for loading into the destination system. Determining how data is transformed is where the bulk of the data migration work takes place for both the data migration developer and association staff. It can involve translating values, translating data types, deduping records, aggregating or deriving calculated values, generating new identifiers or records for new functionality, splitting fields into multiple columns or tables, and transposing data from fields to rows and vice versa, among many other possible functions and operations we’ve performed. An important part of transforming data is also determining which data does not need to be migrated into the new system, since it may be redundant, inconsistent or unnecessary. We utilize direct SQL code to enforce all the necessary rules to transform source data into the data for the new system, as that is the most efficient methodology to meet client needs.
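To make a few of those operations concrete, here is a minimal sketch of typical transformation rules: splitting one field into two columns, translating coded values, and deduping records. The field names and code mappings are hypothetical, not Cobalt’s actual schema, and in practice these rules are written as SQL rather than Python.

```python
# Hypothetical transformation sketch; field names and code values are
# illustrative only. Real migrations implement these rules in SQL.

MEMBER_TYPE_MAP = {"R": "Regular", "S": "Student", "L": "Lifetime"}

def transform_contact(row):
    """Apply transformation rules to one extracted source row."""
    # Split a single full-name field into two destination columns.
    first, _, last = row["full_name"].partition(" ")
    return {
        "first_name": first,
        "last_name": last or None,
        # Translate a source code into the destination's display value.
        "member_type": MEMBER_TYPE_MAP.get(row["member_type_code"], "Unknown"),
        # Normalize the email so duplicates can be detected reliably.
        "email": row["email"].strip().lower(),
    }

def dedupe_by_email(rows):
    """Keep the first record seen for each normalized email address."""
    seen, result = set(), []
    for row in rows:
        if row["email"] not in seen:
            seen.add(row["email"])
            result.append(row)
    return result

source = [
    {"full_name": "Ada Lovelace", "member_type_code": "R", "email": "Ada@example.org "},
    {"full_name": "Ada Lovelace", "member_type_code": "R", "email": "ada@example.org"},
]
contacts = dedupe_by_email([transform_contact(r) for r in source])
```

The same pattern scales from a handful of rules to the hundreds that a full migration typically requires.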
Loading data is the final phase of the data migration process: transformed records are inserted directly into the new CRM system, where they can be accessed and edited by users. Once data is loaded, the migration process is complete.
Copy Before Pasting
As part of the ETL process, transformed data is staged in staging tables before it is migrated to its final home data structures. A staging table is a copy of the data for a given destination table after completing all necessary transformations. We stage data for several reasons.
- Staged data provides an important step in verifying that transformed data will fit the CRM architecture: it forces the migration to apply all the data type constraints that will ultimately be enforced in the destination data architecture.
- Staged data is controlled entirely by the transformation rules of the migration. To present migrated data to clients, we add it to their user acceptance testing (UAT) environment. This environment is also where staff perform the bulk of their testing and training on the new CRM system. Having a staged version of the migrated data allows us to check that our transformations are correct irrespective of what testing users change in the destination data. At the same time, if migrated data needs to be removed and reimported, staged versions of that same data allow us to make the necessary references to remove it.
- Transforming data often removes the identity references from the original source data to make it conform to the destination database structure. An intermediate staged table can preserve the original identity references from the source data, allowing direct references between source and destination data, with staged data acting as an intermediary between the two. Maintaining a reference path is important during the migration process because it allows updates to our transformation rules to be applied more easily to destination data based on user feedback.
Staging data has been a reliable strategy to ensure that our transformations are working as intended to meet customer demands.
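The staging ideas above, preserving the source system’s identity references and using them to target removals, can be sketched as follows. This is a hedged illustration with hypothetical field names; the actual staging tables live in SQL Server, not Python.

```python
import uuid

# Hypothetical staging sketch. Each staging record keeps the source
# system's original identifier alongside the new destination identifier,
# so source, staged, and destination data can always be cross-referenced.

def stage(transformed_rows):
    """Build staging records that preserve the source identity reference."""
    staged = []
    for row in transformed_rows:
        staged.append({
            "destination_id": str(uuid.uuid4()),  # new identifier for the CRM
            "source_id": row["source_id"],        # original identity reference
            **{k: v for k, v in row.items() if k != "source_id"},
        })
    return staged

def ids_to_remove(staged, source_ids):
    """Use the preserved references to find destination records to delete."""
    return [s["destination_id"] for s in staged if s["source_id"] in source_ids]
```

Because the staged record carries both identifiers, a correction to a transformation rule can be traced from the source row to the exact destination record it affects.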
Why Send a Person Instead of a Machine?
In the data migration process outlined above, there are three considerations that favor having a developer handle data migration requirements rather than an automated tool. The first is that the process only has to be executed once, because use of the original source system is discontinued after a successful project implementation. Because the migration never runs a second time for a given organization, automating it yields no efficiency gains from repeated use.
The second consideration is that data migration automations can become difficult to run and troubleshoot if the source data does not completely conform to initial expectations. At the same time, every organization that we migrate onto our CRM solution uses their current system somewhat differently. These discrepancies from our previous implementations rule out a one-size-fits-all approach to migrations from any given source system. Specifically, we have performed over 250 data migrations from a wide range of CRM and AMS systems, including iMIS, Abila netFORUM, GoMembers, Protech, Euclid ClearVantage, Cvent, Rapattoni Magic and LAMPS, as well as migrations from a multitude of custom-made or less widespread AMS/CRM/database systems. Each of these implementations has improved the performance and experience of the data migration developers on staff, and each previous migration also provides a template for future migrations from that vendor’s CRM/AMS, which shortens the initial search for the most important client data to migrate. Previous data migration projects from these systems therefore make further migrations from the same systems faster, more thorough, and cheaper for our customers.
Lastly, developing and troubleshooting a data migration automation would likely take as long as, if not longer than, writing new data transformation code as needed on top of our migration template code, because the automation would have to be taken apart and recoded in response to new feedback, and then rerun in its entirety to ensure that the changes did not cause further issues. Executing templated programmatic data migration code skips the disassembly, reassembly, and retesting of the automation process.
The Case Against Insourcing
We do not allow customers with significant data requirements to perform the migration themselves. The do-it-yourself approach does not generally lend itself well to this part of an implementation project, for four reasons.
- Customers do not know the database architecture into which data will be migrated, and will likely not know how data needs to be transformed to load correctly.
- Many customers do not know the fundamental architecture of their current system, which compounds the problem of data transformation.
- Cobalt staff have more experience in transforming data from numerous data sources into CRM compatible records.
- In order to avoid security and supportability issues for our clients, Cobalt generally does not grant direct server or database engine access to customers.
However, if there is a relatively light amount of data to import into the system, customers have successfully added their data directly using the built-in data import tools available in Microsoft Dynamics CRM.
A further best practice is to run the data migration as a smaller, parallel track alongside the requirements gathering, development, and testing components of an implementation project. This allows feedback from the functional side to inform the data migration, as when new fields and record types need to be migrated in response to new functionality being added, and vice versa, as when a piece of data that must be migrated directs development of new fields or entities to house it. Data migrations into Cobalt products follow the formula that data corresponding to our base functionality is migrated first, and any data corresponding to custom functionality and database architecture is migrated later. The result is that the two sides of an implementation refer to and reinforce each other throughout the course of a project.
Subject Matter Experts Matter
While Cobalt may have experience migrating data, developing custom functionality and overseeing implementations, client staff participation is always vitally important to a successful data migration because we do not know your data. Association staff are incredibly important in ensuring that the transformed data correctly corresponds to what is in the source system. Our implementation teams cannot fill this role because we do not know which data is important and which is not. Just as in defining the functionality of the implementation, the customer needs to call the shots to ensure that the data migration is producing the expected results. If due diligence isn’t maintained by staff during the data migration, assumptions are made in their place, which usually produces incorrect results, or the data migration stops altogether, derailing the project. What this boils down to is: your membership staff knows your membership data; your accounting staff knows your accounting data; your certification staff knows your certification data; and Cobalt does not know your data. Staff need to be actively involved in ensuring that data transformation rules meet their needs and produce the correct results for the data that they own, and are ultimately responsible for.
Making a Map
The knowledge transfer necessary to determine how data is transformed occurs through establishing an initial data mapping between the source system and the new system. This initial data mapping is accomplished through meetings between subject matter experts and a Cobalt data migration developer to establish which data is important, what it represents, and how it will fit into the new system. Further regularly scheduled meetings between subject matter experts and the developer are convened to review how data has been already migrated into the UAT system, and make the necessary updates and adjustments to the data to meet customer expectations. During this process missing data structures are sometimes identified that inform the functional development of a custom project, providing an important feedback system for any missed functional requirements.
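A data mapping produced in those meetings can be as simple as a list of source fields, their destination fields, and the rule agreed with the subject matter expert. The sketch below uses hypothetical field names to illustrate the shape of such an artifact, including fields deliberately marked as not migrated.

```python
# Hypothetical data-mapping sketch; the source and destination field
# names are illustrative. A mapping like this is the artifact that comes
# out of the initial meetings between subject matter experts and the
# data migration developer.

CONTACT_MAPPING = [
    # (source field, destination field, rule agreed with the SME)
    ("CUST_ID",    "staging_source_id", "preserved for cross-reference"),
    ("FNAME",      "firstname",         "direct copy"),
    ("MEM_STATUS", "membershipstatus",  "translate codes A/I to Active/Inactive"),
    ("FAX",        None,                "not migrated - no longer used"),
]

def fields_to_migrate(mapping):
    """Return only the source fields that map to a destination field."""
    return [src for src, dest, _ in mapping if dest is not None]
```

Keeping the mapping in one reviewable document gives both the developer and the subject matter experts a shared reference to update as feedback comes in.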
Regularly scheduled data meetings provide two important functions to ensure the data migration portion of the project is successful: accountability, and efficient communication. Maintaining a series of scheduled deadlines before the production AMS launch is invaluable for ensuring a data migration is on track to meet the ultimate deadline of the project, as it gives immediate target dates for customer-generated feedback and for that feedback to be applied by Cobalt to the testing environments. Communicating directly through video conferencing helps efficiently articulate the needs of customers along with the realities of their data by allowing the data migration project team to easily view the data together, as it appears in SQL Server. Seeing data on this more basic level gives participating customer staff an understanding that their data may display differently through their AMS applications than how it actually exists. Having all data stakeholders meet directly also enables fast communication, which is necessary when the issues at stake demand granular decision making. If client feedback and development progress were communicated through email or another tool, the data migration portion of the project would take more time, and end up costing customers more as well. All data meetings are recapped with a summary of customer-provided feedback, and of applied feedback, to keep the project transparent and the data migration project team on track. Once all feedback has been addressed and the data in the new CRM meets all customer needs, the same exact code that produced the agreed data transformations is applied to the most recent data from the source AMS, allowing a smooth transition into the new AMS system.
Implementing a new system for any organization is a daunting task. In addition to the data migration, there’s documenting, building and testing new functional requirements, training customer staff on the new system, and deployment. In the grand scheme of an implementation project, the data migration is usually a smaller component of the work involved. However, by now you should have a better idea of how this important part of the project is managed and executed at Cobalt, and we hope it informs your implementation research about how your data needs will be met.