How does it work?

The ETL (Extract, Transform, Load) migration process involves extracting data from various sources, transforming it into the required format, and loading it into a target system. This process ensures data is cleaned, enriched, and optimized for seamless integration and use in new environments.

1. Data extraction

Extract data from your legacy system, database, application, or files on a periodic basis; this extract serves as the anchor point for the migration. You can combine multiple sources further along in the ETL process. Extraction ensures that all necessary data is gathered without losing integrity or introducing errors.

Currently, our UI supports connections with SharePoint, SQL Databases, and File Repositories. The tool itself can connect to additional sources, which we plan to integrate into the UI in future releases.
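
As a simplified illustration, a periodic extraction job could look like the sketch below. The connection, table name, and staging folder are placeholders invented for the example and are not part of the product itself.

import csv
import datetime
import sqlite3  # stand-in for any DB-API driver, e.g. pyodbc for SQL Server

def extract_snapshot(connection, table, staging_dir):
    """Pull a full snapshot of one legacy table into a timestamped CSV."""
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    out_path = f"{staging_dir}/{table}_{timestamp}.csv"

    cursor = connection.cursor()
    cursor.execute(f"SELECT * FROM {table}")             # the anchor point for the migration
    columns = [col[0] for col in cursor.description]

    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(columns)                          # keep headers for the mapping step
        writer.writerows(cursor.fetchall())
    return out_path

# Example: run once per night against a copy of the legacy database.
# connection = sqlite3.connect("legacy_copy.db")
# extract_snapshot(connection, "customers", "./staging")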

2. Data mapping

The Legacy Mapping step converts extracted data into a format compatible with the target system before any cleansing or validation. This step lays the foundation for the data to be migrated. We can choose to migrate values one-to-one, assign default values, transform base values, or generate new values.

For large datasets, our auto-mapping feature suggests how fields map to the target system. Additionally, you can configure mappings in bulk using our Excel add-in.
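
For illustration, the four mapping styles could be expressed as simple rules, as in the hypothetical Python sketch below; the field names are invented and not tied to any particular target system.

from typing import Any, Callable

# Each target field is produced by one of the four mapping styles described above.
FIELD_MAPPINGS: list[tuple[str, Callable[[dict], Any]]] = [
    ("customer_name", lambda rec: rec["CUSTNAME"]),                # migrate one-to-one
    ("country_code",  lambda rec: rec.get("COUNTRY", "NL")),       # assign a default value
    ("phone",         lambda rec: rec["PHONE"].replace(" ", "")),  # transform the base value
    ("customer_id",   lambda rec: f"C-{rec['CUSTNO']:06d}"),       # generate a new value
]

def map_record(legacy_record: dict) -> dict:
    """Convert one extracted legacy record into the target-system layout."""
    return {target: rule(legacy_record) for target, rule in FIELD_MAPPINGS}

# Example:
# map_record({"CUSTNAME": "Acme BV", "PHONE": "06 12 34 56 78", "CUSTNO": 42})
# -> {"customer_name": "Acme BV", "country_code": "NL",
#     "phone": "0612345678", "customer_id": "C-000042"}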

3. Migration scoping

In this step, we apply business rules to determine whether a record should be migrated to the new system or left behind. These rules are gathered from various business departments and assigned a logical migration weight. Each record is evaluated against these rules, and the results are reported back to the business, detailing the scope and reasoning for each decision.

We typically begin with a 'Migrate All' or 'Migrate None' rule and then build out additional rules from there. By default, we include a rule to maintain the state from the legacy system, ensuring that if a record is deleted before migration, it is also removed from the master data set during this step.
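
To show how such rules can be evaluated, the hypothetical sketch below starts from a 'Migrate All' baseline and lets weighted exclusion rules take records out of scope, reporting the deciding rule back; the rule names, weights, and fields are invented for the example.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ScopingRule:
    name: str
    weight: int                                   # logical migration weight; higher wins
    in_scope: Callable[[dict], bool]

RULES = [
    ScopingRule("Exclude records deleted in legacy", 100, lambda rec: not rec.get("deleted")),
    ScopingRule("Finance: migrate open invoices only", 50, lambda rec: rec.get("status") == "open"),
]

def evaluate(record: dict) -> tuple[bool, str]:
    """Start from 'Migrate All'; the highest-weight exclusion decides and is reported back."""
    exclusions = [(rule.weight, rule.name) for rule in RULES if not rule.in_scope(record)]
    if exclusions:
        _, reason = max(exclusions)
        return False, reason
    return True, "Migrate All"

# Example:
# evaluate({"deleted": True, "status": "open"})
# -> (False, "Exclude records deleted in legacy")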

4. Transformation and validation

First, we gather the business and target system rules, which constrain the dataset and ensure that all mandatory requirements are met. This includes verifying that field options exist in the new system, mandatory fields are completed, structured formats are followed, and dependencies are met.

Next, we select and transform the data from our legacy table, applying these rules to determine if the data is ready for migration or if further data improvement is needed.

The list of validations can be quite extensive, but we can provide insights to help you get started with systems such as Dynamics 365, Navision, and SAP.
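
To make this concrete, a few such checks could look like the hypothetical sketch below; the allowed options, mandatory fields, and format pattern are examples only and would normally come from the target system's configuration.

import re

ALLOWED_PAYMENT_TERMS = {"NET30", "NET60", "PREPAID"}    # field options that exist in the new system
MANDATORY_FIELDS = ["customer_id", "customer_name", "country_code"]

def validate(record: dict) -> list[str]:
    """Return a list of validation issues; an empty list means the record is ready to migrate."""
    issues = []
    for field in MANDATORY_FIELDS:                                   # mandatory fields are completed
        if not record.get(field):
            issues.append(f"Mandatory field missing: {field}")
    if record.get("payment_terms") not in ALLOWED_PAYMENT_TERMS:     # option exists in the new system
        issues.append("Payment terms value is not available in the target system")
    vat = record.get("vat_number")
    if vat and not re.fullmatch(r"[A-Z]{2}[0-9A-Z]{8,12}", vat):     # structured format is followed
        issues.append("VAT number does not follow the expected format")
    return issues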

5. Load file generation

For each entity in scope, we generate various load files, each requiring different levels of attention. The first type provides data in a reviewable format before loading. The second type highlights potential load file issues, detailing necessary changes to improve data quality. The final type includes actual candidates ready for loading into the target environment.

As implementation teams progress through different environments—such as development, acceptance, pre-production, and production—there is often a need for varying load files. In this step, you can customize your load files to meet the specific requirements of the target environment at each stage of your project.
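
As a simplified illustration, the three file types could be produced per entity and per environment roughly as in the sketch below; the file naming and CSV layout are invented for the example, and the records are assumed to carry the issue list from the previous step.

import csv

def generate_load_files(entity: str, records: list[dict], environment: str) -> None:
    """Split validated records into review, issue, and load files for one target environment."""
    if not records:
        return
    review = records                                        # full data set in a reviewable format
    issues = [r for r in records if r.get("issues")]        # records that still need data improvement
    load   = [r for r in records if not r.get("issues")]    # candidates ready for loading

    for suffix, subset in [("review", review), ("issues", issues), ("load", load)]:
        path = f"{entity}_{environment}_{suffix}.csv"       # e.g. customers_acceptance_load.csv
        with open(path, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
            writer.writeheader()
            writer.writerows(subset)

# Example:
# generate_load_files("customers", validated_records, "acceptance")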

Curious about our tool? Let’s get to work!

Need a Data Migration Team, the expert tool for your existing team, or a blend of both?
