Documentum – EMC World 2013/Momentum – Day 2 – Migrations and Upgrades

Interesting topic for an 8:30 a.m. session after an open Vegas bar (and a pretty good party – kudos to IIG).

Migrations and Upgrades: Introducing EMA, the EMC Migration Appliance, presented by Chris Dyde and Mike Mohen.  This post presents our thoughts on the presentation.

Why the need for EMA?

As we mentioned during Rick’s keynote yesterday, Documentum is concerned that clients are not upgrading to 7.0 or the new D2 and xCP interfaces.  A significant issue is the need to migrate, given the change in object model from Webtop or xCP 1.0 to D2 and xCP 2.0.  Documentum Consulting’s response is to provide a migration tool, complete with a “counter” showing a live migration during EMC World.

Phasing Out/Marginalizing Webtop, DCM, CenterStage and eRoom

The presentation began by restating that Webtop, DCM, CenterStage and eRoom are slowly being replaced by new products, since these interfaces will receive no new features.  To move clients to the new interfaces, EMA will offer:

  • Extract, Transform & Load
  • Reporting
  • Validation
  • Cloning scripts that move a database from one platform to another, etc.
  • Sizing Spreadsheet
  • OnDemand Tools

The stat presented was 1.2 million documents per hour, with no specific detail on the type of documents, size of docs, renditions, hardware or other components.  EMA processes at the database level and can preserve document IDs and audit trails while transforming objects from old object models to new ones.

Major Use Cases

  • DCM to D2 (life sciences)
  • OnDemand and Cloning
  • Webtop to D2

EMA Components

  • EMA Cloner
  • EMA Migrate
  • EMA Morph (used for instances such as migration into D2)
  • EMA Replatform
  • EMA API – build plugins and extend the tool
  • EMA Plugins (File Share, etc.)

EMA Engine

  • Java-based
  • Uses Spring and Spring Batch
  • Web-based user interface
  • Uses mongoDB (NoSQL Database)

Migration Options

  • < 1 million objects – IIG suggests copying the content along with the data
  • > 1 million objects – copy all content over en masse at the end of the migration
  • Morph – not a migration itself but a separate utility, used when changing document types during migration (e.g., migrating to D2)

DCM to D2 (Morph Tool)

  • Without EMA, you would need to run Change Object 3–4 times for each object type, and you would lose the data associated with it
  • EMA bypasses this by going straight to the database

Product, Solution or Tool

EMA is a tool that ONLY the Documentum Professional Services team uses, and only for as long as the team is engaged.  Once the engagement is over, EMA leaves with the team and cannot be used for additional migrations.

From the presentation, EMA can transfer workflows and attached documents.

Presently, an EMA migration can only select a cabinet; it migrates all the folders under that cabinet and cannot migrate just certain document types.  A future release is intended to allow selection via a DQL statement.  Documents linked from other cabinets are currently not pulled.
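
To make the cabinet-only limitation concrete, here is our own rough sketch (not EMA’s API) of the kind of type- and cabinet-scoped selection a future DQL-based release might accept. The function name and parameters are hypothetical; only the DQL syntax itself is standard Documentum.

```python
# Sketch of building a scoped selection query of the kind a future
# DQL-capable EMA release might accept. The builder function is our
# own illustration; it is not part of EMA or OpenMigrate.

def build_selection_dql(doc_type, cabinet_path, extra_predicate=None):
    """Build a DQL statement selecting one document type under one cabinet."""
    dql = (f"SELECT r_object_id FROM {doc_type} "
           f"WHERE FOLDER('{cabinet_path}', DESCEND)")
    if extra_predicate:
        dql += f" AND {extra_predicate}"
    return dql

print(build_selection_dql("sop_document", "/Quality Cabinet",
                          "a_status = 'Approved'"))
```

The point of DQL-based selection is exactly this kind of scoping: one document type, one subtree, optionally filtered by attributes – none of which the cabinet-only approach allows today.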

Some Concerns – Is Speed All That Is Important?

We see some pretty large flaws in regard to productizing it, including:

  • Delta Migrations – Most clients prefer not to be down during the migration.  Our understanding from other presentations is that EMA is a one-shot deal – use it to migrate content once, with no delta pass for subsequent changes.
  • Database Approach – We are concerned that the migration is focused on speed rather than accuracy.  Not leveraging the API runs the risk that key components like lifecycles, ACLs, folder links, TBOs and a ton of other components might not be created (and not verified until the document is accessed).
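
To make the delta concern concrete, here is a minimal sketch (ours, not EMA’s) of the pass a repeatable migration tool performs after the bulk run: re-select only the objects modified since the last run and migrate just those. The object records below are hypothetical stand-ins for repository rows.

```python
from datetime import datetime

# Minimal sketch of a delta-migration pass: after a bulk migration,
# only objects modified since the last run need to move again.
# The records here are made-up stand-ins for repository rows.

def delta_candidates(objects, last_run):
    """Return objects changed after the previous migration run."""
    return [o for o in objects if o["r_modify_date"] > last_run]

source = [
    {"r_object_id": "09-unchanged", "r_modify_date": datetime(2013, 5, 1)},
    {"r_object_id": "09-edited",    "r_modify_date": datetime(2013, 5, 9)},
]
last_run = datetime(2013, 5, 5)
print([o["r_object_id"] for o in delta_candidates(source, last_run)])
```

A one-shot tool with no equivalent of this pass forces a content freeze for the entire migration window, which is exactly the downtime most clients want to avoid.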

Also, throughput varies greatly between migrations, which is why we always recommend running benchmark migrations in environments as similar as possible to those that will be used in the actual migration.  Below are some items that can impact throughput for OpenMigrate migrations and that we think would be concerns with EMA as well:

  • Number of OM threads
  • Average size of native content
  • Average size of renditions
  • Average number of document versions
  • Average number of document renditions
  • Inclusion of audit trail information or other related items
  • Source server performance capabilities
  • Target server performance capabilities
  • Physical distance between source and target systems
  • Complexity of migration logic
  • Existing target system TBOs (this can definitely have a big impact)
  • Applying overlays
  • Looking up additional information in database tables/repository
  • Writing additional information in database tables/repository
  • Complex metadata mappings
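
Given how many of these factors are in play, a headline number like 1.2 million documents per hour only becomes useful once you benchmark your own environment and extrapolate from that. A back-of-the-envelope sketch (our own, with made-up figures):

```python
# Back-of-the-envelope migration-duration estimate from a benchmark run.
# All figures are illustrative, not EMA or OpenMigrate measurements.

def estimated_hours(total_objects, benchmark_objects, benchmark_hours):
    """Extrapolate total duration from a benchmark sample's throughput."""
    throughput = benchmark_objects / benchmark_hours  # objects per hour
    return total_objects / throughput

# A 100k-object benchmark taking 2 hours implies ~50k objects/hour,
# so 39 million objects would take roughly 780 hours on the same setup.
print(round(estimated_hours(39_000_000, 100_000, 2)))
```

The arithmetic is trivial; the value is in forcing the benchmark to run on representative content, hardware and network distance before anyone commits to a cutover window.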


EMA is currently a “one and done” migration as part of an upgrade for D2.  We have been recommending that clients need a migration infrastructure that covers migration needs beyond just upgrades.  Some migration concerns that we are not sure EMA addresses include:

  • Ability to repeat the process for ongoing migration needs
  • Ability to apply business logic throughout the migration process
  • Incorrect assumption that all migrations are the same
  • Ability to address documents/data that failed to migrate
  • Ability to repeat the process for different data sources

Migration consultants spend most of their time understanding the current data and setting up the migration – defining transformations, locating exceptions, modifying migrations to handle exceptions, etc.  Faster migration time is always nice, but the time spent executing the migration is a small fraction of the overall project.  Typically, consultants not only review migrated data from the back end (the way technical IT folks usually verify a migration) but also review migrated data and content through the front-end user interface (Webtop, D2, etc.) in both the source and target systems.  Migration and transformation requirements are often missed when reviewing only from the back end.  Because the EMA solution is so focused on moving back-end database rows, we would recommend our clients confirm there is a detailed review of the migration results from the front-end interface.
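
A back-end reconciliation is still a useful first pass before that front-end review. A minimal sketch (our own, not any tool’s API) of comparing per-type object counts between source and target – in practice the counts would come from DQL against each docbase:

```python
from collections import Counter

# Minimal sketch of a back-end reconciliation check: compare per-type
# object counts between source and target repositories. The type lists
# are hypothetical; real counts would come from queries on each system.

def count_mismatches(source_types, target_types):
    """Return {type: (source_count, target_count)} where counts differ."""
    src, tgt = Counter(source_types), Counter(target_types)
    return {t: (src[t], tgt[t])
            for t in set(src) | set(tgt) if src[t] != tgt[t]}

source = ["dm_document"] * 3 + ["sop_document"] * 2
target = ["dm_document"] * 3 + ["sop_document"] * 1
print(count_mismatches(source, target))
```

Counts catch missing objects, but only the front-end review catches the subtler failures – wrong lifecycle states, broken renditions, mis-mapped attributes – which is why we recommend both.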

Lastly, as EMA is a tool that can only be used with professional services, clients should evaluate it against other migration solutions based on their trust and confidence in the consulting resources that will leverage the tool and be assigned to the project.

Let us know your thoughts below:

8 thoughts on “Documentum – EMC World 2013/Momentum – Day 2 – Migrations and Upgrades”

  1. Can TSG’s OpenMigrate handle the migration that we need?

    We plan on doing a 6.6 to 7.0 or 7.1 early next year. Can OpenMigrate handle what we need if we continue to keep the Webtop front-end?

    • Anhtuan,

      Some thoughts on the migration/upgrade.

      On the front end, you probably want to go to Webtop 6.7 SP2 (our understanding of the last Webtop release), as we mentioned in the Roadmap post.
      This will give Webtop the ability to work on D7 while still working against your existing 6.6 repository (that is our understanding).

      At a later point, when you are ready to change the back end, you can either upgrade in place or migrate to D7 or D7.1. Migrating makes sense only if you are changing hardware or want to change the object model; upgrade in place if everything is staying the same.

      Does that make sense?


      P.S. 7.0 has better memory and session management for Windows back-ends (like yours), with significant performance benefits. 7.1 will attempt to bring the same benefits to Linux and other back-ends.

    • Anhtuan –

      Of course it can! OpenMigrate does migrations at the API layer, so moving to 7.0 or 7.1 will not be difficult, especially if you are sticking with Webtop as the front end. A few of our clients are planning to do the same thing – eventually upgrading the back end to 7.1 while keeping Webtop 6.7. The next major move (D2, HPI, etc.) is a big decision and will take some significant analysis and prototyping.


  2. First of all, thanks for the great blog! With regards to the live migration of EMA at EMC World, here are some clarifications. 1.2 million documents per hour is not the correct figure! EMC migrated ~39 million objects in 54 hours, but only 1.2 million documents with content, ~2,500 folders and 300 users. It’s not as large a migration as communicated at EMC World. At this customer, EMC copied all objects (garbage in, garbage out) without clean-up of, e.g., old ACLs (dm_xxx), etc. We know that this is not the approach my customers are looking for!

Comments are closed.