Documentum & Alfresco Migration Strategies – What Factors Should be Considered?

As we’ve mentioned in a previous blog post, we often meet with potential clients who are interested in using OpenMigrate to meet a migration need but haven’t yet identified specific migration requirements beyond the need to move documents from one location to another. It is important to identify requirements and an overall migration strategy as early as possible, as migrations provide a great opportunity to “clean up” content management systems. This “cleansing” can take the form of object model updates, metadata cleansing, deletion or archival of obsolete documents, or other business-related changes.

Continue reading

Supersized Documentum Migrations and Upgrades – Two Billion Documents and Counting

Two weeks ago we completed several of the largest Documentum migrations and upgrades we’ve ever seen. With short outage windows, we helped plan and support our client’s migration of their Documentum systems from a data center in the southeast US to the Rockies while simultaneously upgrading the repositories from Documentum 6.5 to 6.7 SPx. Altogether the repositories contained over 2 billion documents spread across several terabytes of file server storage and multiple Centera devices, as well as over 425,000 ActiveWizard forms!

Continue reading

Documentum Migration to Alfresco – Development Environment Comparison

Beyond the features and look and feel of ECM user interfaces discussed in our initial posts, we recognize the importance of a stable and robust development environment to facilitate the customization, support, and maintenance of an ECM platform. We also recognize that organizations that use Documentum today likely employ staff with Documentum-specific skill sets, and they are probably wondering how these skills might transfer into the Alfresco world. This post provides an overview of the commonalities and differences between the two development environments.

Continue reading

Documentum Migration to Alfresco – Part 1

2010 seems to be the year many of our Documentum clients are deciding (or considering whether) to migrate to Alfresco. In this post, we will try to address the reasons behind this trend. This will be the first of several posts on the subject, with follow-up posts providing more technical, application, and industry examples.

First a Disclaimer

Before diving into the topic, we should state that this post is not written as a “Why everyone should migrate from Documentum to Alfresco” piece, but rather as a description of why some clients are moving or considering the move. TSG was an active Documentum partner from 1996 through 2010 and is still very committed to the Documentum platform and our solutions running on both Documentum and Alfresco. We continue to be impressed with our engineering contacts at Documentum and their client support, through EMC World/Momentum and the User Groups. As presented below, the decision on Documentum versus Alfresco is fairly complex and involves consideration of technical and development factors, software and maintenance costs, as well as relationship issues. Every Documentum user needs to understand that Alfresco is not necessarily better than Documentum, just different.

Why are Companies considering migrating this year and not last year?

When asked, our Documentum clients typically give reasons that fall into the following categories: Continue reading

Migrating From Documentum with OpenMigrate: Best Practices

OpenMigrate, TSG’s open source migration framework, supports migration to and from a number of ECM repositories, including Documentum, Alfresco, FileNet, SharePoint, and many others.

We’ve designed OpenMigrate to perform several different types of migrations:

  • Between different instances of the same ECM product (e.g., during an upgrade)
  • Out of one ECM technology into a “neutral zone” (usually a file system for content, with a database or text files for metadata)
  • From a neutral zone into an ECM technology
  • Out of one ECM technology directly into another (e.g., Documentum to Alfresco)

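All four patterns share the same basic shape: read documents out of a source, transform the metadata along the way, and write them to a target. As a rough sketch only (hypothetical interfaces, not OpenMigrate’s actual API), a framework covering all four might look something like this:

```java
// Hypothetical sketch of a source -> transform -> target migration pipeline.
// These interfaces are illustrative only and are not OpenMigrate's actual API.
import java.io.InputStream;
import java.util.Iterator;
import java.util.Map;

class MigrationDocument {
    InputStream content;            // document content stream
    Map<String, Object> metadata;   // attribute name -> value
}

interface MigrationSource {
    // Streams documents out of an ECM repository, a file system "neutral zone", etc.
    Iterator<MigrationDocument> documents();
}

interface MigrationTransform {
    // Maps or cleanses metadata on the way through (object model changes, attribute renames, ...).
    MigrationDocument apply(MigrationDocument doc);
}

interface MigrationTarget {
    // Writes the document into the destination repository or neutral zone.
    void write(MigrationDocument doc);
}

class MigrationJob {
    void run(MigrationSource source, MigrationTransform transform, MigrationTarget target) {
        Iterator<MigrationDocument> docs = source.documents();
        while (docs.hasNext()) {
            target.write(transform.apply(docs.next()));
        }
    }
}
```

Swapping in different source and target implementations is what lets a single pipeline cover upgrades, exports to a neutral zone, imports, and direct repository-to-repository moves.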
While the general pattern of migrating content and metadata is consistent across all of these migration types, …

Continue reading

OpenMigrate: Bulk Load Interface Available for Download

In the content management world, users often require an “all-in-one” interface to help them assemble batches of related documents, tag them with metadata, and then import them into the underlying content management system. Traditional web-based interfaces, such as Webtop or Web Publisher from Documentum, or Explorer and Share from Alfresco, don’t offer this functionality out of the box.

Could TSG’s OpenMigrate open source migration framework fit this need?

OpenMigrate is typically used in a batch mode, especially in high-volume situations. And while the browser-based Administrator does allow for interactive configuration and execution of migrations, the actual tagging of documents with metadata is outside its scope.

To bridge this divide, TSG is pleased to announce the availability of the new OpenMigrate Bulk Load Interface for download.

OpenMigrate Bulk Load Interface

The OpenMigrate Bulk Load Interface provides a highly configurable user interface that enables users to easily manage documents and import them into a variety of targets. The interface is built on top of the Spring framework, making it easy to maintain, customize, and configure.
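To give a rough idea of what that Spring-based configurability looks like in practice, here is a hypothetical wiring sketch; the bean and class names are made up for illustration and are not the Bulk Load Interface’s actual configuration:

```java
// Hypothetical Spring wiring to illustrate configurable bulk loading.
// The bean and class names below are illustrative, not the product's actual configuration.
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BulkLoadConfig {

    // Illustrative placeholder types standing in for the real interface's components.
    public static class MetadataForm {
        final List<String> fields;
        MetadataForm(List<String> fields) { this.fields = fields; }
    }

    public static class ImportTarget {
        final String repositoryUrl;
        ImportTarget(String repositoryUrl) { this.repositoryUrl = repositoryUrl; }
    }

    @Bean
    public MetadataForm metadataForm() {
        // The metadata fields users tag documents with before import.
        return new MetadataForm(List.of("doc_type", "department", "effective_date"));
    }

    @Bean
    public ImportTarget importTarget() {
        // Swapping this bean points bulk loads at Documentum, Alfresco, a file system, etc.
        return new ImportTarget("http://localhost:8080/alfresco");
    }
}
```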

To learn more about this versatile new entry in the OpenMigrate toolset, please visit www.tsgrp.com.

Documentum Migrations – OpenMigrate Successfully Moves Half a Million Records

Today, a large-scale data migration was completed for one of TSG’s pharma clients using TSG’s OpenMigrate tool. Roughly 560,000 records and 175GB of data were migrated at an average pace of 2 documents per second, and the entire migration completed in less than two weeks. Records from the client’s previous Sybase database and FTP-based system were migrated to a new Documentum repository. This effort was done in conjunction with a new document management system developed for the client.

The records in the legacy system spanned 105 document types, each with its own set of attributes. This required a configuration for each document type in OpenMigrate, using its unique mapping schema feature. The migration ran in a Unix environment and produced detailed logs of the transfer. While processing over half a million records, OpenMigrate detected 11 documents that were missing content in the legacy system and required further attention.
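To give a feel for what those per-type mapping schemas involve, here is an illustrative sketch; the type and attribute names are hypothetical, and this is not OpenMigrate’s actual configuration format. In essence, each legacy document type carries its own source-to-target attribute map:

```java
// Illustrative only: one attribute-mapping schema per legacy document type.
// Type and attribute names are hypothetical; this is not OpenMigrate's configuration format.
import java.util.Map;

public class TypeMappings {
    // legacy document type -> (legacy attribute -> Documentum attribute)
    static final Map<String, Map<String, String>> MAPPINGS = Map.of(
        "batch_record", Map.of(
            "rec_no",    "object_name",
            "prod_code", "product_code",
            "appr_dt",   "approval_date"),
        "sop", Map.of(
            "sop_id", "object_name",
            "dept",   "department",
            "eff_dt", "effective_date")
        // ... one entry per legacy type, 105 in this migration
    );
}
```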

This migration utilized OpenMigrate 1.3 with no customizations made to the core product. Configuration of the attribute mappings and the locations of the source and target systems were the only changes needed to leverage OpenMigrate as a migration tool.

Documentum Upgrade – In-Place or Migration

For many Documentum customers, deciding how to upgrade a Documentum system often boils down to whether to upgrade in place (either on a clone or on the existing hardware) or migrate the documents to a new installation. This year, I worked with a client on a project to explore the differences between upgrading several Documentum systems in place versus migrating the documents straight to a new 6.5 installation. Many of the in-place upgrade complexities were due to the older database and OS.

  • Oracle needed to go from 9i to 10.2.0.3 as well as be converted to UTF-8
  • The Unix OS needed a significant upgrade, including the rack supporting the virtual partitions
  • The Documentum Content Server required several upgrade steps. It needed to go from 5.2.5 (some repositories were on 5.2) to 5.2.5 SP5, then 5.3 SP6, and finally to 6.5. I then did a separate upgrade to 6.5 SP2.

There were several project goals that could only be achieved with a migration strategy.

  • Combine repositories from a Windows installation and move them to a single UNIX installation
  • Reorganize the object model by flattening the object hierarchy
  • Undo custom folder configurations created many years ago

The technical complexities of upgrading in place from 5.2.5, and the need to merge Documentum repositories, led the client to pick a migration approach for the upgrade.
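When scoping this kind of object model flattening and repository merge, a simple inventory of document counts by type is a useful first step. The sketch below uses standard DFC query classes; the DQL itself is generic and will need to be adjusted for your own object model:

```java
// Count documents per type to scope an object model flattening or repository merge.
// Uses standard DFC query classes; adjust the DQL for your own object model.
import com.documentum.fc.client.DfQuery;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public class TypeInventory {
    public static void report(IDfSession session) throws DfException {
        IDfQuery query = new DfQuery();
        query.setDQL("select r_object_type, count(*) as doc_count "
                   + "from dm_document group by r_object_type");
        IDfCollection results = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (results.next()) {
                System.out.println(results.getString("r_object_type")
                        + ": " + results.getString("doc_count"));
            }
        } finally {
            results.close();
        }
    }
}
```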

Based on TSG’s upgrade experience with this client and others, we created an upgrade planning guide.

The planning guide is available here.

Please let me know your thoughts below.

Documentum Upgrade – High Volume Server – A Basic Understanding

Documentum High Volume Server (HVS) is a new product designed to cut database space usage in Documentum 6.5 by a third to one half, depending on the type of content. Given the significantly reduced database size, overall performance should increase. This year TSG evaluated HVS for a client as part of a Documentum upgrade. (See other thoughts in our Documentum Upgrade Planning Guide.)

HVS – When to use it

Basically, HVS was developed to efficiently store static, immutable content and metadata. Scanning/imaging is a good example, but COLD output and any other content and metadata that will never change make sense as well. Content stored using HVS should not need to be versioned, rendered, annotated, or changed; otherwise, HVS converts the object from a lightweight object back to a normal Documentum object and the benefits of HVS are lost. Examples of content that are ideal for HVS include reports, invoices, check images, documents archived for historic purposes and reference, and emails.

HVS – How it works

HVS reduces the size of the database by sharing security and common metadata amongst a set of lightweight objects. HVS can also partition the database to increase the rate at which content can be stored and retrieved. There are some limitations placed on the content to achieve these benefits. First, security is applied broadly to a lightweight object type. This results in all documents of a lightweight type being available to all users who can access the type, even though a user may only need access to a portion of the documents. In other words, HVS cannot support normal object-level ACL security, and accordingly security may need to be built into the application layer. The other limitation, as already mentioned, is that documents cannot be versioned or changed.
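Conceptually, the savings come from many lightweight objects pointing at a single shared parent that carries the ACL and the common metadata. The sketch below is a simplified illustration of that idea only, not Documentum’s actual schema or API:

```java
// Simplified illustration of the lightweight-object idea: many children share one
// parent carrying the ACL and common metadata. Not Documentum's actual schema or API.
import java.util.ArrayList;
import java.util.List;

class SharedParent {
    String aclName;          // one ACL applied to every child of this parent
    String retentionPolicy;  // common metadata stored once...
    String batchId;          // ...instead of once per document
}

class LightweightChild {
    SharedParent parent;     // reference to the shared parent
    String objectName;       // only the attributes unique to this document
    String contentId;
}

class LightweightExample {
    static List<LightweightChild> ingestBatch(SharedParent parent, List<String> names) {
        // A million children mean a million small rows plus one shared parent,
        // rather than a million full rows each repeating the ACL and common metadata.
        List<LightweightChild> children = new ArrayList<>();
        for (String name : names) {
            LightweightChild child = new LightweightChild();
            child.parent = parent;
            child.objectName = name;
            children.add(child);
        }
        return children;
    }
}
```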

If you need to make large volumes of content available in near real time, the rapid ingestion feature of HVS may be of interest. Using special HVS DFC functions, applications can load raw database tables that contain the metadata for your lightweight object types. This is very different from typical DFC applications, which work strictly through the Documentum object layer. To use rapid ingestion, a custom program is necessary (Documentum does not currently have any tools that support this, including Captiva), and the DBA will also need to partition the database tables. The partitioning allows the data to be loaded into “offline” Documentum tables. The tables are then swapped with empty placeholder tables, making the newly loaded documents available while the Content Server stays up and running.
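The table swap described above is essentially the database’s partition exchange feature. As a rough illustration of the underlying mechanism only (plain Oracle DDL issued over JDBC, with hypothetical table and partition names; this is not HVS’s actual implementation):

```java
// Rough illustration of the table-swap idea using Oracle partition exchange over JDBC.
// Table, partition, and connection details are hypothetical; this is not HVS's implementation.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class PartitionSwapSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:DCTM", "loader", "secret");
             Statement stmt = conn.createStatement()) {
            // Metadata rows are bulk-loaded into an offline staging table first,
            // then swapped into the live partitioned table in one dictionary operation,
            // so the Content Server never sees a partially loaded partition.
            stmt.execute("ALTER TABLE lwso_metadata "
                       + "EXCHANGE PARTITION p_batch_2010_06 "
                       + "WITH TABLE lwso_metadata_staging "
                       + "INCLUDING INDEXES WITHOUT VALIDATION");
        }
    }
}
```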

With a partitioned database, the HVS DFC also provides new ways to scope searches to particular database partitions. This can be handy if the system is very large and the user community is experiencing unacceptable metadata search times.

Where to Go Next

When considering HVS, users should keep the following points in mind:

  • Cost of HVS (will vary by installation)
  • Performance benefits versus normal database tuning
  • Ingestion program development, since this requires custom HVS DFC calls

In relation to the ingestion process, TSG has added HVS support to OpenMigrate to help clients ingest new content as well as move existing content to HVS. One benefit of this approach is that a single tool can be used for ongoing ingestion of new content while also supporting movement of existing content within the docbase (e.g., archived items).

With our client, the proof of concept went well, but the client didn’t quite realize up front that HVS required additional cost and licensing. In weighing the benefits against the cost, the database savings did not outweigh the additional licensing and Documentum support requirements, and the client did not move forward with HVS.