Data movement services

A data movement service enables an application to move data from a source database to a target database. The source and target databases can be homogeneous or heterogeneous, and they can reside on a single system or be distributed across multiple systems. In addition to moving data, a service can transform data and provide basic data life-cycle functionality as required by the application.

Data movement services are implemented by five major components:
  1. (Source) Capture component
  2. (Target) Apply component
  3. ETL (extract, transform, load) component
  4. Source Life Cycle component
  5. Target Life Cycle component
The Capture and Apply components work together to move data from the source database to the target database. The ETL component performs any necessary data transformation when the data structures in the source database differ from those in the target database. The following diagram illustrates the process flow within a data movement service:

Data movement service process flow
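Before walking through the steps in detail, the core idea of the Capture component, recording each change to a source table as a row in a work table, can be sketched as follows. This is an illustrative Python sketch, not product code; all names are assumptions for the example.

```python
# Illustrative sketch: the Capture component records each change made to a
# source table as a row in a capture work table. The class and table names
# here are assumptions, not product APIs.
class CaptureWorkTable:
    def __init__(self):
        self.changes = []

    def record(self, table, operation, row):
        """Record one insert, update, or delete made to a source table."""
        self.changes.append({"table": table, "op": operation, "row": row})

work = CaptureWorkTable()
work.record("SOURCE_METRICS", "INSERT", {"id": 1, "value": 42})
work.record("SOURCE_METRICS", "UPDATE", {"id": 1, "value": 43})
print(len(work.changes))  # 2 changes awaiting the Apply component
```

In the product, this recording is typically done inside the database (for example, by a trigger) rather than in application code, but the effect is the same: a growing log of pending changes for the Apply component to pick up.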

The data movement service flow follows these steps:
  1. Data in the source tables is stored and frequently updated, for example, by the Monitor server. The Capture component records any data changes made to the source tables in the Capture work tables.
  2. At predefined intervals, the Apply component identifies the changes in the Capture work tables and copies them to the Apply work tables.
  3. After changes have been recorded successfully, the ETL component is invoked.
  4. Using the data that is stored in the Apply work tables and predefined rules, the ETL component performs any necessary transformations. Data that has been successfully transformed is written to the target tables. Any incomplete or erroneous data is retained in another set of work tables for later processing.
  5. Upon completion of the ETL processing, the Target Life Cycle component is activated.
  6. Over time, large amounts of data can accumulate in the Apply work tables. Any data in those tables that has been successfully processed by the ETL component is removed by the Target Life Cycle component.
  7. Once the data has been successfully copied to the target database, it is no longer needed and can be removed from the Capture work tables. The Capture component prunes the work tables periodically to reduce resource consumption.
  8. Removal of data from the Capture work tables triggers the invocation of the Source Life Cycle component.
  9. Any data that has been successfully processed, has been marked as ready for deletion, and has exceeded the retention period defined by the Source Life Cycle policy is removed from the source database.
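The steps above can be sketched end to end in a few lines of Python. In this hedged sketch, in-memory lists stand in for the work tables and target tables, and the transformation rule is an illustrative assumption, not a product API.

```python
# Minimal end-to-end sketch of one data movement cycle. Lists stand in for
# the capture work tables, apply work tables, target tables, and error work
# tables; to_target is an illustrative ETL rule, not product code.
def to_target(change):
    """Illustrative ETL rule: reject negative values, double the rest."""
    if change["value"] < 0:
        raise ValueError("incomplete or erroneous data")
    return {"value": change["value"] * 2}

def run_cycle(capture_work, apply_work, target, error_work):
    apply_work.extend(capture_work)      # step 2: Apply copies changes
    handled = []
    for change in apply_work:            # steps 3-4: ETL transformation
        try:
            target.append(to_target(change))
        except ValueError:
            error_work.append(change)    # retained for later processing
        handled.append(change)
    # Steps 5-6: the Target Life Cycle component removes handled rows from
    # the apply work tables (rows routed to the error work table are also
    # treated as handled in this simplified sketch).
    apply_work[:] = [c for c in apply_work if c not in handled]
    # Steps 7-9: Capture prunes its work tables; the Source Life Cycle
    # component may then purge eligible source rows (not modeled here).
    capture_work.clear()

capture, applied, target, errors = [{"value": 10}, {"value": -1}], [], [], []
run_cycle(capture, applied, target, errors)
print(target, errors)  # [{'value': 20}] [{'value': -1}]
```

The sketch shows the essential division of labor: Apply stages changes, ETL separates clean rows from erroneous ones, and the life-cycle components are the only parts that delete anything.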
The Capture component and the Source Life Cycle component usually reside on the source system; the Apply, ETL, and Target Life Cycle components reside on the target system, as shown in the following figure:

Source and target databases

Within a data movement service, multiple instances of the components can be used, depending on the data structures in the source and target databases. The number of component instances is directly related to the number of business measures groups and the number of source and target tables within a business measures model. Each instance is uniquely identified. A component instance can be, for example, an executable program, a database stored procedure, or a database trigger.
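As a hedged illustration of how multiple uniquely identified instances might be derived from the business measures groups in a model, consider the following sketch; the group names and naming scheme are assumptions, not product conventions.

```python
# Illustrative sketch: one uniquely identified Capture instance per business
# measures group. Group names and the ID scheme are assumptions.
groups = ("ORDERS", "SHIPMENTS")
instances = {
    f"CAPTURE_{group}": {"role": "capture", "group": group}
    for group in groups
}
print(sorted(instances))  # ['CAPTURE_ORDERS', 'CAPTURE_SHIPMENTS']
```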
Two instances of data movement services are used in WebSphere Business Monitor:
The State to Runtime data movement service processes data that the Monitor server has stored in the State database and moves it into the Runtime database, where the dashboard can access it. The Runtime to Historical data movement service moves data from the Runtime database to the Historical database. The following diagram illustrates this movement:

Data movement services
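The chaining of the two service instances can be sketched as follows. In this illustrative Python sketch, the three databases are plain lists and the age-based retention rule is an assumption for the example, not the product's actual policy.

```python
# Hedged sketch of the two chained services: State -> Runtime feeds the
# dashboard's database, and Runtime -> Historical archives older rows.
# Databases are lists; the age-based eligibility rule is an assumption.
def move(source, target, eligible):
    """One service pass: copy eligible rows, then prune them from the source."""
    moved = [row for row in source if eligible(row)]
    target.extend(moved)
    source[:] = [row for row in source if row not in moved]

state = [{"metric": "orders", "age_days": 0}]
runtime = [{"metric": "orders", "age_days": 90}]
historical = []

move(state, runtime, lambda row: True)                       # State to Runtime
move(runtime, historical, lambda row: row["age_days"] > 30)  # Runtime to Historical
print(runtime)     # [{'metric': 'orders', 'age_days': 0}]
print(historical)  # [{'metric': 'orders', 'age_days': 90}]
```

Note that each service is an independent instance of the same pattern; only the source, target, and eligibility rule differ between the two.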

The following information describes the default configurations for these services and how to configure, start and stop, and monitor them.


Copyright IBM Corporation 2005, 2006. All rights reserved.