The Runtime to Historical data movement service moves data from
the Runtime database to the Historical database, where it remains until it
is explicitly removed by the database administrator (DBA). Data that has been
successfully moved into the Historical database is available for retrieval
and analysis by other WebSphere Business Monitor components.
The following default configuration applies to this data movement service:
- Changes in the Runtime database tables are continuously captured and recorded
in work tables. The Runtime database tables that are being monitored are the
target tables that have been populated by the State to Runtime data movement
service.
- Changes that have been recorded in those work tables are continuously
propagated by the Apply component and applied to work tables in the Historical
database. Those work tables are not accessible by any other WebSphere Business Monitor component
and are for internal use only.
- The Apply component synchronously invokes the ETL component every time
new data needs to be processed. Depending on its schedule, which is initially
set to every 24 hours, the ETL component either processes data that is stored
in the Apply work tables or remains inactive until it is scheduled to run.
Increasing the delay between scheduled runs increases the elapsed time between
when the State to Runtime data movement service stores data in the Runtime
database and when that data is published in the target tables in the Historical
database. Once the data is in the Historical database, it can be accessed by
other WebSphere Business Monitor components.
Note: Because the ETL component depends on being invoked by the Apply component
and on the configuration of the Apply component, it might not process new data
exactly every 24 hours (or whatever the current default delay is). The schedule
should instead be interpreted as "do not process new data for at least 23 hours
59 minutes after the last processing cycle has finished." A sketch at the end
of this section illustrates this interpretation.
- Any data in the Apply work tables that has been successfully processed
by the ETL component is removed by the Target Life Cycle component according
to its schedule. By default, this component runs every 24 hours. Increasing
the scheduled delay causes the work tables to grow larger. Decreasing the
delay can cause contention problems because multiple data service components
might try to update and access the work tables concurrently.
- Data that has been successfully moved from the Capture work tables to
the Apply work tables is automatically removed from the Capture work tables
by the Capture component every 5 minutes.
- Each time the Capture work tables are pruned, the Source Life Cycle component
is invoked. This component is also schedule-based. It removes only data from
the source tables in the Runtime database that has been marked ready for deletion
by the Monitor Server and has remained in the Runtime database for at least
24 hours. The default pruning interval is set to 5 minutes. If the Source Life
Cycle component pruning interval is set to a value that is lower than the Capture
component pruning interval, pruning is effectively governed by the Capture
component pruning interval.
For example, suppose the Capture component pruning interval for the work tables
is set to 5 minutes and the Source Life Cycle component schedule is set to
1 minute. Five minutes must pass before the Capture component can start its
pruning cycle, and because the Capture pruning routines are not activated during
that time, the Source Life Cycle component is not invoked either. After 5 minutes
have passed, data is removed from the Capture work tables, and the Source Life
Cycle component is invoked and removes eligible data from the source tables in
the Runtime database, as sketched below.
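The rule above amounts to taking the larger of the two intervals. The following
is a minimal sketch of that dependency in Python, with hypothetical function and
parameter names; it is illustrative only and is not WebSphere Business Monitor
code or configuration.

    # Illustrative sketch only; the names are hypothetical and do not
    # correspond to any WebSphere Business Monitor interface.

    def effective_source_prune_interval(capture_interval_min: float,
                                        source_lifecycle_interval_min: float) -> float:
        """The Source Life Cycle component runs only when the Capture component
        prunes its work tables, so source-table pruning can never happen more
        often than the Capture pruning interval allows."""
        return max(capture_interval_min, source_lifecycle_interval_min)

    # The example above: Capture prunes every 5 minutes, Source Life Cycle is
    # scheduled every 1 minute -> source tables are still pruned every 5 minutes.
    print(effective_source_prune_interval(5, 1))    # 5
    print(effective_source_prune_interval(5, 60))   # 60: a longer schedule is honored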
The default configuration can be changed.
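As a rough illustration of how these defaults fit together, the following Python
sketch models the schedules as configurable values and applies the interpretation
of the ETL schedule given in the note above. All names, types, and timestamps are
hypothetical; this is not WebSphere Business Monitor code or its configuration
interface.

    # Illustrative sketch only; all names and values are hypothetical.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class RuntimeToHistoricalDefaults:
        etl_min_delay: timedelta = timedelta(hours=23, minutes=59)   # per the note above
        target_lifecycle_interval: timedelta = timedelta(hours=24)   # prune Apply work tables
        capture_prune_interval: timedelta = timedelta(minutes=5)     # prune Capture work tables
        source_lifecycle_interval: timedelta = timedelta(minutes=5)  # prune Runtime source tables
        source_retention: timedelta = timedelta(hours=24)            # minimum age before source pruning

    def etl_should_process(now: datetime, last_cycle_finished: datetime,
                           cfg: RuntimeToHistoricalDefaults) -> bool:
        """Apply invokes the ETL component whenever new data arrives, but the
        ETL component processes it only if the configured delay has elapsed
        since the previous processing cycle finished."""
        return now - last_cycle_finished >= cfg.etl_min_delay

    cfg = RuntimeToHistoricalDefaults()
    last = datetime(2024, 1, 1, 8, 0)
    print(etl_should_process(datetime(2024, 1, 1, 20, 0), last, cfg))  # False: only 12 hours elapsed
    print(etl_should_process(datetime(2024, 1, 2, 8, 30), last, cfg))  # True: more than 23 h 59 min elapsed

Changing these values corresponds to the trade-offs described above: a larger
ETL delay lengthens the time before data is published in the Historical database,
and a larger Target Life Cycle interval lets the Apply work tables grow.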