This edition applies to version 6, release 0, modification 0 of IBM Rational Asset Analyzer and to all subsequent releases and modifications until otherwise indicated in new editions.
The Rational Asset Analyzer database migration utility is a Java-based tool for migrating the Rational Asset Analyzer database to the latest schema level. Use the migration utility to preserve your data in the database after installing a new level of Rational Asset Analyzer.
While the migration utility is intended to be release-independent, you should always use the newest version available from the product website in case fixes have been made to older database migrations. Updated versions of the migration utility can always perform older migrations.
There are two phases to complete the database migration using the migration utility.
The migration utility has the following features:
As you are planning your migration, you may have questions, concerns, or circumstances specific to your environment that you should resolve before implementing your migration plan. You can contact IBM Software Support for assistance. The IBM Rational Asset Analyzer forum is another method for asking questions and obtaining assistance. The forum is located at the following URL: http://www.ibm.com/developerworks/forums/forum.jspa?forumID=2046&cat=24
Before you download and start the migration process, verify that you have the required software and authority needed to run the migration utility.
java -fullversion
using dmhRunstats.rexx. If you are not sure that the statistics are current, use the RUNSTATS utility to update the statistics to reflect the current data.
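For example, on DB2 for Linux, UNIX, and Windows, statistics for a single table can be refreshed from a DB2 command window as in the sketch below; the DMHDB database alias and the DMH.DMH_FILE table are illustrative only. On z/OS, the dmhRunstats.rexx script described later in this document performs the equivalent work.

REM Refresh statistics for one table (repeat for other tables as needed)
db2 CONNECT TO DMHDB
db2 RUNSTATS ON TABLE DMH.DMH_FILE WITH DISTRIBUTION AND DETAILED INDEXES ALL
db2 CONNECT RESET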
Complete the following steps to download the migration utility and make the changes required to use the utility.
/config/*.cfg
/lib/*.jar
/META-INF/*.mf
dbutil.bat
SetupJdbc.bat
/migrate/*.xsd
/migrate/v000/ ... /migrate/v601/*.properties and *.xml
/verify/
/verify/v220/ ... /verify/v601/*.xml
dbutil.bat -getInfo
You should see output similar to the following:
JVM version detected: [#.#.#]
Url: jdbc:db2:<database-alias>
Version: ###
OS: DB2_LUW
The dbutil.bat command is used to create a migration plan that is then run to make the database changes. Changes are not made to the database until the migration plan is executed.
To create the migration plan, issue the following command from the migration utility directory:
dbutil.bat -createMigratePlan targetVersion
See Appendix A. dbutil.bat command reference for details on specifying the dbutil.bat command.
After the create migration plan process completes, an XML file is created for each schema level between the starting and target versions. For example, if the starting version is v552 and the target is v600, the migration plan will include plans for v554 and v600.
During the creation of a Migration Plan, the migration utility reads a version-specific /migrate/version/Migrate.xml file and creates an XML file for each version in the migration path.
The XML files are placed in a directory named migrate_plan. If a staging directory is specified in the config/User.cfg configuration file, the XML files are put in the staging directory.
The XML files are named Migrate_version.xml. Each migration XML file contains a list of SQL statements that will be executed by the migration utility.
Before continuing, review and update these files to handle any environment-specific requirements, such as modifying the SQL to handle especially large tables that might fill up the log file, or updating or adding INDEX statements if you have performed customization.
Details for the /migrate/version/Migrate.xml file are described below for informational purposes only. Do not modify these files unless instructed by IBM Software Support.
The /migrate/Migrate.xml files have the following format:
<TaskSequence version='version-number'...>
  <!-- Define vars used in this file -->
  <Vars>
    ...zero or more variable definitions...
  </Vars>
  ...zero or more task definitions...
</TaskSequence>
The following is an example of a Migrate.xml file followed by a corresponding Migrate_version.xml file.
<TaskSequence version='v311'...>
  <!-- Define vars used in this file -->
  <Vars>
    <Var name='dbName' value='$C{User.Database.name}'/>
    <Var name='tschema' value='$C{$User.Database.tschema}'/>
    <Var name='tsprefix' value='$C{$User.ZOS.tsprefix}'/>
    ...
  </Vars>
  <Task class='Sql' desc='Create a new table.'>
    CREATE TABLE ${tschema}.DMH_NEW_TABLE1 (
      COLUMN_1 INTEGER NOT NULL,
      COLUMN_2 INTEGER NOT NULL,
      PRIMARY KEY (COLUMN_1))
      %IF[${isZOS}] IN ${tschema}.${tsprefix}%COND[${singleTS}==true?TBS:NEW] %ENDIF
    ;
  </Task>
  <Task class='v311.ContainerUpdate' desc='...'/>
</TaskSequence>
Given the above file, the corresponding generated /migrate_plan/Migrate_v311.xml file for a z/OS database with distributed tables enabled might look like the following file:
<TaskSequence version='v311'...>
  <!-- Define vars used in this file -->
  <Vars>
    <Var name='dbName' value='DMHDB'/>
    <Var name='tschema' value='DMH'/>
    <Var name='tsprefix' value='DMHTS'/>
    ...
  </Vars>
  <Task step='v311.1' class='Sql' desc='Create a new table' autocommit='true'>
    CREATE TABLE DMH.DMH_NEW_TABLE1 (
      COLUMN_1 INTEGER NOT NULL,
      COLUMN_2 INTEGER NOT NULL,
      PRIMARY KEY (COLUMN_1))
      IN DMH.DMHTSNEW
    ;
  </Task>
  <Task step='v311.2' class='v311.ContainerUpdate' desc='...' autocommit='true'>
  </Task>
</TaskSequence>
The following transformations are done from the /migrate/Migrate.xml file to the /migrate_plan/Migrate_version.xml file:
Before you create the plan, you should have a migration plan strategy and understand limitations associated with the migration utility.
To ensure a smooth migration, ensure that you understand the entire process before creating and running a migration plan. Review the entire migration utility instructions contained in this document. There are considerations for pre-migration, migration, and post-migration. It is best to be prepared for each step in advance. The Database schema changes topic contains details related to product changes that affect data. It also contains details about the database changes for each release.
Because the migration utility does not create a backup copy of the Rational Asset Analyzer database, there is no overall rollback feature. There are several intermediate commits that allow for some recovery. In case there are problems, create a backup of your database before migrating so that you can recover your data.
As a strategy, you can run the entire migration plan at once or run it in steps. A simple migration, such as dropping obsolete objects or creating a small set of new objects, is appropriate for running the entire migration at once. However, for more complex migrations, the step approach is a better choice. The following are examples of a complex migration:
The step approach makes it easier to recover and restart the process when an error occurs. When you run the migration plan in a step-by-step fashion, you can monitor DB2 resources in the environment for each step and fix any necessary issues. When using this approach, never run a Task step out of sequence.
After the migration plan is created and verified, you can run it to make the changes to the database.
If you have not already made a backup copy of your database, create the backup now before proceeding.
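For example, on DB2 for Linux, UNIX, and Windows, an offline backup can be taken from a DB2 command window as in the sketch below; the DMHDB database alias and the target path are illustrative only. On DB2 for z/OS, follow your site's standard image copy or backup procedures.

REM Offline backup before running the migration plan (example alias and path)
db2 DEACTIVATE DATABASE DMHDB
db2 BACKUP DATABASE DMHDB TO C:\db2backup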
To start the migration plan, issue the following command from the migration utility directory:
dbutil.bat -runMigratePlan [task_step]
See Appendix A. dbutil.bat command reference for details on specifying the dbutil.bat command. After the migration utility starts, the current level of the database is determined based on the existence of specific tables, columns, indexes, and triggers.
The migration utility checks to see if there are any extra or missing tables, indexes, columns, or triggers and reports any discrepancies. Messages are written to the console and dbutil.log file located in the migration utility installation directory. The dbutil.log file contains messages from the last command executed. Messages from the next to last command execution are in the dbutil.log.2 file.
During the migration process, the database schema level is stored in the DMH_SYSTEM_ATTR table at the completion of each migrated schema level. Schema verification is performed at the start of most commands. The migration utility uses the stored schema level to verify that the database is in sync with the level indicated. Detected objects that are not recognized are flagged, and a message is written to the log file.
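If you want to inspect the stored schema level yourself, a query such as the following can be issued from a DB2 command window; this is a sketch that assumes the default DMH table schema and the SYS_ATTR_ID value 500 used in the examples later in this document.

db2 "SELECT * FROM DMH.DMH_SYSTEM_ATTR WHERE SYS_ATTR_ID = 500"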
If any errors occur while running the migration plan, contact IBM Software Support before proceeding.
After the migration plan implementation successfully completes, there are several tasks that must be performed before the database migration is complete.
REXX dmhRunBinds.rexx database_name schema_name
Review the output in the /log/dmhRunBinds.log file and confirm that the bind commands were successful with no errors.
REXX dmhRunstats.rexx database_name schema_name
Review the output in /log/dmhRunstats.log and confirm that the RUNSTATS commands completed successfully.
Depending on the schema level the database is being migrated to, there might be special circumstances that require manual effort, post-migration activity, or both before the database can be considered fully migrated and ready for user activity.
Remember that the migration path depends on the current version of the database and the level being migrated to. You should review the changes starting with the "from" version level and each interim level until you reach the final target version level. For example, if you are migrating from v552 to v600, you should review changes for v554 and v600.
The following notes highlight particular changes and user actions, when required, for specific migration levels. This information should be reviewed and understood prior to the start of any migration effort.
The z/OS Scan & Load Technology Preview for Rational Asset Analyzer V5.5 Fix Pack 1 required that a new PDS Resource Manager be manually inserted into the database if IMS Subsystem data, Assembler source data, or both were to be imported. When migrating to database schema v552, this row is removed if found, and the migration utility proceeds to insert new rows for the PDS, ASM, CICS, DB2, and IMS resource managers as required for Rational Asset Analyzer V5.5 Fix Pack 2.
Tables | Primary Key Index | Alternate Index |
---|---|---|
DMH_BP_MODEL | MODEL_ID | |
DMH_BP_PROCESS | MODEL_ID + PROCESS_ID | |
DMH_BR_POLICY | POLICY_ID | |
DMH_BR_POLICY_RS | POLICY_ID + RULE_SET_ID | |
DMH_BR_RULE | RULE_ID | |
DMH_BR_RULE_SET | RULE_SET_ID | |
DMH_BR_RS_RULE | RS_RULE_ID | |
DMH_BT_CATEGORY | CATEGORY_ID | |
DMH_BT_RELATD_TERM | TERM_ID_1 + TERM_ID_2 | |
DMH_BT_SYNONYM | TERM_ID_1 + TERM_ID_2 | |
DMH_BT_TERM | TERM_ID | |
DMH_FILE_LITERAL | LITERAL_ID + FILE_ID | RDILTF2 |
DMH_FILE_SYMBOL | SYMBOL_ID + FILE_ID | RDISTF2 |
The following are new component types in the v552 database schema:
Table | Constant String |
---|---|
DMH_QUEUE_TYPE | 18, Load import file |
DMH_RESOURCE_MNGR | |
DMH_TOOL | |
Table | Constant String |
---|---|
DMH_QUEUE_TYPE | |
DMH_SYSTEM_ATTR | 500, 552 |
Table | Constant String |
---|---|
DMH_LIBRARY_TYPE | |
To provide support for multiple named tables in SQL DECLARE CURSOR statements, the SQL_TABLE_ID has been dropped from the DMH_SQL_CURSOR table to eliminate the limitation of associating a single DB2 table with the cursor. The potential 1 - n DB2 tables are now recorded in a new relationship table, DMH_SQL_CURSOR_TAB. During migration, programs found to contain EXEC SQL DECLARE CURSOR statements will be queued to be analyzed again. After running the Analysis queue processor, the user interface is expected to reflect EXEC SQL FETCH statement references to the 1 - n DB2 tables comprising the cursor. Following migration, but before running the Analysis queue, these DB2 table references from cursors will not be visible in the user interface.
If the database was manually migrated to v554 to include an interim development driver to resolve the issue described above, you might receive the following warning message specific to the DMHTRCU1 trigger. If you are migrating to v600, you can ignore this warning. The DMHTRCU1 trigger will be replaced when the migration tasks for v600 are performed.
Comparison of current database schema with expected schema at this particular level showed 1 differences.
1: Trigger target tables on 'DMHTRCU1' was expected to be
'DMH_ACTIVITY_LOG-D,DMH_ACTUAL_PARM-D,DMH_ANALYSIS_QUEUE-D,DMH_CHANGE_FOCUS-D,DMH_CHANGE_IMPACT-D,DMH_LINK_SOURCE-D,DMH_LINK_TARGET-D,
DMH_CONTROL_XFER-D,DMH_CU_CHAR_ATTR-D,DMH_CU_DATA_USE-D,DMH_CU_INCLUDE-D,DMH_CU_LABEL-D,DMH_CU_NUM_ATTR-D,DMH_CU_STMT-D,
DMH_DATA_ELMT_ALAS-D,DMH_DATA_RECORD-D,DMH_DATA_STORE-D,DMH_DATA_STORE_DEA-D,DMH_ENTRY_POINT-D,DMH_FORMAL_PARM-D,DMH_LIT_FOR_DEA-D,
DMH_SQL_COLUMN-D,DMH_SQL_CURSOR-D,DMH_SQL_CURSOR_TAB-D,DMH_SQL_TABLE-D,DMH_SQL_PROC-D,DMH_RUN_UNIT_CNTNT-D,DMH_RUN_UNIT_CNTNT-U,DMH_RUN_UNIT_CNTNT-U'
but was actually
'DMH_ACTIVITY_LOG-D,DMH_ACTUAL_PARM-D,DMH_ANALYSIS_QUEUE-D,DMH_CHANGE_FOCUS-D,DMH_CHANGE_IMPACT-D,DMH_LINK_SOURCE-D,DMH_LINK_TARGET-D,
DMH_CONTROL_XFER-D,DMH_CU_CHAR_ATTR-D,DMH_CU_DATA_USE-D,DMH_CU_INCLUDE-D,DMH_CU_LABEL-D,DMH_CU_NUM_ATTR-D,DMH_CU_STMT-D,
DMH_DATA_ELMT_ALAS-D,DMH_DATA_RECORD-D,DMH_DATA_STORE-D,DMH_DATA_STORE_DEA-D,DMH_ENTRY_POINT-D,DMH_FORMAL_PARM-D,DMH_LIT_FOR_DEA-D,
DMH_SQL_COLUMN-D,DMH_SQL_CURSOR-D,DMH_SQL_TABLE-D,DMH_SQL_PROC-D,DMH_RUN_UNIT_CNTNT-D,DMH_RUN_UNIT_CNTNT-U,DMH_RUN_UNIT_CNTNT-U'.
These differences could indicate custom changes made to the database. These differences should be investigated before proceeding with the migration plan.
Proceed creating migration plan? Continue? {Y/N}
If you are not migrating to v600, you should drop and create the trigger again as described in the installation instructions provided along with the development driver.
Table | Primary Key Index | Alternate Index |
---|---|---|
DMH_SQL_CURSOR_TAB | COMP_UNIT_ID + SQL_CURSOR_ID + SQL_TABLE_ID |
Table | Constant String |
---|---|
DMH_SYSTEM_ATTR | 500, 554 |
In previous versions of Rational Asset Analyzer, metadata produced by the MVS and distributed scanners was populated in the database under different Sites, even if the two scanners were scanning the same directory structure. This resulted in two separate and distinct sets of Containers and Files. Beginning in v600, inventory collection for MVS and distributed assets has been merged into a single process that uses the same wizard. This approach produces a single set of Containers and Files, allows information about MVS and distributed inventory to be presented in a unified fashion, and significantly simplifies the database model. In addition, this approach eliminates previously required manual steps for distributed scanning and scanners.
The migration utility does not merge the two sets of Containers and Files. Therefore, all existing distributed asset metadata in the database will be dropped. The original scan roots will be detected and queued again for analysis. After running the Analysis queue processor, the J2EE asset data is expected to be in the same state as before migration.
Additional considerations:
For any Sites already in the inventory, the value of the HTTP_IP_NAME column of the DMH_SITE table should be reviewed to validate that it properly reflects the scanner daemon URL. In prior releases of Rational Asset Analyzer, the user interface allowed setting this column to any value a user wanted. As a result, there may be a non-blank value in the column that is not relevant in V600. You must update the HTTP_IP_NAME column(s) before starting the Analysis queue processor to re-scan the distributed assets. In some cases, this might be a simple update to change the port number from http://somehost:9080 to http://somehost:8080. In other cases, you may want to set the HTTP_IP_NAME to an empty string because the Site will not be used for scanning distributed assets. These changes can be accomplished with the SQL UPDATE statement, for example, UPDATE XXX.DMH_SITE SET HTTP_IP_NAME (see the sketch below). Refer to the DB2 SQL reference guides for further information about syntax, rules, and examples.
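For example, assuming the default DMH table schema, the following statements sketch the two cases described above; adjust the schema, URLs, and WHERE clauses to match your own Sites before running anything.

REM Correct the port number for a Site that will still be used for distributed scanning
db2 "UPDATE DMH.DMH_SITE SET HTTP_IP_NAME = 'http://somehost:8080' WHERE HTTP_IP_NAME = 'http://somehost:9080'"
REM Clear the value for a Site that will not be used for distributed scanning
db2 "UPDATE DMH.DMH_SITE SET HTTP_IP_NAME = '' WHERE HTTP_IP_NAME = 'http://oldhost:9080'"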
During migration to V600, all existing metadata and database objects for WebSphere Application Server assets are dropped.
The database objects for Business Rule Mining have changed since their original introduction in the Technology Preview. There is no migration path for the Technology Preview. Therefore, during migration to V600, all existing business rule objects in the database will be dropped.
As groundwork for future expansion and persistence of additional user preferences and settings, the DMH_USER_CHAR_ATTR and DMH_USER_SETTING tables have been dropped from the data model. During migration, user names from the DMH_USER_CHAR_ATTR table will be assigned an ID and inserted into the DMH_USER table. Corresponding user attributes stored in the DMH_USER_CHAR_ATTR table will be moved to the DMH_USER_ATTR table, which has a foreign key to the DMH_USER table.
Review the data model changes for V600. They are presented below in Summary of Changes. Following migration, inspect the Is valid column on the Custom query summary page, as this information indicates if the corresponding custom query contains valid SQL for the current Rational Asset Analyzer database.
Table | Primary Key index | Alternate indexes |
---|---|---|
DMH_AMF_CLASSPATH | DMHAR1S1 | |
DMH_APPL_CONTAINER | ||
DMH_APPL_FILE | ||
DMH_BR_CAT_RULE | ||
DMH_BRM_CATEGORY | ||
DMH_BT_CAT_TERM | ||
DMH_BT_FACT | FACT_ID | |
DMH_BYTECODE_ROOT | CONTAINER_ID | |
DMH_CONTAINER_TC | RDICTC2, RDICTC3 | |
DMH_EAR_EJBJARS | DMHEA2S1, DMHEA2S2 | |
DMH_EAR_JEECLIENTS | DMHEA3S1, DMHEA3S2 | |
DMH_EAR_WARS | DMHEA1S1, DMHEA2S2 | |
DMH_HIERARCHY_JBC | ||
DMH_JAR | CONTAINER_ID | |
DMH_LINKS_C_POS | DMHLK5S1 | |
DMH_REFS_JBC | DMHRF1S1, DMHRF1S2 | |
DMH_REFS_JBF | DMHRF2S1, DMHRF2S2 | |
DMH_REFS_JBM | DMHRF3S1, DMHRF3S2 | |
DMH_REFS_JNDI | DMHRF5S1, DMHRF5S2 | |
DMH_REFS_URL | DMHRF4S1, DMHRF4S2 | |
DMH_USER_ATTR | USER_ID + ATTR_TYPE_ID | |
DMH_WS_JAXRPC | FILE_ID |
Table | Code or ID, and Description string |
---|---|
DMH_ATTR_NAME | |
DMH_CMPNT_TYPE | 402, Business term property
DMH_LINK_TYPE | |
DMH_QUEUE_TYPE | |
DMH_USER | 0, Unknown |
Tables listed below have undergone modifications to their definitions, such as added, changed, deleted, and renamed columns or primary keys. The table itself may have been renamed, as indicated, and/or had indexes added, changed, and dropped.
Tables | New table name | New and changed indexes | Dropped indexes |
---|---|---|---|
DMH_ANALYSIS_QUEUE | Primary Key | ||
DMH_ARCHIVE | Primary Key, DMHARCS1 | ARSI001 | |
DMH_ARCH_MANIFEST | DMHARMP1 | ||
DMH_BR_RULE | RDIBRU1 | ||
DMH_BT_TERM | RDIBTT2, RDIBTT3 | ||
DMH_BYTECODE_CLASS | Primary Key, DMHCLFS1, DMHCLFS2, DMHCLFS3 | CLSI001 | |
DMH_CONTAINER | RDILIB2, RDILIB3, RDILIB6, RDILIB7 | ||
DMH_CPP | Primary Key, DMHCPPS1 | ||
DMH_CPP_DEF_MACROS | DMHCP4S1 | CPSI004 | |
DMH_CPP_GLBL_FUNCS | DMHCP6S1 | CPSI006 | |
DMH_CPP_GLBL_VARS | DMHCP5S1 | CPSI005 | |
DMH_CPP_INC_FILES | DMHCP3S1 | CPSI003 | |
DMH_CPP_MBMR_FUNCS | DMHCP2S1 | CPSI002 | |
DMH_CPP_MBMR_VARS | DMHCP1S1 | CPSI001 | |
DMH_CRAWL_ROOT | DMH_SCAN_ROOT | Primary Key | |
DMH_CRAWL_ROOT_DAT | DMH_SCAN_ROOT_DAT | DMHCR1S1 | CRSI002 |
DMH_DB2_PROC | |||
DMH_EAR | Primary Key, DMHEARS1 | EASI001, EASI002 | |
DMH_EJB | Primary Key, DMHEJBS1 | ||
DMH_EJBJAR | Primary Key, DMHEJJS1 | EBSI001, EBSI002 | |
DMH_EJB_ENTITY | Primary Key | ||
DMH_EJB_ENV_ENTRYS | DMHEJ1S1 | EJSI001 | |
DMH_EJB_LOCAL_REF | DMHEJ0S1 | EJPI003 | |
DMH_EJB_MSG_DRVN | Primary Key | ||
DMH_EJB_QUERY | DMHEJ4S1 | EJPI005 | |
DMH_EJB_REF_NHRTL | DMH_EJB_REF | DMHEJ2S1 | EJSI002 |
DMH_EJB_RELATION | DMHEJ8S1 | EJPI006 | |
DMH_EJB_RSRC_ENVS | DMHEJ9S1 | EJPI007 | |
DMH_EJB_RSRC_REFS | DMHEJ3S1 | EJSI003 | |
DMH_EJB_SESSION | Primary Key | ||
DMH_EVENT_LISTENER | Primary Key, DMHEVTS1 | EVLI001 | |
DMH_FIELD_BYTECODE | DMH_JBC_FIELD | Primary Key, DMHFLDS1, DMHFLDS2 | FISI001 |
DMH_FILE | RDIMEM4 | ||
DMH_FILTER | Primary Key, DMHFL1S1 | FLTI001 | |
DMH_FLTR_PARAMS | DMHFL2S1 | FLTI002 | |
DMH_HTML | Primary Key | HTSI001 | |
DMH_HTML_HEADINGS | DMHHT2S1 | HTSI003 | |
DMH_J2EECLIENT | DMH_JEECLIENT | Primary Key, DMHJ2ES1 | |
DMH_JAVA | Primary Key, DMHJAVS1 | ||
DMH_JAVA_EXCEPTION | DMHJV1S1 | JASI001 | |
DMH_JAVA_FIELDS | DMHJV2S1 | JASI002 | |
DMH_JAVA_IMPORTS | DMHJV3S1 | JASI003 | |
DMH_JAVA_METHODS | DMHJV4S1 | JASI004 | |
DMH_JAVA_MTHD_INVK | DMHJV5S1 | JASI005 | |
DMH_JC_ENV_ENTRYS | DMHJ21S1 | JCSI001 | |
DMH_JC_REF_NHRTL | DMH_JC_EJB_REF | DMHJ22S1 | JCSI002 |
DMH_JC_RSRC_REFS | DMHJ23S1 | JCSI003 | |
DMH_JSP | Primary Key | JSSI001 | |
DMH_JSP_BEANS | DMHJS1S1 | JSSI002 | |
DMH_JSP_HEADINGS | DMHJS4S1 | JSSI005 | |
DMH_JSP_JAVA_FLD_T | DMHJS8S1 | JSSI009 | |
DMH_JSP_JAVA_IMPRT | DMHJS7S1 | JSSI008 | |
DMH_JSP_METHOD_USE | DMHJS9S1 | JSSI010 | |
DMH_JSP_TAGLIBS | DMHJSAS1 | JSSI011 | |
DMH_LINKS_C | DMHLK2S1, DMHLK2S2 | LCSI001 | |
DMH_LINKS_H | DMHLK3S1, DMHLK3S2 | LHSI001 | |
DMH_LINKS_R | DMHLK4S1, DMHLK4S2 | LRSI001 | |
DMH_METHOD_BYTECDE | DMH_JBC_METHOD | Primary Key, DMHMETS1, DMHMETS2 | MESI001 |
DMH_PACKAGE | Primary Key, DMHPAQS1 | ||
DMH_RAR | Primary Key, DMHRR1S1 | ||
DMH_RAR_CFG_PROPS | DMHRR2S2 | RARI002 | |
DMH_SERVLET | Primary Key, DMHSETS1 | ||
DMH_SQL_CURSOR | RDISQU1 | ||
DMH_SRVLT_PARAMS | DMHSE1S1 | SESI001 | |
DMH_SRVLT_URL_PTRN | DMHSE2S1 | SESI002 | |
DMH_TAG | Primary Key, DMHTAGS1 | TGSI001 | |
DMH_TAG_ATTRS_NAME | DMH_TAG_ATTRS | DMHTG1S1 | TGSI002 |
DMH_TAG_VRBL | DMHTG2S1 | TGSI003 | |
DMH_TAGLIB | Primary Key | TLSI001 | |
DMH_TAGLIB_VLDTR | Primary Key, DMHVL1S1 | ||
DMH_VLDTR_PARAM | DMHVL2S1 | TLPI004 | |
DMH_W_EJB_RF_NHRTL | DMH_WAR_EJB_REF | DMHWR2S1 | WRSI004 |
DMH_WAR | Primary Key, DMHWARS1 | WRSI001, WRSI002 | |
DMH_WAR_ENV_ENTRYS | DMHWR1S1 | WRSI003 | |
DMH_WAR_LCL_EJBREF | DMHWR4S1 | WRSI006 | |
DMH_WAR_RSRC_REFS | DMHWR3S1 | WRSI005 | |
DMH_WS_ENDPOINT | Primary Key, DMHWSES1 | WSEI001, WSEI002 | |
DMH_WS_HANDLER | DMHWSHS1 | ||
DMH_WS_MTD_MAP | DMHWMMS1 | ||
DMH_WS_PKG_MAP | DMHWPMS1 | ||
DMH_WS_PORTCOM | DMHWSPS1 | WSPC001 | |
DMH_WS_SEI_MAP | Primary Key, DMHWSIS1 | WEII001 | |
DMH_WS_SRV_MAP | DMHWSMS1 | WRVI001 | |
DMH_WS_TYP_MAP | DMHWTMS1 | WTPM001 | |
DMH_WSDL | Primary Key | ||
DMH_WSDL_BIND | Primary Key, DMHWDBS1 | ||
DMH_WSDL_IMPORT | DMHWDIS1 | WDII001 | |
DMH_WSDL_OPR | DMHWDOD1 | ||
DMH_WSDL_PORT | Primary Key, DMHWDPS1 | WDPO001, WDPO002 | |
DMH_WSDL_PORTTYPE | Primary Key, DMHWDRS1 | ||
DMH_WSDL_SERVICE | Primary Key, DMHWDSS1 | WSRI001, WSRI002 | |
DMH_WSDL_TYPE | Primary Key, DMHWDYS1 | WTPI001 | |
DMH_XML | Primary Key | ||
DMH_XML_ATTLIST | DMHXM1S1 | XMSI001 | |
DMH_XML_ELEMENTS | DMHXM2S1 | XMSI002 |
View | Modifications |
---|---|
DMH_LIBRARY | Add column DMH_CONTAINER.HASH_ID |
DMH_MEMBER | Add column DMH_FILE.HASH_ID |
Table | Code or ID | String was | String is |
---|---|---|---|
DMH_CMPNT_TYPE | 400 | Category | Business category |
DMH_CMPNT_TYPE | 401 | Term | Business term |
DMH_CMPNT_TYPE | 430 | Model | Business process model |
DMH_CMPNT_TYPE | 431 | Process | Business activity |
DMH_CMPNT_TYPE | 480 | Rule | Business rule |
DMH_LANGUAGE | BIN | binary | Binary |
DMH_LANGUAGE | EMP? | no records | No records |
DMH_LANGUAGE | GONE | not found | Not found |
DMH_LANGUAGE | UNKN (was UNK?) | unknown | Unknown |
DMH_SYSTEM_ATTR | 500 | 554 | 600
Tables | Index(es) |
---|---|
DMH_ASSETS | Primary Key, ASSI001, ASSI002 |
DMH_CLSFIL_USDCLSS | CLSI002 |
DMH_CL_DPL_EAR | DEPI003 |
DMH_CL_DPL_EJB_M | DEPI004 |
DMH_CL_DPL_WEB_M | DEPI005 |
DMH_CL_W4AAPPSVR | WAPI002 |
DMH_CVSS | Primary Key |
DMH_DEJB_RESENVREF | DMSI004 |
DMH_DPL_EAR | Primary Key, DESI001 |
DMH_DPL_EAR_LOC | DEPI002 |
DMH_DPL_EJB | Primary Key |
DMH_DPL_EJB_EJBREF | DMSI002 |
DMH_DPL_EJB_MODULE | Primary Key |
DMH_DPL_EJB_RESREF | DMSI003 |
DMH_DPL_WEB_EJBREF | DWSI002 |
DMH_DPL_WEB_MODULE | Primary Key |
DMH_DPL_WEB_RESREF | DWSI003 |
DMH_DWEB_RESENVREF | DWSI004 |
DMH_D_EJB_MDL_IIL | DMSI001 |
DMH_D_WEB_MDL_IIL | DWSI001 |
DMH_FLTRMAP | Primary Key |
DMH_GENERIC | Primary Key |
DMH_HTML_FREFS | HTSI002 |
DMH_HTML_HYPER_LNK | HTSI004 |
DMH_JP_PATTERN | Primary Key |
DMH_JSP_FREFS | JSSI004 |
DMH_JSP_FWRD_FILES | JSSI003 |
DMH_JSP_HYPER_LNKS | JSSI006 |
DMH_JSP_INCL_FILES | JSSI007 |
DMH_M_BCD_CLS_RFS | MESI003 |
DMH_M_BCD_FLD_RFS | MESI004 |
DMH_M_BCD_MTHD_RFS | MESI005 |
DMH_PTRN_DETECTOR | Primary Key |
DMH_RE_DAT | RESI003 |
DMH_RE_JP | RESI004, RESI005 |
DMH_RUNTIME_ENV | Primary Key, RESI001, RESI002 |
DMH_SCOPE_SPACE | Primary Key, SSSI001, SSSI002 |
DMH_SCOPE_TO_ASSET | Primary Key, SASI001 |
DMH_SPACE | Primary Key |
DMH_SRVLMAP | Primary Key |
DMH_SYS_INFO | Primary Key |
DMH_TAGLIB_LSTNR | TLPI002 |
DMH_TEXT | Primary Key |
DMH_UNFOUND_TARGET | Primary Key |
DMH_URLMAP | Primary Key, UMSI001 |
DMH_USER_CHAR_ATTR | Primary Key |
DMH_USER_SETTING | Primary Key |
DMH_W4AAPPSVR | Primary Key |
DMH_W4AECELL | Primary Key |
DMH_W4AECLONE | Primary Key |
DMH_W4AEDATASOURCE | Primary Key |
DMH_W4AEJ2CPADPLOC | WALI001 |
DMH_W4AEJDBCDRIVER | Primary Key |
DMH_W4AEJDBCDVRLOC | WJDI001 |
DMH_W4AEJMSPROVIDR | Primary Key |
DMH_W4AEJMSPRVRLOC | WPSI001 |
DMH_W4AEMAILSESSON | Primary Key |
DMH_W4AENODE | Primary Key |
DMH_W4AESVRGROUP | Primary Key |
DMH_W4AESVRGROUP_M | WEPI002 |
DMH_W4AEURL | Primary Key |
DMH_W4AEURLPROVIDR | Primary Key |
DMH_W4AEVIRTUALHST | Primary Key |
DMH_W4AEVRTLHSTAL | WHPI002 |
DMH_W4AGENSVR | Primary Key |
DMH_W4AJ2CCONNFACT | Primary Key |
DMH_W4AJ2CRESADPTR | Primary Key |
DMH_W4AJMSCONNFACT | Primary Key |
DMH_W4AJMSDEST | Primary Key |
DMH_W4URLPRVDR_LOC | WIPI002 |
Table | Code or ID, and Description string |
---|---|
DMH_CMPNT_TYPE | |
DMH_LANGUAGE | |
DMH_QUEUE_TYPE | |
Before running the migration utility, you should increase the statement heap size on the database by making the following call: db2 update db cfg for database_name using stmtheap 60000. DB2 SYSADM or equivalent authority is required to update the database configuration.
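For example, from a DB2 command window, assuming the database alias is DMHDB:

db2 UPDATE DB CFG FOR DMHDB USING STMTHEAP 60000
REM Optionally confirm the new value
db2 GET DB CFG FOR DMHDB | findstr /i STMTHEAP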
The JCL analyzer has been modified to eliminate building unnecessary relationship links between Job step and Data set assets when a dummy data set is specified on a DD statement. As a result, during migration, any Batch jobs in the inventory found to be using DSNAME=NULLFILE and/or DSN=NULLFILE will be queued for analysis again. Post-migration, no row will exist for DMH_DATASET.DATASET_NAME = NULLFILE, and the DMH_DD_CONTAT.DATASET_ID will be zero for those job step DDs formerly pointing to NULLFILE. Post-migration user activity is required to run the Analysis queue processor to analyze the affected Batch jobs again.
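If you want to confirm this cleanup after the Analysis queue has been processed, a check along the following lines can be used; this sketch assumes the default DMH table schema and uses only the DATASET_NAME column named above.

db2 "SELECT COUNT(*) FROM DMH.DMH_DATASET WHERE DATASET_NAME = 'NULLFILE'"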
For cases where Run units were manually deleted from the inventory (for example, not through the user interface Action link), there arises a condition where rows in the DMH_RUN_UNIT_CNTNT table reference Run unit IDs that no longer exist in the inventory. This causes the Run units with missing source count to be incorrect and misleading. To resolve this condition, the Migration Utility will drop the existing component delete trigger (for example, DMHTRRN1) and re-create it to provide explicit delete behavior for the DMH_RUN_UNIT_CNTNT table. Also during migration, any rows found in the DMH_RUN_UNIT_CNTNT table that do not have a corresponding RUN_UNIT_ID row in the DMH_RUN_UNIT table will be removed.
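For illustration only, the orphaned rows described above could be located with a query such as the following; this sketch assumes the default DMH table schema and a RUN_UNIT_ID column in both tables. The migration utility performs the actual cleanup for you.

db2 "SELECT COUNT(*) FROM DMH.DMH_RUN_UNIT_CNTNT C WHERE NOT EXISTS (SELECT 1 FROM DMH.DMH_RUN_UNIT R WHERE R.RUN_UNIT_ID = C.RUN_UNIT_ID)"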
No user activity is required for this update.
For certain distributed asset types, such as enterprise application, web application, EJB-JAR, J2EE client, and J2EE RAR file, annotation text was sometimes incorrectly related to a Container asset rather than a File asset. During migration, the annotation text is updated so that it is correctly related to the File. If there is already annotation text of the same type related to the File, the one incorrectly related to the Container will be deleted.
No user activity is required for this update.
The DMH_PROJECT_TYPE table has been updated to collapse all the previous Impact analysis type rows into a single type (for example, 1100), and to introduce a new type for Business rule mining projects (for example, 1200). During migration, any Impact analysis projects found to be using PROJECT_TYPE_ID 1000, 1001, 1002, 1003, 1004, 1039 or 1040 will be updated to use PROJECT_TYPE_ID 1100.
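The effect of this update is roughly equivalent to a statement such as the following sketch, which assumes the project rows are kept in a DMH_PROJECT table under the default DMH schema; the migration plan makes the change for you, so this is shown for illustration only.

db2 "UPDATE DMH.DMH_PROJECT SET PROJECT_TYPE_ID = 1100 WHERE PROJECT_TYPE_ID IN (1000, 1001, 1002, 1003, 1004, 1039, 1040)"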
Post-migration user activity is required to manually update any Custom queries that reference the assetType="Project" to instead use assetType="IaProject".
A User details page has been added to RAA; this page allows activation of a Business rule mining (BRM) project. The User details page is accessed by clicking the down arrow next to the user name in the upper right of the browser page and then selecting View my profile.
Once a user has selected a BRM project, the BRM project will be indicated as the active project for the user. To enable this capability, a new attribute (ATTR_TYPE_ID = 726, DESCRIPTION=Business rules mining project to user relationship) has been inserted into the DMH_ATTR_NAME table for use in representing project-user relationships. For example, an entry in the DMH_USER_ATTR table, with ATTR_TYPE_ID = 726 and pointing to the user's DMH_USER row, is created to indicate the particular relationship.
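For example, to see which users currently have an active BRM project, a query such as the following can be used; it assumes the default DMH table schema and relies only on the USER_ID and ATTR_TYPE_ID columns described above.

db2 "SELECT USER_ID FROM DMH.DMH_USER_ATTR WHERE ATTR_TYPE_ID = 726"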
Once an active project has been established by a user, the user can add Business term properties to the BRM project by choosing Actions -> Include in active BRM project from the Business term property details page. The BRM project will then show all the Business rules that use these Business term properties in the Business rule mining project details page. These rules can then be exported to iLog JRules or JRules for COBOL from the Actions menu.
A new relationship table, DMH_BR_RULE_FACT, has been added to the data model for storing the relationship between a Business rule and the Business term properties used by that rule. This allows RAA to determine which Business rules belong to a given Business rule mining project. During migration, a status indicator row will be inserted into the DMH_SYSTEM_ATTR table to signal an automatic refresh of the DMH_BR_RULE_FACT table. The data refresh will occur during post-migration RAA start-up initialization.
No user activity is required for these updates.
During migration, all existing metadata and database objects for Java source scanning will be dropped. Data related to Java bytecode scanning is unaffected.
No user activity is required for this update.
A new table, DMH_ANALYSIS_PARMS, has been added to the data model. This table will augment the DMH_ANALYSIS_QUEUE.ANALYSIS_PARMS column when analysis parameters are too large to fit into the VARCHAR(80) column, such as Container scan load paths on type 12 Analysis queue records.
No user activity is required for this update.
Two new tables, DMH_ASSET_FILE and DMH_ASSET_CONT, have been added to the data model. These tables will be used to:
No user activity is required for this update.
A new type 22 Analysis queue record has been introduced to the data model as a placeholder for future functionality. The intent is to search for data abnormalities in the database, and report and/or repair them as deemed appropriate.
No user activity is required for this update.
Table | Table space | Indexes |
---|---|---|
DMH_ANALYSIS_PARMS | &dmhtsqul.ANP | RDIANP2 |
DMH_ASSET_CONT | &dmhtsqul.ASC | DMHASCS1 |
DMH_ASSET_FILE | &dmhtsqul.ASF | DMHASFS1 |
DMH_BR_RULE_FACT | &dmhtsqul.BRF | RDIBRF1, RDIBRF2 |
DMH_PROJECT | &dmhtsqul.PRJ | RDIPRJ1 |
DMH_PROJECT_FOCUS | &dmhtsqul.PRF | RDIPRF1 |
The following tables list new triggers and trigger packages:
Triggers | Triggering event | Table |
---|---|---|
DMHTRACD, DMHTRACI, DMHTRACU | After delete from, after insert into, after update of | DMH_ASSET_CONT |
DMHTRAFD, DMHTRAFI, DMHTRAFU | After delete from, after insert into, after update of | DMH_ASSET_FILE |
DMHT103D, DMHT103I, DMHT103U | After delete from, after insert into, after update of | DMH_JBC_FIELD |
DMHT104D, DMHT104I, DMHT104U | After delete from, after insert into, after update of | DMH_JBC_METHOD |
DMHT105D, DMHT105I, DMHT105U | After delete from, after insert into, after update of | DMH_CPP |
DMHT107D, DMHT107I, DMHT107U | After delete from, after insert into, after update of | DMH_EJB |
DMHT109D, DMHT109I, DMHT109U | After delete from, after insert into, after update of | DMH_FILTER |
DMHT116D, DMHT116I, DMHT116U | After delete from, after insert into, after update of | DMH_TAG |
DMHT119D, DMHT119I, DMHT119U | After delete from, after insert into, after update of | DMH_SERVLET |
DMHT120D, DMHT120I, DMHT120U | After delete from, after insert into, after update of | DMH_EVENT_LISTENER |
DMHT121D, DMHT121I, DMHT121U | After delete from, after insert into, after update of | DMH_TAGLIB_VLDTR |
DMHT124D, DMHT124I, DMHT124U | After delete from, after insert into, after update of | DMH_USERINPUTASSET |
DMHT127D, DMHT127I, DMHT127U | After delete from, after insert into, after update of | DMH_USERINPUTLINK |
DMHT130D, DMHT130I, DMHT130U | After delete from, after insert into, after update of | DMH_WSDL_SERVICE |
DMHT131D, DMHT131I, DMHT131U | After delete from, after insert into, after update of | DMH_WSDL_BIND |
DMHT132D, DMHT132I, DMHT132U | After delete from, after insert into, after update of | DMH_WSDL_PORTTYPE |
DMHT133D, DMHT133I, DMHT133U | After delete from, after insert into, after update of | DMH_WSDL_TYPE |
DMHT134D, DMHT134I, DMHT134U | After delete from, after insert into, after update of | DMH_WSDL_PORT |
DMHT140D, DMHT140I, DMHT140U | After delete from, after insert into, after update of | DMH_WS_SEI_MAP |
Trigger | Triggering event | Table |
---|---|---|
DMHASFC1 | After delete from | DMH_ASSET_CONT |
DMHASFD1 | After delete from | DMH_ASSET_FILE |
DMHEVTD1 | After delete from | DMH_EVENT_LISTENER |
DMHFLDD1 | After delete from | DMH_JBC_FIELD |
DMHMETD1 | After delete from | DMH_JBC_METHOD |
DMHTRQP1 | After delete from | DMH_ANALYSIS_QUEUE |
DMHWDPD1 | After delete from | DMH_WSDL_PORT |
DMHWDSD1 | After delete from | DMH_WSDL_SERVICE |
DMHWDYD1 | After delete from | DMH_WSDL_TYPE |
DMHTRPT1 | After delete from | DMH_PROJECT |
DMHTRPT2 | After delete from | DMH_PROJECT |
Table | ID and Description string |
---|---|
DMH_ATTR_NAME | 300, Impact level; 301, Scope indicator; 302, Exclude flag; 303, Range end line; 304, Range start line; 726, Business rules mining project to user relationship; 740, Initialize DMH_BR_RULE_FACT table |
DMH_CMPNT_TYPE | 58, Business rule mining project; 59, Project focus row; 71, User |
DMH_INCL_TYPE | 10, Import |
DMH_PROJECT_TYPE | 1200, Business Rule Mining |
DMH_QUEUE_TYPE | 22, Data integrity check |
Table 18 lists tables that have undergone modifications to their definitions, such as added, changed, and deleted columns. The table itself may have had indexes added or changed, and/or may now be declared as VOLATILE.
Table | New and modified columns | Dropped columns | New and modified indexes | Volatile |
---|---|---|---|---|
DMH_ATTR_NAME | NAME | | | |
DMH_BYTECODE_CLASS | | | | x |
DMH_BR_RULE | DESCRIPTION, DEFINITION | | | |
DMH_BT_TERM | DIMENSION, IMPLEMENTATION | | | |
DMH_FILE_SYMBOL | | | | x |
DMH_HIERARCHY_JBC | | | DMHHBCS1 | x |
DMH_JBC_FIELD | | | | x |
DMH_JBC_METHOD | | | | x |
DMH_LINK_DATA | LINK_TYPE_ID | | | |
DMH_LINK_DATA_L | LINK_TYPE_ID | | | |
DMH_LINKS_C | | | DMHLK2S3 | x |
DMH_LINKS_C_POS | P_CMPNT_TYPE_ID, P_ID | | DMHLK5S2 | x |
DMH_LINKS_H | | | DMHLK3S3 | x |
DMH_LINKS_R | | | DMHLK4S1, DMHLK4S2, DMHLK4S3, DMHLK4S4 | x |
DMH_LINKS_TC | APPLICATION_ID | | LTSI001, LTSI002, LTSI003, LTSI004 | x |
DMH_PROJECT_TYPE | PROJECT_TYPE_ID | | | |
DMH_REFS_JBC | | | | x |
DMH_REFS_JBF | | | | x |
DMH_REFS_JBM | | | | x |
DMH_SYMBOL | | | | x |
Table | ID | Was value | Is value |
---|---|---|---|
DMH_ATTR_NAME | 41 | CALL PLIDLI | CALL PLITDLI |
DMH_ATTR_NAME | 87 | USER SCHEMA | IMS MFS |
DMH_CMPNT_TYPE | 28 | Impact analysis | Impact analysis project |
DMH_PROJECT_TYPE | 1100 | Impact Analysis - | Impact Analysis |
DMH_QUEUE_TYPE | 17 | Delete file | Delete asset |
DMH_SYSTEM_ATTR | 500 | 600 | 601 |
Trigger | Triggering event | Added actions | Removed actions |
---|---|---|---|
DMHCPPD1 | After delete from DMH_CPP | Delete from DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHCRWD1 | After delete from DMH_SCAN_ROOT | Delete from DMH_ANALYSIS_QUEUE | |
DMHEJBD1 | After delete from DMH_EJB | Delete from DMH_CHANGE_FOCUS, DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHFL1D1 | After delete from DMH_FILTER | Delete from DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHJSPD1 | After delete from DMH_JSP | Delete from DMH_JSP_JAVA_IMPORT, DMH_JSP_JAVA_FLD_T, DMH_JSP_METHOD_USE | |
DMHSETD1 | After delete from DMH_SERVLET | Delete from DMH_CHANGE_FOCUS, DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHTAGD1 | After delete from DMH_TAG | Delete from DMH_CHANGE_FOCUS, DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHTRAP1 | After delete from DMH_APPLICATION | Delete from DMH_LINKS_C, DMH_LINKS_C_POS, DMH_LINKS_H, DMH_LINKS_R | |
DMHTRLI1 | After delete from DMH_CONTAINER | Delete from DMH_ARCHIVE, DMH_SCAN_ROOT | |
DMHTRRN1 | After delete from DMH_RUN_UNIT | Delete from DMH_RUN_UNIT_CNTNT | |
DMHVL1D1 | After delete from DMH_TAGLIB_VLDTR | Delete from DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHWSED1 | After delete from DMH_WS_ENDPOINT | Delete from DMH_CHANGE_IMPACT, DMH_LINK_SOURCE, DMH_LINK_TARGET | |
DMHTRME1 | After delete from DMH_FILE | Delete from DMH_ASSET_FILE | |
Table | Table space | Indexes |
---|---|---|
DMH_JAVA | &dmhtsqul.JAV | DMHJAVP1, DMHJAVS1 |
DMH_JAVA_EXCEPTION | &dmhtsqul.JV1 | DMHJV1S1 |
DMH_JAVA_FIELDS | &dmhtsqul.JV2 | DMHJV2S1 |
DMH_JAVA_IMPORTS | &dmhtsqul.JV3 | DMHJV3S1 |
DMH_JAVA_METHODS | &dmhtsqul.JV4 | DMHJV4S1 |
DMH_JAVA_MTHD_INVK | &dmhtsqul.JV5 | DMHJV5S1 |
DMH_JSP_JAVA_FLD_T | &dmhtsqul.JS7 | DMHJS7S1 |
DMH_JSP_JAVA_IMPRT | &dmhtsqul.JS8 | DMHJS8S1 |
DMH_JSP_METHOD_USE | &dmhtsqul.JS9 | DMHJS9S1 |
Auxiliary table | Base table | LOB table space | Index |
---|---|---|---|
JAAT002 | DMH_JAVA | JATS002 | JAUI002 |
Table | ID and Description string |
---|---|
DMH_CMPNT_TYPE | 113, Java source class |
Trigger | Table |
---|---|
DMHTR28D, DMHTR28I, DMHTR28U, DMHTR38D, DMHTR38I, DMHTR38U | DMH_CMPNT_TYPE |
The dbutil.bat command is used to create a migration plan, gather information, and run a migration plan. Run the dbutil.bat command from the migration utility directory. Ensure that the environment settings in the /config/User.cfg file are updated and accurate. To receive help for the command, run the command with no parameters.
To build the files that are used for input to the migration utility, use the following command:
dbutil.bat -createMigratePlan targetVersion
If the database is currently at v550 and the target version is v552, issue the following command:
dbutil.bat -createMigratePlan v552
and expect output similar to this:
JVM version detected: [#.#.#]
Version: 550
OS: DB2_LUW
Database version appears to be at level v550
----------------------------------------------------------------------
Verifying current schema of database as 'v550'
----------------------------------------------------------------------
Current database version verified.
----------------------------------------------------------------------
Creating migration plan from current version 'v550' to target version 'v552'.
----------------------------------------------------------------------
Creating migration plan for version 'v552'
Finished without errors.
The migration utility creates a migrate_plan directory, which contains an XML file for each schema level needed to migrate to the target version. For the above example, the following file is generated:
Migrate_v552.xml
If the database is at a particular schema level and you attempt to create a migration plan for the same schema level, no Migrate_version.xml file is created. For example, a database is currently at the v552 level and the following command is issued:
dbutil.bat -createMigratePlan v552
The following message is received, indicating that a migration plan was not created.
JVM version detected: [#.#.#]
Database os detected as [UDB]
JVM version detected: [#.#.#]
Version: 552
OS: DB2_LUW
Database version appears to be at level v552.
Target version 'v552' is less than or equal to the current database version 'v552'. No plan created.
Finished without errors.
Use the dbutil.bat -getInfo command to return information about the database.
The dbutil.bat -getInfo command has the following format:
dbutil.bat -getInfo [ -args ]
The optional -args values override what is defined in the User.cfg file. If you want to get information about a different Rational Asset Analyzer database without having to edit User.cfg, use the -args parameters to specify the database connection information.
The values for the -args parameters are as follows:
Below is sample output obtained from issuing the dbutil.bat -getInfo command.
JVM version detected: [#.#.#]
Url: jdbc:db2:<database-alias>
Version: ###
OS: DB2_LUW
Use the dbutil.bat -runMigratePlan command to perform the database migration based on a migration plan. The dbutil.bat -runMigratePlan command has the following format:
dbutil.bat -runMigratePlan [ task_steps ]
At the completion of a -runMigratePlan run (in this case, migrating to v552), you should see the following output immediately after the last echoed SQL statement.
----------------------------------------------------------------------
Verifying current schema of database as 'v552'
----------------------------------------------------------------------
Current database version verified.
Finished without errors.
The following examples show how various specifications of the command can be used to perform step-by-step migration.
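The exact task_step values depend on your generated plan; the trailing hyphen form shown in the failure-recovery examples below runs from the named step to the end of the plan. For example (the step identifier v552.3 is illustrative only):

REM Run the entire migration plan in one pass
dbutil.bat -runMigratePlan
REM Run the plan starting at a specific Task step and continuing to the end
dbutil.bat -runMigratePlan v552.3-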
Use the dbutil.bat -verifySchema command to verify that a particular Rational Asset Analyzer database contains the expected tables, columns, indexes, and triggers for a given schema level. A list of differences, if any, is displayed and identifies the missing or extra database objects (tables, columns, indexes, and triggers). It is recommended that you investigate and resolve the reported differences.
The dbutil.bat -verifySchema command has the following format:
dbutil.bat -verifySchema version
The following example shows the output for verifying schema for the v552 level. Issuing the dbutil.bat -verifySchema v552 command produced the following output for a database at the v552 schema level:
JVM version detected: [#.#.#]
Verifying database with the following properties:
  targetVersion = 'v552'
  dbUrl = 'jdbc:db2:<database-alias>'
  user = 'userid'
  pw = 'password'
  os = 'WINDOWS'
  tSchema = 'your_schema_name'
  dbDriver = 'COM.ibm.db2.jdbc.app.DB2Driver'
  isDistEnabled = 'true'
Baseline comparison showed 0 differences.
Finished without errors.
The following example shows the output from issuing the same command for a database where there are two differences between the current database and the migration level. A message is displayed for each difference. All differences should be resolved before continuing with the migration process.
JVM version detected: [#.#.#]
Verifying database with the following properties:
  targetVersion = 'v552'
  dbUrl = 'jdbc:db2:<database-alias>'
  user = 'userid'
  pw = 'password'
  os = 'WINDOWS'
  tSchema = 'your_schema_name'
  dbDriver = 'COM.ibm.db2.jdbc.app.DB2Driver'
  isDistEnabled = 'true'
Baseline comparison showed 2 differences.
1: Database is missing index 'ASSI001' in table 'DMH_ASSETS'
2: Database is missing trigger 'DMHTRSB7' in table 'DMH_DB2_PROC'
An SQL warning can be produced if there is no data in a table for a given task, such as an empty table or zero rows processed. Generally, these types of warnings can be ignored.
The following examples show typical warnings:
DELETE of a row that does not exist:
DELETE FROM DMH.DMH_SYSTEM_ATTR WHERE SYS_ATTR_ID = 500
STATUS: WARNING (SQL-100)
INSERT from/to empty tables:
INSERT INTO DMH.DMH_FLD_BTCD_NEW (ASSET_ID, PHYSICAL_ID, ACCESS, FREE_TEXT, NAME, FULL_NAME, TYPE)
  SELECT ASSET_ID, PHYSICAL_ID, ACCESS, CAST(FREE_TEXT as VARCHAR(500)), NAME, FULL_NAME, TYPE
  FROM DMH.DMH_FIELD_BYTECODE
STATUS: WARNING
If a Task step in the migration fails, you are prompted to ignore the problem and continue, or to stop the migration utility.
If a Task step fails, do the following steps before continuing with the migration.
Then, restart the migration utility at the next migration step. For example, if the failure occurred in Task v312.10, start the migration at Task v312.11 by issuing the following command: dbutil.bat -runMigratePlan v312.11-.
Save the file and start the migration utility at the Task step that failed. For example, if the failure occurred in Task v312.10, start the migration at Task v312.10 by issuing the following command: dbutil.bat -runMigratePlan v312.10-.
The following is an example of a Task step failure and resolution. The dbutil.log file contained the following error:
--------------------------------------------------------------------
[v410.31] - timestamp Delete some tables
----------------------------------------------------------------------
DROP TABLE DMH.DMH_MYTABLE1
STATUS: OK
COMMIT
STATUS: OK
DROP TABLE DMH.DMH_MYTABLE2
[IBM][CLI Driver][DB2/NT] SQL0204N "DMH.DMH_MY_TABLE2" is an undefined name. SQLSTATE=42704
STATUS: ERROR
Exception occurred while running the last command.
Continue (Rollback occurs if 'N')? [Y(es)/N(o)] N
To get around the issue and restart the migration utility, do the following steps:
dbutil.bat -runMigratePlan v410.32-
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.