IBM Rational Asset Analyzer

Migration Utility Guide for System z

Version 1 Release 3

This edition applies to version 6, release 0, modification 0 of IBM Rational Asset Analyzer for System z and to all subsequent releases and modifications until otherwise indicated in new editions.

Copyright International Business Machines Corporation 2005, 2011.
US Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

Introduction
Getting Started
Creating a migration plan
Migration plan considerations
Migration plan files
Implementing the migration plan
Completing the migration
Database schema changes
Database schema changes for v420
Database schema changes for v421
Database schema changes for v510
Database schema changes for v511
Database schema changes for v552
Database schema changes for v554
Database schema changes for v600
Database schema changes for v601
Appendix A. dbutil.sh command reference
Appendix B. Warnings from implementing a migration plan
Appendix C. Handling migration plan failures
Notices

Introduction

The Rational Asset Analyzer for System z database migration utility is a Java-based tool for migrating the Rational Asset Analyzer database to the latest schema level. Use the migration utility to preserve your data in the database after installing a new level of Rational Asset Analyzer.

While the migration utility is intended to be release independent, you should always use the newest version available from the product website, in case fixes have been made to older database migrations. Updated versions of the migration utility can always perform older migrations.

Completing a database migration with the migration utility involves two phases:

Create migration plan
A migration plan consists of one or more generated XML files containing the SQL statements that are run to migrate the database to a specific schema level. After the plan is created, the database administrator can review the schema changes and make modifications before the plan is run.
Run the migration plan
Runs the migration plan against the Rational Asset Analyzer database to be migrated. You can specify whether all tasks, a single task, or a range of tasks are performed during a particular migration run.

The migration utility has the following features:

Note:

Getting help

As you are planning your migration, you may have questions, concerns, or circumstances specific to your environment that you should resolve before implementing your migration plan. You can contact IBM Software Support for assistance. The IBM Rational Asset Analyzer forum is another way to ask questions and obtain assistance. The forum is located at the following URL: http://www.ibm.com/developerworks/forums/forum.jspa?forumID=2046&cat=24

Getting Started

Before you download and start the migration process, verify that you have the required software and authority needed to run the migration utility.

  1. The migration utility does not create a backup copy of the Rational Asset Analyzer database. You should create a backup of your database before proceeding with the migration process.
  2. The Java runtime (JRE) version 1.5 or later is required. Issue the following command in OMVS to determine the JRE level in your environment.
     java -fullversion 
  3. You must have authority to create and edit UNIX System Services (USS) files. The migration utility file can be downloaded and extracted in any HFS directory where the user has read/write authority.
  4. The OMVS shell must be configured for JDBC access so that the JDBC drivers are accessible in the current environment. The settings that are active in your environment can be verified by using the echo command, for example:
    echo $STEPLIB
    echo $LIBPATH
    echo $CLASSPATH
    echo $DB2SQLJPROPERTIES
  5. You must have DB2 authority (for example, DBADM, SYSADM) or access to a user ID and password that has the authority to perform CREATE, UPDATE, DELETE, and other similar tasks against the database being migrated.
  6. You must have DB2 authority for reading the following DB2 SYSIBM catalog tables:
  7. You must be logged onto TSO with a large enough REGION size to run a Java application in OMVS (such as REGION=512000). If you encounter OutOfMemory exceptions, use a larger REGION size.
  8. The access path statistics for the database must be up-to-date. If in doubt, execute the DB2 RUNSTATS utility using &hlq.SDMHREXX(DMHINIT) Option 3.2.
  9. Verify that all user activity is completed. Also, ensure that the WebSphere Application Server and HTTP Server are stopped.
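The environment checks in steps 2 and 4 can be combined into a small script. This is a sketch only; it assumes a POSIX shell in OMVS, and the variable names are the ones listed above.

```shell
# Pre-migration environment check (sketch; run in the OMVS shell).
# Report the JRE level; version 1.5 or later is required.
if command -v java >/dev/null 2>&1; then
  java -fullversion 2>&1
else
  echo "WARNING: java was not found in the current PATH"
fi

# Report the JDBC-related environment variables; empty values need attention.
for var in STEPLIB LIBPATH CLASSPATH DB2SQLJPROPERTIES; do
  eval val=\"\$$var\"
  if [ -z "$val" ]; then
    echo "WARNING: $var is not set"
  else
    echo "$var=$val"
  fi
done
```

Each variable prints either its current value or a warning, so a clean run shows four value lines and a Java version string.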

Downloading the migration utility

Complete the following steps to download the migration utility and make the changes required to use it.

  1. Create a directory in the OMVS shell for the migration utility files by using a command such as:
    mkdir /usr/lpp/dmh/raadbutil
  2. Copy the migration utility file into the new directory.
  3. Extract the migration utility file using the following commands:
    cd /usr/lpp/dmh/raadbutil
    tar xvf raadbutil_zos.tar
  4. Verify that the extraction was successful. The following files and directories should be in the directory:
    dbutil.sh
    MigUtil_win.htm
    MigUtil_zOS.htm
    SetupJdbc.sh
    
    /META-INF/*.mf
    /config/*.cfg
    /lib/*.jar
    
    /migrate/*.xsd
    ...
    /migrate/v601/*.properties and *.xml
    /verify/v220/
    ...
    /verify/v601/*.xml
  5. Review the sections in this document that are specific to the release you are migrating for additional information.
  6. If the OMVS shell is not configured for JDBC access, the SetupJdbc.sh file can be used to set up the necessary environment variables. The settings that are active in your environment can be verified by using the echo command, for example:
    echo $STEPLIB
    echo $LIBPATH
    echo $CLASSPATH
    echo $DB2SQLJPROPERTIES
  7. Update the /config/User.cfg file to specify your environment and database settings. Comments are provided in the file to describe the parameter fields. You will be required to provide values for several parameters, including the following:

    Comments are provided in the User.cfg file to describe the parameter fields and provide sample values. In most cases, you can review the values for these fields currently being utilized for your installation in the following locations:

  8. After you have completed configuration, issue the following command from the migration utility directory to verify your settings:
    dbutil.sh -getInfo
    You should see output similar to the following:
    JVM version detected: [#.#.#]
    Url: jdbc:db2os390:your_DB2_location
    Version: ###
    OS: DB2_ZOS

Creating a migration plan

The dbutil.sh command is used to create a migration plan that will be run to make the database changes. Changes are not made to the database until the migration plan is executed.

To create the migration plan, issue the following command from the migration utility directory:

dbutil.sh -createMigratePlan targetVersion

See Appendix A. dbutil.sh command reference for details on specifying the dbutil.sh command.

After the create migration plan process completes, an XML file is created for each schema level between the starting and target versions. For example, if the starting version is v552 and target is v600, the migration will contain a plan for v554 and v600.
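The resulting file set can be sketched for the v552-to-v600 example. This is an illustration only; the schema-level sequence below is taken from the Contents of this guide, and the loop simply enumerates the plan files that -createMigratePlan would generate.

```shell
# Sketch: list the plan files generated for a v552 -> v600 migration.
# One Migrate_version.xml file is created for each schema level after the
# starting version, up to and including the target version.
LEVELS="v420 v421 v510 v511 v552 v554 v600 v601"
FROM=v552
TO=v600
emit=0
for lvl in $LEVELS; do
  [ "$lvl" = "$FROM" ] && { emit=1; continue; }
  [ "$emit" -eq 1 ] && echo "migrate_plan/Migrate_${lvl}.xml"
  [ "$lvl" = "$TO" ] && break
done
```

For this example the loop prints migrate_plan/Migrate_v554.xml and migrate_plan/Migrate_v600.xml, matching the plan described above.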

On z/OS, if the generated migration plan files are not output correctly, for example, they are not readable or parts of the files such as translated strings appear unintelligible, the problem might be caused by one of the following:

To fix this problem, you can manually specify the code set that is used on your system by setting the EXT_FILE_ENCODING variable in the SetupJdbc.sh file.
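The equivalent lines can be added to SetupJdbc.sh. A minimal sketch follows; IBM-1047 is a common z/OS EBCDIC code page used here purely as an assumed example, so substitute the code set that is actually in use on your system.

```shell
# Sketch: set the code set explicitly before regenerating the plan files.
# IBM-1047 is an assumed value, not a recommendation; use your system's
# actual code set.
EXT_FILE_ENCODING=IBM-1047
export EXT_FILE_ENCODING
echo "EXT_FILE_ENCODING=$EXT_FILE_ENCODING"
```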

Migration plan considerations

Before you create the plan, you should have a migration plan strategy and understand limitations associated with the migration utility.

To ensure a smooth migration, understand the entire process before creating and running a migration plan. Review all of the migration utility instructions contained in this document. There are considerations for pre-migration, migration, and post-migration, and it is best to be prepared for each step in advance. The Database schema changes topic contains details about product changes that affect data, as well as the database changes for each release.

Because the migration utility does not create a backup copy of the Rational Asset Analyzer database, there is no overall rollback feature. There are several intermediate commits that allow for some recovery, but in case there are problems, you should have a backup of your database from which to recover data.

As a strategy, you can run the entire migration plan at once or in steps. A simple migration, such as dropping obsolete objects or creating a small set of new objects, is appropriate for running all at once. However, for more complex migrations, the step approach is a better choice. The following are examples of a complex migration:

The step approach makes it easier to recover and restart the process when an error occurs. When you run the migration plan in a step-by-step fashion, you can monitor DB2 resources in the environment for each step and fix any necessary issues. When using this approach, never run a Task step out of sequence.
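The step-by-step strategy can be sketched as follows. A stub function stands in for the real dbutil.sh so the sequence can be illustrated; the task-step values shown are hypothetical, and the actual -runMigratePlan syntax is described in Appendix A.

```shell
# Sketch of the step-by-step strategy (step identifiers are hypothetical).
# run_step is a stand-in for invoking dbutil.sh -runMigratePlan.
run_step() {
  echo "would run: dbutil.sh -runMigratePlan $1"
  # ...after each step: monitor DB2 resources and review dbutil.log...
}
run_step "v554.1"          # first task only; verify before continuing
run_step "v554.2-v554.9"   # then a contiguous range, never out of sequence
```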

Migration plan files

During the creation of a migration plan, the migration utility reads a version-specific /migrate/version/Migrate.xml file and creates an XML file for each version in the migration path.

The XML files are placed in a directory named migrate_plan. If a staging directory is specified in the config/User.cfg configuration file, the XML files are put in the staging directory.

The XML files are named Migrate_version.xml. Each migration XML file contains a list of SQL statements that will be executed by the migration utility.

Before continuing, review and update these files to handle any environment-specific requirements. For example, modify the SQL to handle especially large tables that might fill up the log file, or update or add INDEX statements if you have performed customization.

Details for the /migrate/version/Migrate.xml file are described below for informational purposes only. Do not modify these files unless instructed by IBM support.

The /migrate/Migrate.xml files have the following format:

<TaskSequence version='version-number'...>       
 <!-- Define vars used in this file -->     
  <Vars>         
 ...zero or more variable definitions...       
</Vars>       
 ...zero or more task definitions...  
</TaskSequence>  

Example

The following is an example of a Migrate.xml file followed by a corresponding Migrate_version.xml file.

<TaskSequence version='v311'...>  	
<!-- Define vars used in this file --> 	
<Vars>
 		<Var name='dbName' value='$C{User.Database.name}'/>
 		<Var name='tschema' value='$C{$User.Database.tschema}'/>
 		<Var name='tsprefix' value='$C{$User.ZOS.tsprefix}'/> 
            ...
 	</Vars>

  	<Task class='Sql' desc='Create a new table.'>
 		CREATE TABLE ${tschema}.DMH_NEW_TABLE1 (
 			COLUMN_1            INTEGER NOT NULL,
 			COLUMN_2            INTEGER NOT NULL,
 			PRIMARY KEY (COLUMN_1))
 			%IF[${isZOS}]
 				IN ${tschema}.${tsprefix}%COND[${singleTS}==true?TBS:NEW]
 			%ENDIF
 		;
 	</Task>

  	<Task class='v311.ContainerUpdate' desc='...'/>

  </TaskSequence>  

Given the above file, the corresponding generated /migrate_plan/Migrate_v311.xml file for a z/OS database with distributed tables enabled might look like the following file:

<TaskSequence version='v311'...>

  	<!-- Define vars used in this file -->
 	<Vars>
 		<Var name='dbName' value='DMHDB'/>
 		<Var name='tschema' value='DMH'/>
 		<Var name='tsprefix' value='DMHTS'/>
             ...
 	</Vars>

  	<Task step='v311.1' class='Sql' desc='Create a new table' autocommit='true'>
 		CREATE TABLE DMH.DMH_NEW_TABLE1 (
 			COLUMN_1            INTEGER NOT NULL,
 			COLUMN_2            INTEGER NOT NULL,
 			PRIMARY KEY (COLUMN_1))
 				IN DMH.DMHTSNEW
 		;
 	</Task>

  	<Task step='v311.2' class='v311.ContainerUpdate' desc='...' autocommit='true'>
 	</Task>

  </TaskSequence>  

The following transformations are done from the /migrate/Migrate.xml file to the /migrate_plan/Migrate_version.xml file:

Implementing the migration plan

After the migration plan is created and verified, run it to make the changes to the database.

If you have not already made a backup copy of your database, create the backup now before proceeding.

To start the migration plan, issue the following command from the migration utility directory:

dbutil.sh -runMigratePlan task_steps

See Appendix A. dbutil.sh command reference for details on specifying the dbutil.sh command.

After the migration utility starts, the current level of the database is determined based on the existence of specific tables, columns, indexes, and triggers.

The migration utility checks to see if there are any extra or missing tables, indexes, columns, or triggers and reports any discrepancies. Messages are written to the console and dbutil.log file located in the migration utility installation directory.

If any errors occur while implementing the migration plan, contact IBM Software Support before proceeding.
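A quick scan of dbutil.log can be sketched as follows. The error patterns are assumptions; the exact message format in dbutil.log may differ on your system, so adjust the patterns to what your log actually contains.

```shell
# Sketch: scan dbutil.log for likely error indications after a run.
LOG=dbutil.log
if [ -f "$LOG" ]; then
  grep -n -i -e "error" -e "exception" "$LOG" || echo "no error lines found in $LOG"
else
  echo "$LOG not found; run the migration plan first"
fi
```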

Completing the migration

After the migration plan implementation successfully completes, there are several tasks that must be performed before the database migration is complete.

Procedure

  1. On the ISPF Install menu, select Option 2, Bind DB2 packages and plans.
  2. If your DB2 subsystem is not configured for distributed DB2, edit the BIND PLAN control statements. The PKLIST parameter specifies each package name beginning with an asterisk. Edit all package specifications to remove the asterisk and period, as shown in the following example:
    Unedited version:
    BIND PLAN(DMHP0700) -
         QUALIFIER(DMH) -
         PKLIST(*.DMH.DMHDCOM -
                *.DMH.DMHDROL -
                *.DMH.DMHD233 -
    . . .
    Edited version:
    BIND PLAN(DMHP0700) -
         QUALIFIER(DMH) -
         PKLIST(DMH.DMHDCOM -
                DMH.DMHDROL -
                DMH.DMHD233 -
    . . .
  3. Submit the job by issuing the SUB command.
  4. Check each step in the JESYSMSG JES2 output for a condition code of 0000.
  5. On the ISPF Install menu, select Option 3, Grant DB2 privileges.
  6. You are presented with a JCL stream. Edit the SQL statements if you need to modify access to the database.
  7. Submit the job by issuing the SUB command.
  8. Check the steps in the JESYSMSG JES2 output for a condition code of 0000. If this is not the first run of the JCL to grant DB2 privileges, the DMHTEP2 step name might result in a condition code of 0004. To verify the condition code for your job, review the SYSPRINT output from the DMHTEP2 step, scanning for SQLCODE = 562 on any of the GRANT statements. Below is an example of the output messages for this condition for package DMH.DMHDCOM:
    ***INPUT STATEMENT:  GRANT ALL ON PACKAGE DMH.DMHDCOM TO PUBLIC;
    SQLWARNING ON GRANT    COMMAND, EXECUTE  FUNCTION
    RESULT OF SQL STATEMENT:
    DSNT404I SQLCODE = 562, WARNING:  A GRANT OF A PRIVILEGE
    WAS IGNORED BECAUSE THE GRANTEE ALREADY HAS THE PRIVILEGE FROM
    THE GRANTOR
    DSNT418I SQLSTATE = 01560 SQLSTATE RETURN CODE
    You can ignore this condition.
  9. If there are any errors (except SQLCODE = 562), correct them, and then resubmit the job.
  10. Using the ISPF Init menu, select Option 3, Utilities, and then Option 2, Submit RUNSTATS.
  11. Submit the job by issuing the SUB command.
  12. Check each step in the JESYSMSG JES2 output for a condition code of 0000.
  13. If there are any errors, correct them, and then resubmit the job.
  14. Optionally, after all the migration tasks have successfully completed, you can return to the OMVS shell and delete the migration utility files and directory, using the following command: rm -rf /usr/lpp/dmh/directory_name. If you specified a staging directory in the User.cfg file, you should delete that directory as well.
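The PKLIST edit described in step 2 can be sketched with sed, shown here against the sample statements above. The DMH qualifier comes from that sample; verify the result before submitting the job.

```shell
# Sketch: remove the leading "*." from each PKLIST package specification,
# as required when the DB2 subsystem is not configured for distributed DB2.
sed 's/\*\.DMH\./DMH./g' <<'EOF'
BIND PLAN(DMHP0700) -
     QUALIFIER(DMH) -
     PKLIST(*.DMH.DMHDCOM -
            *.DMH.DMHDROL -
            *.DMH.DMHD233 -
EOF
```

The output matches the edited version shown in step 2, with each package specified as DMH.name rather than *.DMH.name.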

Database schema changes

Depending on the schema level the database is being migrated to, there might be special circumstances that require manual effort, post-migration actions, or both before a database is considered fully migrated and ready for user activity.

Remember that the migration path is dependent on the current version of the database and the level being migrated to. You should review the changes starting with the "from" version level and each interim level until you reach the final target version level. For example, if you are migrating from v552 to v600, you should review changes for v554 and v600.

The following notes highlight particular changes and user actions, when required, for specific migration levels. This information should be reviewed and understood prior to the start of any migration effort.

Database schema changes for v420

WebSphere Application Server distributed scan roots

If you have WebSphere Application Server scan roots defined in the database, you will need to manually update those scan roots through the web UI after performing this migration. This is because necessary information about these crawl roots is specified in XML files on the distributed host, which is not accessible to the migration utility.

Runtime environments specified for distributed scan roots

If you have mapped any runtime environments to distributed scan roots in Admin.cfg, you will need to manually create these runtime environments again through the web user interface and associate them with the scan roots. This is because the associations specified in Admin.cfg and the associated runtime environment XML files are not accessible to the migration utility.

Distributed sites

If you have previously scanned distributed assets, you must run the distributed scanner once to update the contents of the DMH_SITE table. Failure to do so may result in 404 errors when attempting to view distributed source code through the web interface. Note that you do not need to actually scan any distributed scan roots; simply starting the scanner is sufficient, because the DMH_SITE initialization occurs at the beginning of all scans.

Database schema changes for v421

Rebuilding run units

Performance improvements for processing and storing executable names, such as run unit names and aliases, have been implemented in the V4.2.1 PTF. The majority of the changes are handled automatically by the Migration Utility. However, as a result of these changes, all run units must be rebuilt in order to re-establish their calling hierarchies. The Migration Utility changes the Analysis status of each run unit in order to force it to be rebuilt during the next invocation of the MVS postprocessor. Therefore, after all PTF installation, migration, and configuration tasks are completed and the WebSphere Application Server has been restarted, start the MVS postprocessor by using the option presented in the drop-down list on the Analysis queue information page (for example, Database > Analysis queue). You should complete this post-migration task prior to any other user activity, such as additional inventory collection or impact analysis. A reminder message is produced at the end of the migration for this level.

Analyzing programs that contain BMS references

Performance improvements for handling program references to BMS maps and map sets have been implemented in the V4.2.1 PTF. However, existing metadata does not contain enough details to completely and automatically retrofit the necessary changes. Therefore, additional manual effort is required to migrate the data for existing programs that contain CICS SEND and RECEIVE MAP commands. You should reanalyze the files listed in the BMS_Reference.txt file, either through the browser interface by queuing the Container or individual Files for analysis, or by using the Batch Update JCL provided in &hlq.SDMHCNTL(DMHJCLBU) or &hlq.CONFIG.CNTL(DMHJCLBU). If you do not reanalyze the identified programs, the BMS map and map set definition results presented on the "Application" and "Site" details pages will be incorrect. A reminder message is produced at the end of the migration for this level.

Database schema changes for v510

Considerations and changes

Starting with v510, when a missing copy or include file condition is detected, the WebSphere Studio Asset Analyzer scanners will consistently report message DMH1310E, instead of outputting one of several other messages, such as AES1026E, AES1027E, AES1036E, AES1038E, DMH3200E, DMH3201E, IGYLI0048, IGYLI0049, IBM1848I, IBM1851I, and IBM3841I. These messages are now obsolete and will, over time with re-analysis of assets, be replaced with reporting only DMH1310E when a missing file error occurs.

Any existing source files that are reporting missing files will have Analysis status=Error and are likely already under investigation for resolution. You should prioritize reanalyzing the files listed in the Obsolete_Messages.txt file, either through the browser interface by queuing the Container or individual Files for analysis, or by using the Batch Update JCL provided in &hlq.SDMHCNTL(DMHJCLBU) or &hlq.CONFIG.CNTL(DMHJCLBU). This will eliminate the obsolete messages from the DMH_ACTIVITY_LOG table, ensure accurate reporting of the missing components, and take advantage of the performance improvement for presenting the error data in the browser interface. A reminder message is produced at the end of the migration for this level.

Summary of database changes

Table 1. New tables, spaces, and indexes

Table                Table space     Indexes
DMH_CICS_PIPELINE    &dmhtsqul.PIP   RDIPIP1
DMH_CICS_URIMAP      &dmhtsqul.URI   RDIURI1
DMH_CICS_WEBSERVCE   &dmhtsqul.WBS   RDIWBS1
DMH_LINK             &dmhtsqul.LNK   RDILNK1
DMH_LINK_DATA        &dmhtsqul.LDS   RDILDS1, RDILDS2
DMH_LINK_DATA_L      &dmhtsqul.LDL   RDILDL1, RDILDL2
DMH_LINK_DATA_TYPE   &dmhtsqul.LDT   RDILDT1
DMH_LINK_SOURCE      &dmhtsqul.LKS   RDILKS1, RDILKS2
DMH_LINK_TARGET      &dmhtsqul.LKT   RDILKT1, RDILKT2
DMH_LINK_TYPE        &dmhtsqul.LKO   RDILK01
DMH_USER             &dmhtsqul.URS   RDIURS1, RDIURS2
DMH_USER_SETTING     &dmhtsqul.USS   RDIUSS1

In addition, if distributed asset support is enabled:

Table 2. New tables, spaces, and indexes for distributed asset support

Table               Table space     Indexes
DMH_WS_ENDPOINT     &dmhtsqul.WSE   WOPI001, WSEI001, WSEI002
DMH_WS_HANDLER      &dmhtsqul.WSH
DMH_WS_MTD_MAP      &dmhtsqul.WMM
DMH_WS_PKG_MAP      &dmhtsqul.WPM
DMH_WS_PORTCOM      &dmhtsqul.WSP   WSPC001
DMH_WS_SEI_MAP      &dmhtsqul.WSI   WEII001
DMH_WS_SRV_MAP      &dmhtsqul.WSM   WRVI001
DMH_WS_TYP_MAP      &dmhtsqul.WTM   WTPM001
DMH_WSDL            &dmhtsqul.WDL   WWPI001
DMH_WSDL_BIND       &dmhtsqul.WDB   WBPI001
DMH_WSDL_IMPORT     &dmhtsqul.WDI   WDII001
DMH_WSDL_OPR        &dmhtsqul.WDO
DMH_WSDL_PORT       &dmhtsqul.WDP   WDPO001, WDPO002
DMH_WSDL_PORTTYPE   &dmhtsqul.WDR   WYPI001
DMH_WSDL_SERVICE    &dmhtsqul.WDS   WXPI001, WSRI001, WSRI002
DMH_WSDL_TYPE       &dmhtsqul.WDY   WTPI001, WDTI001

Table 3. New views

View                 Base tables
DMH_LNK_RELD_ASSTS   DMH_LINK, DMH_LINK_SOURCE, DMH_LINK_TARGET

Table 4. New alternate indexes for existing tables

Index     Table
RDIACL4   DMH_ACTIVITY_LOG
RDICHI5   DMH_CHANGE_IMPACT
RDIIRU3   DMH_INVOKE_RU

Table 5. New triggers
Type Trigger
Inventory count triggers
DMHTR60D
DMHTR60I
DMHTR60U
DMHTR61D
DMHTR61I
DMHTR61U
DMHTR62D
DMHTR62I
DMHTR62U
Inventory delete triggers
DMHTRDC1
DMHTRIC1
DMHTRLD1
DMHTRLD2
DMHTRLK1
DMHTRLL1
DMHTRLS1
DMHTRLS2
DMHTRLT1
DMHTRLT2
DMHTRMA1
DMHTRMT1
DMHTRPI1
DMHTRRN1
DMHTRSI1
DMHTRUM1
DMHTRUS1
DMHTRWB1

The following are new triggers if distributed asset support is enabled:

DMHTRAR1
DMHTRCE1
DMHTREL1
DMHTREP1
DMHTREP2
DMHTREP3
DMHTREP4
DMHTREP5
DMHTREP6
DMHTREP7
DMHTRFB1
DMHTRGE1
DMHTRGS1
DMHTRJD1
DMHTRMF1
DMHTRMS1
DMHTRNO1
DMHTRPA1
DMHTRTL1
DMHTRTX1
DMHTRUR1
DMHTRWC1
DMHTRWD1
DMHTRWJ1
DMHTRWS1
DMHTR2FL
DMHWSDL1
DMHWSDP1
Table 6. New constant strings
Table String
DMH_ATTR_NAME table
181, 'COBOL scanner - configured'
182, 'COBOL language - configured'
183, 'COBOL compiler options - configured'
184, 'PL/I scanner - configured'
185, 'PL/I compiler options - configured'
186, 'SQL options - configured'
191, 'COBOL scanner - actual'
192, 'COBOL language - actual'
193, 'COBOL compiler options - actual'
194, 'PL/I scanner - actual'
195, 'PL/I compiler options - actual'
196, 'SQL options - actual'
DMH_CMPNT_TYPE table
60, 'CICSWebservice'
61, 'CICSPipeline'
62, 'CICSUrimap'
DMH_LINK_DATA_TYPE table
201, 'Name'
202, 'Description'
203, 'User id for the creator of the link'
204, 'Time and date of link creation'
205, 'User id that last updated the link'
206, 'Time and date of last link update'
210, 'Comma-delimited list of search patterns in the format: AssetType.id=id, Attribs.ignoreCase,Name=pattern,Attrib=value...'
211, 'Comma-delimited list of search patterns in the format: AssetType.id=id, Attribs.ignoreCase,Name=pattern,Attrib=value...'
DMH_LINK_TYPE table
101, 'User-defined relationship'
102, 'Link between an EJB and a DB2 Table'
DMH_STMT_TYPE table:
8286, 'EXEC CICS CONVERTTIME'
8287, 'EXEC CICS DELETE CONTAINER CHANNEL'
8288, 'EXEC CICS GET CONTAINER CHANNEL'
8289, 'EXEC CICS INVOKE WEBSERVICE'
8290, 'EXEC CICS MOVE CONTAINER CHANNEL'
8291, 'EXEC CICS PUT CONTAINER CHANNEL'
8292, 'EXEC CICS SOAPFAULT ADD'
8293, 'EXEC CICS SOAPFAULT CREATE'
8294, 'EXEC CICS SOAPFAULT DELETE'
8295, 'EXEC CICS START TRANSID CHANNEL'
8296, 'EXEC CICS WEB CLOSE'
8297, 'EXEC CICS WEB CONVERSE'
8298, 'EXEC CICS WEB ENDBROWSE FORMFIELD'
8299, 'EXEC CICS WEB OPEN'
8300, 'EXEC CICS WEB PARSE URL'
8301, 'EXEC CICS WEB READ FORMFIELD'
8302, 'EXEC CICS WEB READNEXT FORMFIELD'
8303, 'EXEC CICS WEB STARTBROWSE FORMFIELD'
8400, 'EXEC CICS CREATE PIPELINE'
8401, 'EXEC CICS CREATE URIMAP'
8402, 'EXEC CICS CREATE WEBSERVICE'
8403, 'EXEC CICS DISCARD PIPELINE'
8404, 'EXEC CICS DISCARD URIMAP'
8405, 'EXEC CICS DISCARD WEBSERVICE'
8406, 'EXEC CICS EXTRACT STATISTICS'
8407, 'EXEC CICS INQUIRE HOST'
8408, 'EXEC CICS INQUIRE PIPELINE'
8409, 'EXEC CICS INQUIRE URIMAP'
8410, 'EXEC CICS INQUIRE WEBSERVICE'
8411, 'EXEC CICS PERFORM PIPELINE'
8412, 'EXEC CICS SET HOST'
8413, 'EXEC CICS SET PIPELINE'
8414, 'EXEC CICS SET URIMAP'
8415, 'EXEC CICS SET WEBSERVICE'
DMH_TOOL table 0001000600, 'External interface'
Table 7. Changed alternate indexes

Index     Table
RDIRUC2   DMH_RUN_UNIT_CNTNT
RDIRUC3   DMH_RUN_UNIT_CNTNT

Table 8. Changed alternate indexes for distributed asset support

Index     Table
FISI001   DMH_FIELD_BYTECODE
LCSI001   DMH_LINKS_C
LHSI001   DMH_LINKS_H
LRSI001   DMH_LINKS_R
MESI001   DMH_METHOD_BYTECODE
SASI001   DMH_SCOPE_TO_ASSET
TGSI001   DMH_TAG

Table 9. Changed triggers
Type Trigger
Inventory delete triggers
DMHTRAP1
DMHTRBJ1
DMHTRCG1
DMHTRCO1
DMHTRCU1
DMHTRDS1
DMHTRID1
DMHTRIP1
DMHTRLI1
DMHTRME1
DMHTROD1
DMHTROR1
DMHTRPJ1
DMHTRTR1
DMHTRSB1
DMHTRSB3
DMHTRSB7
Table 10. Changed triggers for distributed asset support
Type Trigger
Inventory delete triggers
DMHTRAS1
DMHTRCL1
DMHTRCP1
DMHTREA1
DMHTREJ1
DMHTREJ4
DMHTRFL1
DMHTRHT2
DMHTRJA1
DMHTRJC1
DMHTRJP2
DMHTRMB1
DMHTRRR1
DMHTRSE1
DMHTRTG1
DMHTRUA1
DMHTRUT1
DMHTRVP2
DMHTRWG1
DMHTRWP1
DMHTRWR1
DMHTRWU1
DMHTRWV1
DMHTRW41
DMHTRXM1
Table 11. Changed constant strings
Table Constant String
DMH_ATTR_NAME table
602, 'Halstead effort'
603, 'Essential complexity'
604, 'Cyclomatic complexity'
605, 'File size (bytes)'
DMH_SYSTEM_ATTR table 500,510
DMH_TOOL table 1000007, 'Internal Metrics Processor'
Table 12. Deleted alternate indexes for distributed asset support

Index     Table
CRSI001   DMH_CRAWL_ROOT
FISI002   DMH_FIELD_BYTECODE
MESI002   DMH_METHOD_BYTECDE

Table 13. Deleted constant strings
Table Constant String
DMH_CMPNT_TYPE table 206, 'Metrics'
DMH_HTML_LINK table 10001, 3, 9, 'com.ibm.dmh.uiview.DmhViewMemberUsedByLinkHandler'
DMH_PROJECT_TYPE table 3000, 'Code extraction'

Database schema changes for v511

Summary of database changes

Table 14. New alternate indexes
Index Table
RDIATA4 DMH_DATASET
RDIPB2 DMH_IMS_PCB_SEGMNT
Table 15. New constant strings
Table Constant string
DMH_ATTR_NAME table 213, 'End-to-end postprocessing'
DMH_RESOLVE_TYPE table 7, 'Resolved by Recursion'
Table 16. Changed table spaces
Table space Modification
&dmhtsqul.CHF Added explicit LOCKSIZE ROW
&dmhtsqul.CHP Added explicit LOCKSIZE ROW
Table 17. Changed tables
Table Modification
DMH_BATCH_JOB Column CONCAT_SET_ID dropped
DMH_COMPILE_UNIT Column CONCAT_SET_ID dropped
DMH_DATA_STORE Column FD_ALIAS_ID dropped
Table 18. Changed triggers
Type Trigger
Inventory delete triggers
DMHTRDS1
DMHTRME1
DMHTRRN1
Table 19. Changed constant strings
Table Constant string
DMH_SYSTEM_ATTR table 500,511
Table 20. Deleted constant strings
Table Constant string
DMH_ATTR_NAME table 212, 'Distributed Scanner'

Database schema changes for v552

The z/OS Scan & Load Technology Preview for Rational Asset Analyzer V5.5 Fix Pack 1 required that a new PDS Resource Manager be manually inserted into the database if IMS Subsystem data, Assembler source data, or both were to be imported. When migrating to database schema v552, this row is removed if found, and the migration utility proceeds to insert new rows for the PDS, ASM, CICS, DB2, and IMS resource managers as required for Rational Asset Analyzer V5.5 Fix Pack 2.

Summary of database changes

Table 21. New tables and indexes
Tables Primary Key Index Alternate Index
DMH_BP_MODEL MODEL_ID
DMH_BP_PROCESS MODEL_ID + PROCESS_ID
DMH_BR_POLICY POLICY_ID
DMH_BR_POLICY_RS POLICY_ID + RULE_SET_ID
DMH_BR_RULE RULE_ID
DMH_BR_RULE_SET RULE_SET_ID
DMH_BR_RS_RULE RS_RULE_ID
DMH_BT_CATEGORY CATEGORY_ID
DMH_BT_RELATD_TERM TERM_ID_1 + TERM_ID_2
DMH_BT_SYNONYM TERM_ID_1 + TERM_ID_2
DMH_BT_TERM TERM_ID
DMH_FILE_LITERAL LITERAL_ID + FILE_ID RDILTF2
DMH_FILE_SYMBOL SYMBOL_ID + FILE_ID RDISTF2

The following are new component types in v552:

Table 22. New constant strings
Table Constant String
DMH_QUEUE_TYPE 18, Load import file
DMH_RESOURCE_MNGR
  • 2, MVS Default Catalog
  • 3, Assembler Source
  • 4, CICS Online Region
  • 5, DB2 System
  • 6, IMS Subsystem
DMH_TOOL
  • 0001000800, SAP ABAP Analyzer
  • 0001000801, Visual Basic Analyzer
  • 0001000802, EGL Analyzer

Table 23. Changed constant strings
Table Constant String
DMH_QUEUE_TYPE
  • 1, Build run unit content
  • 2, Resolve dynamic references
  • 4, Analyze file
  • 5, Rebuild run unit content
  • 8, Identify file language
  • 10, Analyze container
  • 11, Identify file type
  • 15, Establish PSB for run unit
  • 16, Connect PSB to run unit
  • 17, Delete file
  • 20, Scan distributed scan root
  • 21, Postprocess distributed scan root
DMH_SYSTEM_ATTR 500, 552

Table 24. Deleted constant strings
Table Constant String
DMH_LIBRARY_TYPE
  • ACB, IMS ACB Library
  • DBD, IMS DBD Library
  • PSB, IMS PSB Library

Database schema changes for v554

To provide support for multiple named tables in SQL DECLARE CURSOR statements, the SQL_TABLE_ID column has been dropped from the DMH_SQL_CURSOR table to eliminate the limitation of associating a single DB2 table with the cursor. The potential 1 - n DB2 tables are now recorded in a new relationship table, DMH_SQL_CURSOR_TAB. During migration, programs found to contain EXEC SQL DECLARE CURSOR statements are queued to be analyzed again. After running the Analysis queue processor, the user interface reflects EXEC SQL FETCH statement references to the 1 - n DB2 tables comprising the cursor. Following migration, but before running the Analysis queue, these DB2 table references from cursors are not visible in the user interface.

If the database was manually migrated to v554 to include an interim development driver to resolve the issue described above, you might receive the following warning message specific to the DMHTRCU1 trigger. If you are migrating to v600, you can ignore this warning. The DMHTRCU1 trigger will be replaced when the migration tasks for v600 are performed.

Comparison of current database schema with expected schema at this particular level showed 1 differences.
      1: Trigger target tables on 'DMHTRCU1' was expected to be 'DMH_ACTIVITY_LOG-D,DMH_ACTUAL_PARM-D,DMH_ANALYSIS_QUEUE-D,DMH_CHANGE_FOCU
      S-D,DMH_CHANGE_IMPACT-D,DMH_LINK_SOURCE-D,DMH_LINK_TARGET-D,DMH_CONTROL_XFER-D,DMH_CU_CHAR_ATTR-D,DMH_CU_DATA_USE-D,DMH_CU_INCLUDE-D
      ,DMH_CU_LABEL-D,DMH_CU_NUM_ATTR-D,DMH_CU_STMT-D,DMH_DATA_ELMT_ALAS-D,DMH_DATA_RECORD-D,DMH_DATA_STORE-D,DMH_DATA_STORE_DEA-D,DMH_ENT
      RY_POINT-D,DMH_FORMAL_PARM-D,DMH_LIT_FOR_DEA-D,DMH_SQL_COLUMN-D,DMH_SQL_CURSOR-D,DMH_SQL_CURSOR_TAB-D,DMH_SQL_TABLE-D,DMH_SQL_PROC-D
      ,DMH_RUN_UNIT_CNTNT-D,DMH_RUN_UNIT_CNTNT-U,DMH_RUN_UNIT_CNTNT-U' but was actually 'DMH_ACTIVITY_LOG-D,DMH_ACTUAL_PARM-D,DMH_ANALYSIS
      _QUEUE-D,DMH_CHANGE_FOCUS-D,DMH_CHANGE_IMPACT-D,DMH_LINK_SOURCE-D,DMH_LINK_TARGET-D,DMH_CONTROL_XFER-D,DMH_CU_CHAR_ATTR-D,DMH_CU_DAT
      A_USE-D,DMH_CU_INCLUDE-D,DMH_CU_LABEL-D,DMH_CU_NUM_ATTR-D,DMH_CU_STMT-D,DMH_DATA_ELMT_ALAS-D,DMH_DATA_RECORD-D,DMH_DATA_STORE-D,DMH_ 
     DATA_STORE_DEA-D,DMH_ENTRY_POINT-D,DMH_FORMAL_PARM-D,DMH_LIT_FOR_DEA-D,DMH_SQL_COLUMN-D,DMH_SQL_CURSOR-D,DMH_SQL_TABLE-D,DMH_SQL_PRO
      C-D,DMH_RUN_UNIT_CNTNT-D,DMH_RUN_UNIT_CNTNT-U,DMH_RUN_UNIT_CNTNT-U'.
 These differences could indicate custom changes made to the database.
      These differences should be investigated before proceeding with the migration plan.      
Proceed creating migration plan?  Continue? {Y/N}

If you are not migrating to v600, you should drop and create the trigger again as described in the installation instructions provided along with the development driver.

Summary of database changes

Table 25. New tables and indexes
Table Primary Key Index Alternate Index
DMH_SQL_CURSOR_TAB COMP_UNIT_ID + SQL_CURSOR_ID + SQL_TABLE_ID

Table 26. Changed constant strings
Table Constant String
DMH_SYSTEM_ATTR 500, 554

Database schema changes for v600

In previous versions of Rational Asset Analyzer, metadata produced by the MVS and distributed scanners was populated in the database under different Sites, even if the two scanners were scanning the same directory structure. This resulted in two separate and distinct sets of Containers and Files. Beginning in v600, inventory collection for MVS and distributed assets has been merged into a single process that uses the same wizard. This approach produces a single set of Containers and Files, allows MVS and distributed inventory information to be presented in a unified fashion, and significantly simplifies the database model. It also eliminates manual steps that were previously required for distributed scanning.

Considerations and changes

The following items should be reviewed.

Container and File scanning consolidation
The migration utility does not merge the two sets of Containers and Files. Therefore, all existing distributed asset metadata in the database will be dropped. The original scan roots will be detected and queued again for analysis. After running the Analysis queue processor, the J2EE asset data is expected to be in the same state as before migration.

Additional considerations:

HTTP_IP_NAME setting
For any Sites already in the inventory, review the value of the HTTP_IP_NAME column in the DMH_SITE table to validate that it properly reflects the scanner daemon URL. In prior releases of Rational Asset Analyzer, the user interface allowed setting this column to any value a user wanted, so the column might contain a non-blank value that is not relevant in V6.0.0. You must update the HTTP_IP_NAME column before starting the Analysis queue processor to re-scan the distributed assets. In some cases, this might be a simple update to change the port number (from http://somehost:9080 to http://somehost:8080). In other cases, you might want to set HTTP_IP_NAME to an empty string because the Site will not be used for scanning distributed assets. These changes can be made with the SQL UPDATE statement (for example, UPDATE XXX.DMH_SITE SET HTTP_IP_NAME ... ). Refer to the DB2 SQL Reference for further information about syntax, rules, and examples.
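Two hedged examples of such updates follow. The WHERE predicates are illustrative assumptions; confirm the actual values in your own DMH_SITE rows before running anything.

```sql
-- Example 1 (illustrative): correct the scanner daemon port for a Site.
-- XXX is your schema qualifier; the host names are placeholders.
UPDATE XXX.DMH_SITE
   SET HTTP_IP_NAME = 'http://somehost:8080'
 WHERE HTTP_IP_NAME = 'http://somehost:9080';

-- Example 2 (illustrative): clear the value for a Site that will not
-- be used for scanning distributed assets.
UPDATE XXX.DMH_SITE
   SET HTTP_IP_NAME = ''
 WHERE HTTP_IP_NAME = 'http://unused-host:9080';
```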

Support for scanning WebSphere Application Server has been removed.
During migration to V6.0.0, all existing metadata and database objects for WebSphere Application Server assets will be dropped.
Business Rule Mining
The database objects for Business Rule Mining have changed since their original introduction in the Technology Preview. There is no migration path for the Technology Preview. Therefore, during migration to V6.0.0, all existing business rule objects in the database will be dropped.
Users and user attributes
As groundwork for future expansion and persistence of additional user preferences and settings, the DMH_USER_CHAR_ATTR and DMH_USER_SETTING tables have been dropped from the data model. During migration, user names from the DMH_USER_CHAR_ATTR table will be assigned an ID and inserted into the DMH_USER table. Corresponding user attributes stored in the DMH_USER_CHAR_ATTR table will be moved to the DMH_USER_ATTR table, which has a foreign key to the DMH_USER table.
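A hedged post-migration spot check follows. The USER_ID join column is an assumption made for illustration (this guide does not list the column names of the new tables); verify the actual names in your catalog first.

```sql
-- Assumed column names; verify against your catalog before running.
-- Counts migrated attribute rows per user to confirm that every
-- DMH_USER_ATTR row resolves to a DMH_USER row through the foreign key.
SELECT u.USER_ID, COUNT(ua.ATTR_TYPE_ID) AS ATTR_COUNT
  FROM XXX.DMH_USER u
       LEFT JOIN XXX.DMH_USER_ATTR ua ON ua.USER_ID = u.USER_ID
 GROUP BY u.USER_ID;
```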
Custom Query inventory
Review the data model changes for v600, which are presented below in Summary of database changes. Following migration, inspect the Is valid column on the Custom query summary page; this column indicates whether the corresponding custom query contains valid SQL for the current Rational Asset Analyzer database. Possible values are:
Yes
Yes identifies a custom query that is valid.
No
No identifies a custom query that is no longer valid because the underlying database schema has changed.
Unknown
Unknown indicates a custom query has not been executed since a database migration occurred, and therefore it is not known whether it is valid. To revalidate custom queries that are in the Unknown state, execute the custom query to re-establish a Yes or No state for the Is valid column, or execute the Validate table action (requires a user ID with Administrative authority).
SQL Cursor and DB2 Table reference resolution
To provide support for multiple named tables in SQL DECLARE CURSOR statements, the SQL_TABLE_ID column has been dropped from the DMH_SQL_CURSOR table, eliminating the limitation of associating a single DB2 table with a cursor. The potential 1 - n DB2 tables are now recorded in a new relationship table, DMH_SQL_CURSOR_TAB. During migration, programs found to contain EXEC SQL DECLARE CURSOR statements will be queued for re-analysis. After running the Analysis queue processor, the user interface should reflect EXEC SQL FETCH statement references to the 1 - n DB2 tables comprising the cursor. Following migration, but before running the Analysis queue, these DB2 table references from cursors will not be visible in the user interface.
Distributed asset support always enabled
Previously optional creation of distributed asset database objects is now standard and mandatory. As a result of migration, the Rational Asset Analyzer database will contain all the database objects for distributed asset support.

Summary of database changes

Table 27. New tables and Indexes
Table Table space Indexes

DMH_AMF_CLASSPATH &dmhtsqul.AR1 DMHAR1S1
DMH_APPL_CONTAINER &dmhtsqul.AP2 DMHAP2P1
DMH_APPL_FILE &dmhtsqul.AP1 DMHAP1P1
DMH_BP_MODEL &dmhtsqul.BPM RDIBPM1
DMH_BP_PROCESS &dmhtsqul.BPP RDIBPP1
DMH_BR_CAT_RULE &dmhtsqul.BRR RDIBRR1
DMH_BR_RULE &dmhtsqul.BRU RDIBRU1
DMH_BRM_CATEGORY &dmhtsqul.BRC RDIBRC1
DMH_BT_CAT_TERM &dmhtsqul.BTC RDIBTC1, RDIBTC2
DMH_BT_FACT &dmhtsqul.BTF RDIBTF1, RDIBTF2, RDIBTF3, RDIBTF4
DMH_BT_TERM &dmhtsqul.BTT RDIBTT1, RDIBTT2, RDIBTT3
DMH_BYTECODE_ROOT &dmhtsqul.BCR DMHBCRP1
DMH_CONTAINER_TC &dmhtsqul.CTC RDICTC2, RDICTC3
DMH_EAR_EJBJARS &dmhtsqul.EA2 DMHEA2S1, DMHEA2S2
DMH_EAR_JEECLIENTS &dmhtsqul.EA3 DMHEA3S1, DMHEA3S2
DMH_EAR_WARS &dmhtsqul.EA1 DMHEA1S1, DMHEA1S2
DMH_FILE_LITERAL &dmhtsqul.LTF RDILTF1, RDILTF2
DMH_FILE_SYMBOL &dmhtsqul.STF RDISTF1, RDISTF2
DMH_HIERARCHY_JBC &dmhtsqul.HBC
DMH_JAR &dmhtsqul.JAR DMHJARP1
DMH_LINKS_C_POS &dmhtsqul.LK5 DMHLK5S1
DMH_REFS_JBC &dmhtsqul.RF1 DMHRF1S1, DMHRF1S2
DMH_REFS_JBF &dmhtsqul.RF2 DMHRF2S1, DMHRF2S2
DMH_REFS_JBM &dmhtsqul.RF3 DMHRF3S1, DMHRF3S2
DMH_REFS_JNDI &dmhtsqul.RF5 DMHRF5S1, DMHRF5S2
DMH_REFS_URL &dmhtsqul.RF4 DMHRF4S1, DMHRF4S2
DMH_SQL_CURSOR_TAB shares &dmhtsqul.SQU RDISQU2
DMH_USER_ATTR &dmhtsqul.USA RDIUSA1
DMH_WS_JAXRPC &dmhtsqul.WSJ DMHWSJP1

Table 28. New constant types and strings
Table Code or ID, and Description string
DMH_ATTR_NAME
102, URL template
801, Web user settings 1
802, Web user settings 2
803, Web user settings 3
804, Web user settings 4
805, Web user settings 5
DMH_CMPNT_TYPE
400, Business category
401, Business term
402, Business term property
430, Business process model
431, Business activity
480, Business rule
DMH_CONT_TEMPLATE 5, 1, NTFS, Directory, Windows/AIX Directory, Y
DMH_LINK_TYPE
103, Business rule mining declared relationship
104, Business rule mining discovered relationship
DMH_QUEUE_TYPE
12, Load container scan import file
13, Container scan cleanup
14, Resolve DB2 proc implementation
18, Load import file
DMH_RESOURCE_MNGR 5, 0, Windows/AIX File System, NTFS
DMH_RM_CHAR_ATTR 5, 102, -1, file
DMH_TOOL
0001000800, SAP ABAP Analyzer
0001000801, Visual Basic Analyzer
0001000802, EGL Analyzer
DMH_USER 0, Unknown

Tables listed below have undergone modifications to their definitions, such as added, changed, deleted, and renamed columns or primary keys. The table itself may have been renamed, as indicated, and/or had indexes added, changed, and dropped.

Table 29. Changed tables and indexes
Tables New table name New and changed indexes Dropped indexes
DMH_ANALYSIS_QUEUE RDIANQ1
DMH_CONTAINER RDILIB2, RDILIB3, RDILIB6, RDILIB7
DMH_DB2_PROC
DMH_FILE RDIMEM4
DMH_SQL_CURSOR RDISQU1

If you did not have distributed asset support enabled in V511, consider the following list of tables for distributed assets as new, rather than changed, for V600.

Table New and/or changed indexes Dropped indexes
DMH_ARCHIVE DMHARCP1, DMHARCS1 ARPI001, ARSI001
DMH_ARCH_MANIFEST DMHARMP1 AMPI001
DMH_BYTECODE_CLASS DMHCLFP1, DMHCLFS1, DMHCLFS2, DMHCLFS3 CLPI001, CLSI001
DMH_CPP DMHCPPP1, DMHCPPS1 CPPI001
DMH_CPP_DEF_MACROS DMHCP4S1 CPSI004
DMH_CPP_GLBL_FUNCS DMHCP6S1 CPSI006
DMH_CPP_GLBL_VARS DMHCP5S1 CPSI005
DMH_CPP_INC_FILES DMHCP3S1 CPSI003
DMH_CPP_MBMR_FUNCS DMHCP2S1 CPSI002
DMH_CPP_MBMR_VARS DMHCP1S1 CPSI001
DMH_CRAWL_ROOT (renamed to DMH_SCAN_ROOT) DMHCR1P1 CRPI001
DMH_CRAWL_ROOT_DAT (renamed to DMH_SCAN_ROOT_DAT) DMHCR1S1 CRSI002
DMH_EAR DMHEARP1, DMHEARS1 EAPI001, EASI001, EASI002
DMH_EJB DMHEJBP1, DMHEJBS1 EJPI001
DMH_EJBJAR DMHEJJP1, DMHEJJS1 EBPI001, EBSI001, EBSI002
DMH_EJB_ENTITY DMHEJ5P1 EJPI002
DMH_EJB_ENV_ENTRYS DMHEJ1S1 EJSI001
DMH_EJB_LOCAL_REF DMHEJ0S1 EJPI003
DMH_EJB_MSG_DRVN DMHEJ6P1 EJPI004
DMH_EJB_QUERY DMHEJ4S1 EJPI005
DMH_EJB_REF_NHRTL (renamed to DMH_EJB_REF) DMHEJ2S1 EJSI002
DMH_EJB_RELATION DMHEJ8S1 EJPI006
DMH_EJB_RSRC_ENVS DMHEJ9S1 EJPI007
DMH_EJB_RSRC_REFS DMHEJ3S1 EJSI003
DMH_EJB_SESSION DMHEJ7P1 EJPI008
DMH_EVENT_LISTENER DMHEVTP1, DMHEVTS1 EVLI001
DMH_FIELD_BYTECODE (renamed to DMH_JBC_FIELD) DMHFLDP1, DMHFLDS1, DMHFLDS2 FIPI001, FISI001
DMH_FILTER DMHFL1P1, DMHFL1S1 FLTI001
DMH_FLTR_PARAMS DMHFL2S1 FLTI002
DMH_HTML DMHHTMP1 HTPI001, HTSI001
DMH_HTML_HEADINGS DMHHT2S1 HTSI003
DMH_J2EECLIENT (renamed to DMH_JEECLIENT) DMHJ2EP1, DMHJ2ES1 JCPI001
DMH_JAVA DMHJAVP1, DMHJAVS1 JAPI001
DMH_JAVA_EXCEPTION DMHJV1S1 JASI001
DMH_JAVA_FIELDS DMHJV2S1 JASI002
DMH_JAVA_IMPORTS DMHJV3S1 JASI003
DMH_JAVA_METHODS DMHJV4S1 JASI004
DMH_JAVA_MTHD_INVK DMHJV5S1 JASI005
DMH_JC_ENV_ENTRYS DMHJ21S1 JCSI001
DMH_JC_REF_NHRTL (renamed to DMH_JC_EJB_REF) DMHJ22S1 JCSI002
DMH_JC_RSRC_REFS DMHJ23S1 JCSI003
DMH_JSP DMHJSPP1 JSPI001, JSSI001
DMH_JSP_BEANS DMHJS1S1 JSSI002
DMH_JSP_HEADINGS DMHJS4S1 JSSI005
DMH_JSP_JAVA_FLD_T DMHJS8S1 JSSI009
DMH_JSP_JAVA_IMPRT DMHJS7S1 JSSI008
DMH_JSP_METHOD_USE DMHJS9S1 JSSI010
DMH_JSP_TAGLIBS DMHJSAS1 JSSI011
DMH_LINKS_C DMHLK2S1, DMHLK2S2 LCSI001
DMH_LINKS_H DMHLK3S1, DMHLK3S2 LHSI001
DMH_LINKS_R DMHLK4S1, DMHLK4S2 LRSI001
DMH_METHOD_BYTECDE (renamed to DMH_JBC_METHOD) DMHMETP1, DMHMETS1, DMHMETS2 MEPI001, MESI001
DMH_PACKAGE DMHPAQP1, DMHPAQS1 PQPI001
DMH_RAR DMHRR1P1, DMHRR1S1 RARI001
DMH_RAR_CFG_PROPS DMHRR2S2 RARI002
DMH_SERVLET DMHSETP1, DMHSETS1 SEPI001
DMH_SRVLT_PARAMS DMHSE1S1 SESI001
DMH_SRVLT_URL_PTRN DMHSE2S1 SESI002
DMH_TAG DMHTAGP1, DMHTAGS1 TGPI001, TGSI001
DMH_TAG_ATTRS_NAME (renamed to DMH_TAG_ATTRS) DMHTG1S1 TGSI002
DMH_TAG_VRBL DMHTG2S1 TGSI003
DMH_TAGLIB DMHTGLP1 TLPI001, TLSI001
DMH_TAGLIB_VLDTR DMHVL1P1, DMHVL1S1 TLPI003
DMH_VLDTR_PARAM DMHVL2S1 TLPI004
DMH_W_EJB_RF_NHRTL (renamed to DMH_WAR_EJB_REF) DMHWR2S1 WRSI004
DMH_WAR DMHWARP1, DMHWARS1 WRPI001, WRSI001, WRSI002
DMH_WAR_ENV_ENTRYS DMHWR1S1 WRSI003
DMH_WAR_LCL_EJBREF DMHWR4S1 WRSI006
DMH_WAR_RSRC_REFS DMHWR3S1 WRSI005
DMH_WS_ENDPOINT DMHWSEP1, DMHWSES1 WOPI001, WSEI001, WSEI002
DMH_WS_HANDLER DMHWSHS1
DMH_WS_MTD_MAP DMHWMMS1
DMH_WS_PKG_MAP DMHWPMS1
DMH_WS_PORTCOM DMHWSPS1 WSPC001
DMH_WS_SEI_MAP DMHWSIP1, DMHWSIS1 WEII001
DMH_WS_SRV_MAP DMHWSMS1 WRVI001
DMH_WS_TYP_MAP DMHWTMS1 WTPM001
DMH_WSDL DMHWDLP1 WWPI001
DMH_WSDL_BIND DMHWDBP1, DMHWDBS1 WBPI001
DMH_WSDL_IMPORT DMHWDIS1 WDII001
DMH_WSDL_OPR DMHWDOD1
DMH_WSDL_PORT DMHWDPP1, DMHWDPS1 WDPO001, WDPO002
DMH_WSDL_PORTTYPE DMHWDRP1, DMHWDRS1 WYPI001
DMH_WSDL_SERVICE DMHWDSP1, DMHWDSS1 WXPI001, WSRI001, WSRI002
DMH_WSDL_TYPE DMHWDYP1, DMHWDYS1 WTPI001
DMH_XML DMHXMLP1 XMPI001
DMH_XML_ATTLIST DMHXM1S1 XMSI001
DMH_XML_ELEMENTS DMHXM2S1 XMSI002

Table 30. Changed views
View Modifications
DMH_LIBRARY Add column DMH_CONTAINER.HASH_ID
DMH_MEMBER Add column DMH_FILE.HASH_ID
Table 31. Changed constant strings
Table Code or ID String was String is
DMH_LANGUAGE BIN binary Binary
DMH_LANGUAGE EMP? no records No records
DMH_LANGUAGE GONE not found Not found
DMH_LANGUAGE UNKN (was UNK?) unknown Unknown
DMH_QUEUE_TYPE 1 Build run unit content - waiting for postprocessing to be invoked Build run unit content
DMH_QUEUE_TYPE 2 Resolve dynamic references - waiting for postprocessing to be invoked Resolve dynamic references
DMH_QUEUE_TYPE 4 Analyze file - waiting for postprocessing to be invoked Analyze file
DMH_QUEUE_TYPE 5 Rebuild run unit content - waiting for the user to resolve the problem - missing program, ambiguous source, and more Rebuild run unit content
DMH_QUEUE_TYPE 8 Identify file language - waiting for the user to assign the language Identify file language
DMH_QUEUE_TYPE 10 Analyze container - waiting to be submitted for inventory collecting Analyze container
DMH_QUEUE_TYPE 11 Identify file type - waiting for postprocessing to be invoked Identify file type
DMH_QUEUE_TYPE 15 Establish PSB for run unit - waiting for postprocessing to be invoked Establish PSB for run unit
DMH_QUEUE_TYPE 16 Connect PSB to run unit - waiting for the user to assign Connect PSB to run unit
DMH_QUEUE_TYPE 17 Delete file - waiting to remove a file column Delete file
DMH_QUEUE_TYPE 21 Postprocess distributed scan root - waiting for runtime postprocessor to be invoked Postprocess distributed scan root
DMH_SYSTEM_ATTR 500 554 600
Table 32. Deleted Tables, spaces, and Indexes
Table Table space Indexes
DMH_USER_CHAR_ATTR &dmhtsqul.UCA RDIUCA1
DMH_USER_SETTING &dmhtsqul.USS RDIUSS1

If your site did not have distributed asset support enabled in V511, disregard the following list.

Table 33. Deleted tables, table spaces, and indexes for distributed asset support
Table Table space Indexes

DMH_ASSETS &dmhtsqul.ASS ASPI001, ASSI001, ASSI002
DMH_CLSFIL_USDCLSS &dmhtsqul.CL1 CLSI002
DMH_CL_DPL_EAR &dmhtsqul.DA1 DEPI003
DMH_CL_DPL_EJB_M &dmhtsqul.DE2 DEPI004
DMH_CL_DPL_WEB_M &dmhtsqul.DW2 DEPI005
DMH_CL_W4AAPPSVR &dmhtsqul.WA1 WAPI002
DMH_CVSS &dmhtsqul.CVS CVPI001
DMH_DEJB_RESENVREF &dmhtsqul.DE5 DMSI004
DMH_DPL_EAR &dmhtsqul.DEE DEPI001, DESI001
DMH_DPL_EAR_LOC &dmhtsqul.DA2 DEPI002
DMH_DPL_EJB &dmhtsqul.DEJ DJPI001
DMH_DPL_EJB_EJBREF &dmhtsqul.DE3 DMSI002
DMH_DPL_EJB_MODULE &dmhtsqul.DEM DMPI001
DMH_DPL_EJB_RESREF &dmhtsqul.DE4 DMSI003
DMH_DPL_WEB_EJBREF &dmhtsqul.DW3 DWSI002
DMH_DPL_WEB_MODULE &dmhtsqul.DWM DWPI001
DMH_DPL_WEB_RESREF &dmhtsqul.DW4 DWSI003
DMH_DWEB_RESENVREF &dmhtsqul.DW5 DWSI004
DMH_D_EJB_MDL_IIL &dmhtsqul.DE1 DMSI001
DMH_D_WEB_MDL_IIL &dmhtsqul.DW1 DWSI001
DMH_FLTRMAP &dmhtsqul.FL3 FLTI003
DMH_GENERIC &dmhtsqul.GN1 GNRI005
DMH_HTML_FREFS &dmhtsqul.HT1 HTSI002
DMH_HTML_HYPER_LNK &dmhtsqul.HT3 HTSI004
DMH_JP_PATTERN &dmhtsqul.JP1 JPPI001
DMH_JSP_FREFS &dmhtsqul.JS3 JSSI004
DMH_JSP_FWRD_FILES &dmhtsqul.JS2 JSSI003
DMH_JSP_HYPER_LNKS &dmhtsqul.JS5 JSSI006
DMH_JSP_INCL_FILES &dmhtsqul.JS6 JSSI007
DMH_M_BCD_CLS_RFS &dmhtsqul.ME1 MESI003
DMH_M_BCD_FLD_RFS &dmhtsqul.ME2 MESI004
DMH_M_BCD_MTHD_RFS &dmhtsqul.ME3 MESI005
DMH_PTRN_DETECTOR &dmhtsqul.PTD PDPI001
DMH_RE_DAT &dmhtsqul.RE2 RESI003
DMH_RE_JP &dmhtsqul.RE3 RESI004, RESI005
DMH_RUNTIME_ENV &dmhtsqul.RE1 REPI001, RESI001, RESI002
DMH_SCOPE_SPACE &dmhtsqul.SSP SSPI001, SSSI001, SSSI002
DMH_SCOPE_TO_ASSET &dmhtsqul.SSA SAPI001, SASI001
DMH_SPACE &dmhtsqul.SPC SPPI001
DMH_SRVLMAP &dmhtsqul.SRV SMPI001
DMH_SYS_INFO &dmhtsqul.SYS SYPI001
DMH_TAGLIB_LSTNR &dmhtsqul.TL2 TLPI002
DMH_TEXT &dmhtsqul.TEX TEPI001
DMH_UNFOUND_TARGET &dmhtsqul.UFT UTPI001
DMH_URLMAP &dmhtsqul.URL UMPI001, UMSI001
DMH_W4AAPPSVR &dmhtsqul.W4A WAPI001
DMH_W4AECELL &dmhtsqul.WEC WCPI001
DMH_W4AECLONE &dmhtsqul.WAC WLPI001
DMH_W4AEDATASOURCE &dmhtsqul.WAD WSPI001
DMH_W4AEJ2CPADPLOC &dmhtsqul.WC1 WALI001
DMH_W4AEJDBCDRIVER &dmhtsqul.WED WVPI001
DMH_W4AEJDBCDVRLOC &dmhtsqul.WD1 WJDI001
DMH_W4AEJMSPROVIDR &dmhtsqul.WEP WPPI001
DMH_W4AEJMSPRVRLOC &dmhtsqul.WP1 WPSI001
DMH_W4AEMAILSESSON &dmhtsqul.WES WMPI001
DMH_W4AENODE &dmhtsqul.WEN WNPI001
DMH_W4AESVRGROUP &dmhtsqul.WEG WEPI001
DMH_W4AESVRGROUP_M &dmhtsqul.WG1 WEPI002
DMH_W4AEURL &dmhtsqul.WEU WUPI001
DMH_W4AEURLPROVIDR &dmhtsqul.WUP WIPI001
DMH_W4AEVIRTUALHST &dmhtsqul.WEH WHPI001
DMH_W4AEVRTLHSTAL &dmhtsqul.WE1 WHPI002
DMH_W4AGENSVR &dmhtsqul.W4G WGPI001
DMH_W4AJ2CCONNFACT &dmhtsqul.W4J WJPI001
DMH_W4AJ2CRESADPTR &dmhtsqul.W4C W4PI001
DMH_W4AJMSCONNFACT &dmhtsqul.W4F WFPI001
DMH_W4AJMSDEST &dmhtsqul.W4D WDPI001
DMH_W4URLPRVDR_LOC &dmhtsqul.WU1 WIPI002

Auxiliary tables listed below have been removed from the database because either the CLOB column has been dropped from the base table, or the base table has been dropped altogether. If your site did not have distributed asset support enabled in V511, disregard the following list.

Table 34. Deleted Auxiliary Tables, LOB Table spaces and Indexes

Base table Auxiliary table LOB table space Index
DMH_CL_DPL_EAR DEAT002 DETS002 DEUI002
DMH_CL_DPL_EJB_M DMAT002 DMTS002 DMUI002
DMH_CL_DPL_WEB_M DWAT002 DWTS002 DWUI002
DMH_CL_W4AAPPSVR W4AT002 W4TS002 W4UI002
DMH_CPP CPAT001 CPTS001 CPUI001
DMH_DPL_EAR DEAT001 DETS001 DEUI001
DMH_DPL_EJB DJAT001 DJTS001 DJUI001
DMH_DPL_EJB_MODULE DMAT001 DMTS001 DMUI001
DMH_DPL_WEB_MODULE DWAT001 DWTS001 DWUI001
DMH_EAR EAAT001 EATS001 EAUI001
DMH_EJB EJAT001 EJTS001 EJUI001
DMH_EJBJAR ERAT001 ERTS001 ERUI001
DMH_HTML HTAT001 HTTS001 HTUI001
DMH_J2EECLIENT J2AT001 J2TS001 J2UI001
DMH_JAVA JAAT001 JATS001 JAUI001
DMH_JSP JSAT001 JSTS001 JSUI001
DMH_PACKAGE PQAT001 PQTS001 PQUI001
DMH_PTRN_DETECTOR PDAT001 PDTS001 PDUI001
DMH_RAR RRAT001 RRTS001 RRUI001
DMH_SERVLET SEAT001 SETS001 SEUI001
DMH_TAG TAAT001 TATS001 TBUI001
DMH_TAGLIB TLAT001 TLTS001 TLUI001
DMH_TEXT TEAT001 TETS001 TEUI001
DMH_W4AAPPSVR W4AT001 W4TS001 W4UI001
DMH_W4AECELL WCAT001 WCTS001 WCUI001
DMH_W4AECLONE WLAT001 WLTS001 WLUI001
DMH_W4AEDATASOURCE WOAT001 WOTS001 WOUI001
DMH_W4AEJDBCDRIVER WRAT001 WRTS001 WRUI001
DMH_W4AEJMSPROVIDR WPAT001 WPTS001 WPUI001
DMH_W4AEMAILSESSON WMAT001 WMTS001 WMUI001
DMH_W4AENODE WNAT001 WNTS001 WNUI001
DMH_W4AESVRGROUP WEAT001 WETS001 WEUI001
DMH_W4AEURL WUAT001 WUTS001 WUUI001
DMH_W4AEURLPROVIDR WXAT001 WXTS001 WXUI001
DMH_W4AEVIRTUALHST WHAT001 WHTS001 WHUI001
DMH_W4AGENSVR WGAT001 WGTS001 WGUI001
DMH_W4AJ2CCONNFACT WJAT001 WJTS001 WJUI001
DMH_W4AJ2CRESADPTR WSAT001 WSTS001 WSUI001
DMH_W4AJMSCONNFACT WFAT001 WFTS001 WFUI001
DMH_W4AJMSDEST WDAT001 WDTS001 WDUI001
DMH_WAR WAAT001 WATS001 WAUI001
DMH_XML XMAT001 XMTS001 XMUI001

Table 35. Deleted Constant Types and Strings
Table Code or ID, and Description string
DMH_CMPNT_TYPE
51, Runtime environment
52, JavaProcessor pattern
110, Generic asset
122, Text file
123, Unresolved asset
150, Application server
151, Cell
152, Clone
153, Datasource
154, Generic server
155, J2C connection factory
156, J2C resource adapter
157, JDBC driver
158, JMS connection factory
159, JMS destination
160, JMS provider
161, Mail session
162, Node
163, Server group
164, URL
165, URL provider
166, Virtual host
167, Deployed EAR
168, Deployed WAR
169, Deployed EJB-JAR
170, Deployed EJB
DMH_LANGUAGE ????, unknown
DMH_LIBRARY_TYPE
ACB, IMS ACB Library
DBD, IMS DBD Library
PSB, IMS PSB Library
DMH_QUEUE_TYPE 20, Scan distributed scan root

Database schema changes for v601

Processing of DSN=NULLFILE

The JCL analyzer has been modified to eliminate building unnecessary relationship links between Job step and Data set assets when a dummy data set is specified on a DD statement. As a result, during migration, any Batch jobs in the inventory found to be using DSNAME=NULLFILE or DSN=NULLFILE will be queued for analysis again. Post-migration, no row will exist for DMH_DATASET.DATASET_NAME = NULLFILE, and the DMH_DD_CONCAT.DATASET_ID will be zero for those job step DDs formerly pointing to NULLFILE. Post-migration user activity is required: run the Analysis queue processor to analyze the affected Batch jobs again.
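One way to verify the post-migration state, sketched under the assumption that XXX is your schema qualifier:

```sql
-- After migration and re-analysis, this count is expected to be zero:
-- no Data set row should remain for the dummy data set name.
SELECT COUNT(*) AS NULLFILE_ROWS
  FROM XXX.DMH_DATASET
 WHERE DATASET_NAME = 'NULLFILE';
```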

Orphaned data in DMH_RUN_UNIT_CNTNT table

For cases where Run units were manually deleted from the inventory (that is, not through the user interface Action link), rows in the DMH_RUN_UNIT_CNTNT table can reference Run unit IDs that no longer exist in the inventory. This causes the Run units with missing source count to be incorrect and misleading. To resolve this condition, the Migration Utility drops the existing component delete trigger (DMHTRRN1) and re-creates it to provide explicit delete behavior for the DMH_RUN_UNIT_CNTNT table. Also during migration, any rows found in the DMH_RUN_UNIT_CNTNT table that do not have a corresponding RUN_UNIT_ID row in the DMH_RUN_UNIT table will be removed.
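The orphan cleanup performed during migration is similar to the following sketch. The Migration Utility runs this for you; it is shown here only to illustrate the condition, with XXX standing in for your schema qualifier.

```sql
-- Remove DMH_RUN_UNIT_CNTNT rows whose RUN_UNIT_ID no longer
-- exists in the DMH_RUN_UNIT table.
DELETE FROM XXX.DMH_RUN_UNIT_CNTNT C
 WHERE NOT EXISTS (SELECT 1
                     FROM XXX.DMH_RUN_UNIT R
                    WHERE R.RUN_UNIT_ID = C.RUN_UNIT_ID);
```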

No user activity is required for this update.

Annotation text corrections

For certain distributed asset types, such as enterprise application, web application, EJB-JAR, J2EE client, and J2EE RAR file, annotation text was sometimes incorrectly related to a Container asset rather than a File asset. During migration, the annotation text is updated so that it is correctly related to the File. If there is already annotation text of the same type related to the File, the one incorrectly related to the Container will be deleted.

No user activity is required for this update.

New and changed project types

The DMH_PROJECT_TYPE table has been updated to collapse all the previous Impact analysis type rows into a single type (1100), and to introduce a new type for Business rule mining projects (1200). During migration, any Impact analysis projects found to be using PROJECT_TYPE_ID 1000, 1001, 1002, 1003, 1004, 1039, or 1040 will be updated to use PROJECT_TYPE_ID 1100.
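The consolidation amounts to an update similar to the following sketch. The Migration Utility applies this change for you; the table holding the per-project PROJECT_TYPE_ID column is assumed here to be DMH_PROJECT, and XXX is your schema qualifier.

```sql
-- Collapse the old Impact analysis project types into the single
-- type 1100 (illustrative only; performed by the Migration Utility).
UPDATE XXX.DMH_PROJECT
   SET PROJECT_TYPE_ID = 1100
 WHERE PROJECT_TYPE_ID IN (1000, 1001, 1002, 1003, 1004, 1039, 1040);
```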

Post-migration user activity is required: manually update any Custom queries that reference assetType="Project" to instead use assetType="IaProject".

Business rule mining

A User details page has been added to RAA; this page allows activation of a Business rule mining (BRM) project. The User details page is accessed by clicking the down arrow next to the user name in the upper right of the browser page, then selecting View my profile.

Once a user has selected a BRM project, the BRM project will be indicated as the active project for the user. To enable this capability, a new attribute (ATTR_TYPE_ID = 726, DESCRIPTION=Business rules mining project to user relationship) has been inserted into the DMH_ATTR_NAME table for use in representing project-user relationships. For example, an entry in the DMH_USER_ATTR table, with ATTR_TYPE_ID = 726 and pointing to the user's DMH_USER row, is created to indicate the particular relationship.

Once an active project has been established by a user, the user can add Business term properties to the BRM project by choosing Actions -> Include in active BRM project from the Business term property details page. The BRM project will then show all the Business rules that use these Business term properties in the Business rule mining project details page. These rules can then be exported to ILOG JRules or Rules for COBOL from the Actions menu.

A new relationship table, DMH_BR_RULE_FACT, has been added to the data model for storing the relationship between a Business rule and the Business term properties used by that rule. This allows RAA to determine which Business rules belong to a given Business rule mining project. During migration, a status indicator row will be inserted into the DMH_SYSTEM_ATTR table to signal an automatic refresh of the DMH_BR_RULE_FACT table. The data refresh will occur during post-migration RAA start-up initialization.

No user activity is required for these updates.

Support for scanning Java source has been removed

During migration, all existing metadata and database objects for Java source scanning will be dropped. Data related to Java bytecode scanning is unaffected.

No user activity is required for this update.

DMH_ANALYSIS_PARMS table

A new table, DMH_ANALYSIS_PARMS, has been added to the data model. This table will augment the DMH_ANALYSIS_QUEUE.ANALYSIS_PARMS column when analysis parameters are too large to fit into the VARCHAR(80) column, such as Container scan load paths on type 12 Analysis queue records.

No user activity is required for this update.

DMH_ASSET_FILE and DMH_ASSET_CONT tables

Two new tables, DMH_ASSET_FILE and DMH_ASSET_CONT, have been added to the data model. These tables will be used to:

No user activity is required for this update.

New database integrity check queue item

A new type 22 Analysis queue record has been introduced to the data model as a placeholder for future functionality. The intent is to search for data abnormalities in the database, and report or repair them as deemed appropriate.

No user activity is required for this update.

Summary of database changes

Table 36. New tables, table spaces, and indexes
Table Table space Indexes
DMH_ANALYSIS_PARMS &dmhtsqul.ANP RDIANP2
DMH_ASSET_CONT &dmhtsqul.ASC DMHASCS1
DMH_ASSET_FILE &dmhtsqul.ASF DMHASFS1
DMH_BR_RULE_FACT &dmhtsqul.BRF RDIBRF1, RDIBRF2
DMH_PROJECT &dmhtsqul.PRJ RDIPRJ1
DMH_PROJECT_FOCUS &dmhtsqul.PRF RDIPRF1

The following tables list new triggers and trigger packages:

Table 37. Asset inventory count triggers
Trigger Triggering event Table
DMHTRACD After delete from DMH_ASSET_CONT
DMHTRACI After insert into DMH_ASSET_CONT
DMHTRACU After update of DMH_ASSET_CONT
DMHTRAFD After delete from DMH_ASSET_FILE
DMHTRAFI After insert into DMH_ASSET_FILE
DMHTRAFU After update of DMH_ASSET_FILE
DMHT103D After delete from DMH_JBC_FIELD
DMHT103I After insert into DMH_JBC_FIELD
DMHT103U After update of DMH_JBC_FIELD
DMHT104D After delete from DMH_JBC_METHOD
DMHT104I After insert into DMH_JBC_METHOD
DMHT104U After update of DMH_JBC_METHOD
DMHT105D After delete from DMH_CPP
DMHT105I After insert into DMH_CPP
DMHT105U After update of DMH_CPP
DMHT107D After delete from DMH_EJB
DMHT107I After insert into DMH_EJB
DMHT107U After update of DMH_EJB
DMHT109D After delete from DMH_FILTER
DMHT109I After insert into DMH_FILTER
DMHT109U After update of DMH_FILTER
DMHT116D After delete from DMH_TAG
DMHT116I After insert into DMH_TAG
DMHT116U After update of DMH_TAG
DMHT119D After delete from DMH_SERVLET
DMHT119I After insert into DMH_SERVLET
DMHT119U After update of DMH_SERVLET
DMHT120D After delete from DMH_EVENT_LISTENER
DMHT120I After insert into DMH_EVENT_LISTENER
DMHT120U After update of DMH_EVENT_LISTENER
DMHT121D After delete from DMH_TAGLIB_VLDTR
DMHT121I After insert into DMH_TAGLIB_VLDTR
DMHT121U After update of DMH_TAGLIB_VLDTR
DMHT124D After delete from DMH_USERINPUTASSET
DMHT124I After insert into DMH_USERINPUTASSET
DMHT124U After update of DMH_USERINPUTASSET
DMHT127D After delete from DMH_USERINPUTLINK
DMHT127I After insert into DMH_USERINPUTLINK
DMHT127U After update of DMH_USERINPUTLINK
DMHT130D After delete from DMH_WSDL_SERVICE
DMHT130I After insert into DMH_WSDL_SERVICE
DMHT130U After update of DMH_WSDL_SERVICE
DMHT131D After delete from DMH_WSDL_BIND
DMHT131I After insert into DMH_WSDL_BIND
DMHT131U After update of DMH_WSDL_BIND
DMHT132D After delete from DMH_WSDL_PORTTYPE
DMHT132I After insert into DMH_WSDL_PORTTYPE
DMHT132U After update of DMH_WSDL_PORTTYPE
DMHT133D After delete from DMH_WSDL_TYPE
DMHT133I After insert into DMH_WSDL_TYPE
DMHT133U After update of DMH_WSDL_TYPE
DMHT134D After delete from DMH_WSDL_PORT
DMHT134I After insert into DMH_WSDL_PORT
DMHT134U After update of DMH_WSDL_PORT
DMHT140D After delete from DMH_WS_SEI_MAP
DMHT140I After insert into DMH_WS_SEI_MAP
DMHT140U After update of DMH_WS_SEI_MAP
Table 38. Asset inventory delete triggers
Trigger Triggering event Table
DMHASFC1 After delete from DMH_ASSET_CONT
DMHASFD1 After delete from DMH_ASSET_FILE
DMHEVTD1 After delete from DMH_EVENT_LISTENER
DMHFLDD1 After delete from DMH_JBC_FIELD
DMHMETD1 After delete from DMH_JBC_METHOD
DMHTRQP1 After delete from DMH_ANALYSIS_QUEUE
DMHWDPD1 After delete from DMH_WSDL_PORT
DMHWDSD1 After delete from DMH_WSDL_SERVICE
DMHWDYD1 After delete from DMH_WSDL_TYPE
DMHTRPT1 After delete from DMH_PROJECT
DMHTRPT2 After delete from DMH_PROJECT
Table 39. New Constant Types and Strings
Table ID and Description string
DMH_ATTR_NAME
300, Impact level
301, Scope indicator
302, Exclude flag
303, Range end line
304, Range start line
726, Business rules mining project to user relationship
740, Initialize DMH_BR_RULE_FACT table
DMH_CMPNT_TYPE
58, Business rule mining project
59, Project focus row
71, User
DMH_INCL_TYPE
10, Import
DMH_PROJECT_TYPE
1200, Business Rule Mining
DMH_QUEUE_TYPE
22, Data integrity check

Table 40 lists tables that have undergone modifications to their definitions, such as added, changed, and deleted columns. A table may also have had indexes added or changed, and/or may now be declared as VOLATILE.

Table 40. Changed tables and indexes
Table New and modified columns Dropped columns New and modified indexes Volatile
DMH_ATTR_NAME NAME
DMH_BYTECODE_CLASS x
DMH_BR_RULE DESCRIPTION, DEFINITION
DMH_BT_TERM DIMENSION, IMPLEMENTATION
DMH_FILE_SYMBOL x
DMH_HIERARCHY_JBC DMHHBCS1 x
DMH_JBC_FIELD x
DMH_JBC_METHOD x
DMH_LINK_DATA LINK_TYPE_ID
DMH_LINK_DATA_L LINK_TYPE_ID
DMH_LINKS_C DMHLK2S3 x
DMH_LINKS_C_POS P_CMPNT_TYPE_ID, P_ID DMHLK5S2 x
DMH_LINKS_H DMHLK3S3 x
DMH_LINKS_R DMHLK4S1, DMHLK4S2, DMHLK4S3, DMHLK4S4 x
DMH_LINKS_TC APPLICATION_ID LTSI001, LTSI002, LTSI003, LTSI004 x
DMH_PROJECT_TYPE PROJECT_TYPE_ID
DMH_REFS_JBC x
DMH_REFS_JBF x
DMH_REFS_JBM x
DMH_SYMBOL x
Table 41. Changed Constant Strings
Table ID Was value Is value
DMH_ATTR_NAME 41 CALL PLIDLI CALL PLITDLI
DMH_ATTR_NAME 87 USER SCHEMA IMS MFS
DMH_CMPNT_TYPE 28 Impact analysis Impact analysis project
DMH_PROJECT_TYPE 1100 Impact Analysis - Impact Analysis
DMH_QUEUE_TYPE 17 Delete file Delete asset
DMH_SYSTEM_ATTR 500 600 601
Table 42. Changed asset inventory delete triggers
Trigger Triggering event Added actions Removed actions
DMHCPPD1
After delete from DMH_CPP
Delete from
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHCRWD1
After delete from DMH_SCAN_ROOT
Delete from DMH_ANALYSIS_QUEUE
DMHEJBD1
After delete from DMH_EJB
Delete from
DMH_CHANGE_FOCUS
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHFL1D1
After delete from DMH_FILTER
Delete from
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHJSPD1
After delete from DMH_JSP
Delete from
DMH_JSP_JAVA_IMPORT
DMH_JSP_JAVA_FLD_T
DMH_JSP_METHOD_USE
DMHSETD1
After delete from DMH_SERVLET
Delete from
DMH_CHANGE_FOCUS
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHTAGD1
After delete from DMH_TAG
Delete from
DMH_CHANGE_FOCUS
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHTRAP1
After delete from DMH_APPLICATION
Delete from
DMH_LINKS_C
DMH_LINKS_C_POS
DMH_LINKS_H
DMH_LINKS_R
DMHTRLI1
After delete from DMH_CONTAINER
Delete from
DMH_ARCHIVE
DMH_SCAN_ROOT
DMHTRRN1
After delete from DMH_RUN_UNIT
Delete from DMH_RUN_UNIT_CNTNT
DMHVL1D1
After delete from DMH_TAGLIB_VLDTR
Delete from
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHWSED1
After delete from DMH_WS_ENDPOINT
Delete from
DMH_CHANGE_IMPACT
DMH_LINK_SOURCE
DMH_LINK_TARGET
DMHTRME1 After delete from DMH_FILE Delete from DMH_ASSET_FILE
Table 43. Deleted tables, table spaces and indexes
Table Table space Indexes
DMH_JAVA &dmhtsqul.JAV DMHJAVP1, DMHJAVS1
DMH_JAVA_EXCEPTION &dmhtsqul.JV1 DMHJV1S1
DMH_JAVA_FIELDS &dmhtsqul.JV2 DMHJV2S1
DMH_JAVA_IMPORTS &dmhtsqul.JV3 DMHJV3S1
DMH_JAVA_METHODS &dmhtsqul.JV4 DMHJV4S1
DMH_JAVA_MTHD_INVK &dmhtsqul.JV5 DMHJV5S1
DMH_JSP_JAVA_FLD_T &dmhtsqul.JS7 DMHJS7S1
DMH_JSP_JAVA_IMPRT &dmhtsqul.JS8 DMHJS8S1
DMH_JSP_METHOD_USE &dmhtsqul.JS9 DMHJS9S1
Table 44. Deleted auxiliary tables, LOB table spaces and indexes
Auxiliary table Base table LOB table space Index
JAAT002 DMH_JAVA JATS002 JAUI002
Table 45. Deleted constant types and strings
Table ID and Description string
DMH_CMPNT_TYPE
113, Java source class
Table 46. Deleted asset inventory count triggers
Trigger Table
DMHTR28D
DMHTR28I
DMHTR28U
DMHTR38D
DMHTR38I
DMHTR38U
DMH_CMPNT_TYPE

Appendix A. dbutil.sh command reference

The dbutil.sh command is used to create a migration plan, gather information, and run a migration plan. Run the dbutil.sh command from the migration utility directory. Ensure that the environment settings in the /config/User.cfg file are updated and accurate. To receive help for the command, run the command with no parameters.

-createMigratePlan

To build the files that are used for input to the migration utility, use the following command:

dbutil.sh -createMigratePlan targetVersion

-createMigratePlan
targetVersion
Indicates the target version to which the database is being migrated. Specify one of the following values: v311, v312, v410, v420, v421, v510, v511, v550, v552, v554, v600, or v601. If the specified version is the same as the current version of the database, no migration files are created.

If the database is currently at v312 and the target version is v421, issue the following command:

 dbutil.sh -createMigratePlan v421

and expect output similar to this:

JVM version detected: [#.#.#]

Database information detected...
Version: 312
OS: DB2_ZOS

Database version appears to be at level v312

----------------------------------------------------------------------
Verifying current schema of database as 'v312'
----------------------------------------------------------------------
Current database version verified.

----------------------------------------------------------------------
Creating migration plan from current version 'v312' to target version 'v421'.
----------------------------------------------------------------------
Creating migration plan for version 'v410'
Creating migration plan for version 'v420'
Creating migration plan for version 'v421'
Finished without errors.

The migration utility creates a migrate_plan directory, which contains an XML file for each schema level that must be applied to migrate to the target version. For the above example, the following files are generated:

Migrate_v410.xml
Migrate_v420.xml
Migrate_v421.xml

If the database is at a particular schema level and you attempt to create a migration plan for that same schema level, no Migrate_version.xml file is created. For example, a database is currently at the v312 level and the following command is issued:

dbutil.sh -createMigratePlan v312

The following message is received, indicating that a migration plan was not created.

JVM version detected: [#.#.#]

Database information detected...
Version: 312
OS: DB2_ZOS

Database version appears to be at level v312.
Target version 'v312' is less than or equal to the current database version 'v312'.
No plan created.
Finished without errors.
Note: The migration utility deletes the migrate_plan directory and all Migrate_version.xml files on subsequent executions of the -createMigratePlan command.
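Because the migrate_plan directory is deleted on the next run of the -createMigratePlan command, you might want to preserve a plan that the database administrator has already reviewed. The following sketch is illustrative only: it fabricates a plan directory and file rather than using real dbutil.sh output.

```shell
# Illustration only: back up a reviewed migration plan before rerunning
# -createMigratePlan, which deletes migrate_plan and its XML files.
# The directory and file below are fabricated for the sake of the example.
mkdir -p migrate_plan
touch migrate_plan/Migrate_v410.xml

# Copy the plan to a backup directory before the next -createMigratePlan run.
rm -rf migrate_plan.bak
cp -r migrate_plan migrate_plan.bak

ls migrate_plan.bak
```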

-getInfo

Use the dbutil.sh -getInfo command to return information about the database.

The dbutil.sh -getInfo command has the following format:

dbutil.sh -getInfo [ -args ]

The optional -args override the values defined in the User.cfg file. If you want to get information about a different Rational Asset Analyzer database without having to edit User.cfg, use the -args parameters to specify the database connection information.

The values for -args are:

-dbUrl ?
    ? = database URL
-user ?
    ? = user ID
-pw ?
    ? = password
-tschema ?
    ? = table schema name
-dbDriver ?
    ? = database driver class name
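For example, the -args can be combined on a single command to point -getInfo at a different database. Every value below is a placeholder following the conventions used elsewhere in this guide, not a real connection setting:

```shell
# Illustration only: all argument values are placeholders.
dbutil.sh -getInfo \
    -dbUrl jdbc:db2os390:your_DB2_location \
    -user userid \
    -pw password \
    -tschema your_schema_name \
    -dbDriver your_DB2_driver
```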

Below is sample output obtained from issuing the dbutil.sh -getInfo command with no -args:

JVM version detected: [#.#.#]
      Url: jdbc:db2os390:your_DB2_location
      Version: ###
      OS: DB2_ZOS

-runMigratePlan

Use the dbutil.sh -runMigratePlan command to perform the database migration based on a migration plan. The dbutil.sh -runMigratePlan command has the following format:

dbutil.sh -runMigratePlan [ task_steps ]

task_steps
This optional parameter specifies the Task steps to be executed. One Task step or a range of Task steps can be specified. If no Task steps are specified on the command, the entire migration plan is executed; that is, all XML files matching /migrate_plan/Migrate_v*.xml are executed. Do not run Task steps out of sequence. For example, do not run Task step v552.3 until Task steps v552.1 and v552.2 are complete.

At the completion of a -runMigratePlan run (in this case, migrating to v510), you should see the following output immediately after the last echoed SQL statement.

----------------------------------------------------------------------
       'v510.##' - timestamp
       null
      ----------------------------------------------------------------------

      ----------------------------------------------------------------------
       Verifying current schema of database as 'v510'
      ----------------------------------------------------------------------
      Current database version verified.

      ----------------------------------------------------------------------

      **********************************************************************
      NOTICE:  POST-MIGRATION ACTION REQUIRED
      To complete migration for the v5.1.0 level, you should perform the
      following:
      1) Review the list of files identified in the generated file
      Obsolete_Messages.txt and outline a plan to reanalyze these assets.
      Please refer to the releasenotes.txt for further information.
      **********************************************************************

      Finished without errors.

The following examples show how various specifications of the command can be used to perform a step-by-step migration.

-verifySchema

Use the dbutil.sh -verifySchema command to verify that a particular Rational Asset Analyzer database contains the expected tables, columns, indexes, and triggers for a given schema level. A list of differences, if any, is displayed, identifying the missing or extra database objects (tables, columns, indexes, and triggers). It is recommended that you investigate and resolve any reported differences.

The dbutil.sh -verifySchema command has the following format:

dbutil.sh -verifySchema version

where version is one of the following values: v310, v311, v312, v410, v420, v421, v510, v511, v550, v552, v554, v600.

Note: The verification is automatically performed at the start of a -createMigratePlan command and at the end of each completed migration level of a -runMigratePlan command.

The following example shows the output for verifying schema for the v310 level. Issuing the dbutil.sh -verifySchema v310 command produced the following output:

 JVM version detected: [#.#.#]
Verifying database with the following properties:
    targetVersion = 'v310'
    dbUrl = 'jdbc:db2os390:your_DB2_location'
    user = 'userid'
    pw = 'password'
    os = 'zOS'
    tSchema = 'your_schema_name'
    dbDriver = 'your_DB2_driver'
    isDistEnabled = 'true_or_false'
current_date current_time com.ibm.dmh.db.DbInfo
INFO: Database os detected as vendor=IBM, os=DB2_ZOS, version=your_db2_version
Baseline comparison showed 0 differences.
Finished without errors.

The following example shows the output from issuing the same command for a database where there are two differences between the current database and the migration level. A message is displayed for each difference. Resolve all differences before continuing with the migration process.

JVM version detected: [#.#.#]
Verifying database with the following properties:
    targetVersion = 'v312'
    dbUrl = 'jdbc:db2os390:your_DB2_location'
    user = 'userid'
    pw = 'password'
    os = 'zOS'
    tSchema = 'your_schema_name'
    dbDriver = 'your_DB2_driver'
    isDistEnabled = 'true_or_false'
current_date current_time com.ibm.dmh.db.DbInfo
INFO: Database os detected as vendor=IBM, os=DB2_ZOS, version=your_db2_version
Baseline comparison showed 2 differences.
1: Database is missing index 'ASSI001' in table 'DMH_ASSETS'
2: Database is missing trigger 'DMHTRSB7' in table 'DMH_DB2_PROC'

Appendix B. Warnings from implementing a migration plan

An SQL warning can be produced when there is no data in a table for a given task, such as an empty table or zero rows affected. Generally, these types of warnings can be ignored.

The following examples show a DELETE of a row that does not exist, followed by an INSERT from an empty table.

DELETE FROM DMH.DMH_SYSTEM_ATTR   WHERE SYS_ATTR_ID = 500 STATUS: WARNING  (SQL-100)

INSERT INTO DMH.DMH_FLD_BTCD_NEW     (ASSET_ID, PHYSICAL_ID, ACCESS, FREE_TEXT, NAME, FULL_NAME, TYPE)     SELECT         ASSET_ID, PHYSICAL_ID, ACCESS, CAST(FREE_TEXT as VARCHAR(500)), NAME, FULL_NAME, TYPE     FROM         DMH.DMH_FIELD_BYTECODE STATUS:  WARNING 

Appendix C. Handling migration plan failures

If a Task step in the migration fails, you are prompted to ignore the problem and continue, or to stop the migration utility.

About this task

If a Task step fails, do the following steps before continuing with the migration.

Procedure

  1. Stop the migration utility.
  2. Open the dbutil.log file.
  3. In the log file, find the Task step number for the step that failed, for example, v312.10. A COMMIT would have been performed at the completion of the last successful Task step.
  4. Identify the failing SQL statement and resolve the condition causing the failure.
  5. Pay close attention to which SQL statements, if any, in the current Task step succeeded. Any SQL statements that were executed after the last successful COMMIT call will have been automatically rolled back.
  6. Identify any remaining SQL statements that should have been run in the failed Task step by locating the Task step in the appropriate /migrate_plan/Migrate_version.xml file. Continue the migration by doing one of the following:
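Step 3 of the procedure can be approximated by searching dbutil.log for the failing statement and for the Task step markers that precede it. The sketch below fabricates a minimal dbutil.log excerpt, following the log format shown in the example in this appendix, purely for illustration:

```shell
# Illustration only: fabricate a minimal dbutil.log excerpt; a real log is
# written by dbutil.sh during a -runMigratePlan run.
cat > dbutil.log <<'EOF'
[v410.31] - timestamp
DROP TABLE DMH.DMH_MYTABLE1
STATUS: OK
DROP TABLE DMH.DMH_MYTABLE2
STATUS: ERROR
EOF

# Locate the failing statement, then the Task step marker that precedes it.
grep -n "STATUS: ERROR" dbutil.log   # prints: 5:STATUS: ERROR
grep -n "^\[v" dbutil.log            # prints: 1:[v410.31] - timestamp
```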

Example

The following is an example of a Task step failure and its resolution. The dbutil.log file contained the following error:

--------------------------------------------------------------------
[v410.31] - timestamp
Delete some tables
----------------------------------------------------------------------
DROP TABLE DMH.DMH_MYTABLE1
STATUS: OK

COMMIT
STATUS: OK

DROP TABLE DMH.DMH_MYTABLE2
[IBM][CLI Driver][DB2/NT] SQL0204N "DMH.DMH_MYTABLE2" is an undefined name. SQLSTATE=42704
STATUS: ERROR

Exception occurred while running the last command.
Continue (Rollback occurs if 'N')? [Y(es)/N(o)] N

To determine the steps needed to get around the issue and restart the migration utility, do the following steps:

Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.