Restoring a migrated configuration
Preliminary release -- this document is not yet indexed.
Look for an updated version with an index in the future.
This document can be found on the web at: http://www.ibm.com/support/techdocs
Search for document number WP100559 under the
category of "White Papers"
The complete Migration Guide document is attached below, but can
also be found at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100559
Changes made to the V5 directory
When the BBOWMG3D/F job is run, it makes two changes to the source node
configuration structure:
Note:
This information applies only to Deployment Manager nodes or federated
application server nodes. It does not apply to a Standalone, or "BaseApp,"
server node.
1. It renames the serverindex.xml file to
serverindex.xml_disabled.
Every node has a serverindex.xml file. It is located in the "node level"
directory for that node. Here is an example of where the serverindex.xml
file is for the G5CELL's Deployment Manager node:
[Figure: location of the node's serverindex.xml file]
2. It copies a Jacl script into that node's /bin directory. That Jacl
script is a simple input file for the WSADMIN scripting interface. The
Jacl script is called:
migrationDisablementReversal.jacl
Every node has a /bin directory. It is located right under the node's
root directory. Here is an example of where the /bin directory is for the
G5CELL's Deployment Manager node:
[Figure: location of the node's /bin directory]
The effect of renaming serverindex.xml is that the V5.0 (any release)
configuration is unable to start. The symptom you will see if you try is a
"serverindex.xml file not found" error message. So restarting a V5
configuration after it has been migrated involves renaming the
serverindex.xml_disabled file back to serverindex.xml.
Restoring a migrated configuration -- the manual method
The process is relatively simple:
o Stop V6 servers -- they cannot be up at the same time the
V5.0 (any release) copy of the server is up.
o Unmount V6 configuration HFS -- it is best not to have
this around to confuse matters. You may wish to delete the HFS file system
altogether after unmounting.
o Restore procs -- copy back to PROCLIB the V5.0 (any
release) JCL you backed up earlier.
o Change serverindex.xml file name -- go into
the "node level" directory of each node that was migrated and rename
serverindex.xml_disabled to serverindex.xml
o Start V5.0 (any release) servers -- as you normally
would
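The rename step above can be sketched as a small shell helper. This is only an illustration; the function name and the example path in the comment are assumptions, not part of the product -- substitute the actual node-level directory of each migrated node.

```shell
# restore_node: rename serverindex.xml_disabled back to serverindex.xml
# in the given "node level" directory. Hypothetical helper for
# illustration only.
restore_node() {
    node_dir=$1
    if [ -f "$node_dir/serverindex.xml_disabled" ]; then
        mv "$node_dir/serverindex.xml_disabled" "$node_dir/serverindex.xml"
        echo "restored: $node_dir/serverindex.xml"
    else
        echo "no serverindex.xml_disabled found in $node_dir" >&2
        return 1
    fi
}

# Example (hypothetical path -- use your own node-level directory), e.g.:
# restore_node /WebSphere/V5R0M0/DeploymentManager/config/cells/g5cell/nodes/g5dmnode
```

Run the helper once per migrated node; it fails harmlessly if the node was never disabled.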
Restoring a migrated configuration -- the "automated" method
Also relatively simple:
o Stop V6 servers -- they cannot be up at the same time the
V5.0 (any release) copy of the server is up.
o Unmount V6 configuration HFS -- it is best not to have
this around to confuse matters. You may wish to delete the HFS file system
altogether after unmounting.
o Restore procs -- copy back to PROCLIB the V5.0 (any
release) JCL you backed up earlier.
o Invoke WSADMIN to rename serverindex.xml -- this
involves running the WSADMIN scripting interface and pointing it to the
migrationDisablementReversal.jacl file as input:
- From a Telnet or OMVS session, go to the /bin directory of
your V5 node
- Issue the following command:
./wsadmin.sh -f migrationDisablementReversal.jacl -conntype NONE
Notes:
- This is a very simple Jacl script that simply renames the
serverindex.xml file back to its proper name.
- It is important to be in the proper /bin directory. That
Jacl script operates on the "node level" copy of serverindex.xml for
whichever node's /bin directory you invoke the WSADMIN shell script
from. Pay attention.
- It is important to include the -conntype NONE option on
that command, so WSADMIN runs locally rather than trying to connect to a
server that is not running.
o Start V5.0 (any release) servers -- as you normally
would
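The "automated" steps can be wrapped in a small helper that changes into a node's /bin directory before invoking WSADMIN, since the script acts on whichever node's /bin it is run from. The function name and the example node-home path are assumptions for illustration only.

```shell
# run_reversal: invoke the Jacl reversal script from a given V5 node's
# own /bin directory. -conntype NONE makes wsadmin run locally, without
# connecting to a (stopped) server. Hypothetical helper for illustration.
run_reversal() {
    node_home=$1
    ( cd "$node_home/bin" && \
      ./wsadmin.sh -f migrationDisablementReversal.jacl -conntype NONE )
}

# Example (hypothetical V5 node home -- substitute your own), e.g.:
# run_reversal /WebSphere/V5R0M0/DeploymentManager
```

The subshell ensures your working directory is unchanged afterward, so you can call the helper once per migrated node.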
Is there a significant difference between the two methods?
No. One is a manual rename of the serverindex.xml_disabled file; the other
is "automated."
When the BBOWMG3* job fails
The BBOWMG3* job has 13 steps, each of which must run with RC=0 for the
migration to be successful. What happens if the job runs part-way and
fails, leaving an incomplete migration? Recovery is relatively simple. Do
the following:
o Under the V6 configuration HFS mount point, delete the node's "root"
(or "home") directory and all subdirectories.
Important:
Pay close attention to where you are and what you delete when you
do this step. Make sure you are in the V6 HFS and not the V5, and make sure
you delete the node you wish to delete and not some other already-migrated
and working node.
For instance, a Deployment Manager node's home will be /DeploymentManager.
For the G5CELL the application server node on SYSC was /AppServerNodeC.
Note:
The point is this: you do not have to
delete the whole HFS, just the failed node's home.
o Fix whatever caused BBOWMG3* to fail.
o Resubmit the BBOWMG3* job.
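The cleanup step above can be sketched with a guarded helper, given how dangerous a misdirected rm -r is here. The function name and the example paths in the comment are assumptions; verify both the V6 mount point and the node home before running anything like this.

```shell
# delete_failed_node: remove a failed node's home (and all
# subdirectories) under the V6 configuration mount point, with basic
# sanity guards. Hypothetical helper for illustration only.
delete_failed_node() {
    v6_mount=$1; node_home=$2
    target="$v6_mount$node_home"
    # Guards: both parts must be non-empty and the target must exist,
    # so we never run rm -r against "/" or a mistyped path.
    if [ -z "$v6_mount" ] || [ -z "$node_home" ] || [ ! -d "$target" ]; then
        echo "refusing to delete '$target'" >&2
        return 1
    fi
    rm -r "$target"
    echo "deleted: $target"
}

# Example (hypothetical paths -- confirm the V6 mount point and the
# failed node's home first), e.g.:
# delete_failed_node /WebSphere/V6R0 /DeploymentManager
```

After the failed node's home is gone and the underlying cause is fixed, resubmit BBOWMG3*.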
WP100559 - Migrating a Configuration from V5.0 (any release) to V6