Migrating a Network Deployment Configuration from V5.1 to V6
A chronicle of an actual migration
This document can be found on the web at: http://www.ibm.com/support/techdocs
Search for document number WP100559 under the category of
"White Papers"
The complete Migration Guide document is attached below, but can also be found at:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100559
Version Date: Sat, Aug 6, 2005
Table of Contents

Initial Information and Disclaimers | 4
  About this document | 4
  How this document is constructed | 4
  Fundamental assumption underlying this entire document | 4
An Overview of the Version 6 Migration Process | 5
  Essential concepts of migration | 5
  Flow of migration process | 6
  Differences from V5.1 migration process | 6
    One primary migration utility | 6
    ISPF Dialog creates customized migration jobs | 6
    Simpler ISPF panels to define configuration being migrated | 7
    Version 6 information asked for in ISPF panels | 7
    Additional V6 post-migration security work needed | 7
    New JCL start procedures created during migration job generation | 7
    Mixed-version nodes within a cell on the same MVS image permitted (with limitations) | 7
    Other V6 "mixed node" information | 8
    Coexistence of separate V5 and V6 cells on same MVS image permitted | 9
    Coexistence of V6 with V5.1 nodes in the same cell or same MVS image permitted | 9
    Coexistence of V6 with V5.0 nodes in the same cell on same MVS not permitted | 10
    Some limitations on what you can do with mixed nodes | 11
  Questions and Answers | 11
    Does it matter if I am at V5.0 or V5.1? | 11
    Is there a minimum level of maintenance the V5 nodes should be at? | 11
    Can a V4 configuration be migrated to V6? | 12
    If my cell is at V5.0, should I migrate to V5.1 before migrating to V6? | 12
    Is there a proper sequence for migrating the nodes of a cell? | 12
    What about a "Base Application Server" node? | 12
    Do the servers in the node have to be down when it is migrated? | 12
    Do all the servers in my cell have to be stopped when migrating? | 12
    Is it possible to migrate one server at a time? | 12
    Is it possible to migrate a configuration when security is enabled? | 13
    Re-use my existing V5 security profiles, or create new? | 13
    Perform "post-migration security work" even if global security is not enabled? | 13
    Do all servers in a node get migrated when the node is migrated? | 13
    Are the applications in the servers migrated as well? | 13
    What about TCP ports? | 13
    May different people perform migration on different nodes at the same time? | 13
    Need to do 'PRR' processing like we did in V5.0-to-V5.1 migration? | 13
The "G5CELL" Network Deployment Configuration | 15
  Diagram of the cell layout | 15
  Description of configuration | 15
    Cell information | 16
    Deployment Manager Node | 16
    Application server node on SYSC | 16
    Application server node on SYSD | 17
  Plan for migrated configuration | 17
  Planned sequence of node migration | 17
Migrated the Deployment Manager Node on SYSC | 18
  Preliminary work | 18
  Invoked ISPF dialogs and customized migration jobs | 19
  Reviewed instruction member BBOMDINS in CNTL data set | 25
    Important note -- please read | 26
  Stopped Deployment Manager | 26
  Ran customized jobs | 26
    The BBOWMDMT job | 26
    The BBOMDCP job | 27
    The BBOWMG3D job | 28
  Performed post-migration RACF work | 28
  Stopped the Daemon on SYSC | 29
  Started Deployment Manager | 30
  Started servers in G5NODEC on SYSC | 31
  Status of the cell at this point in time | 31
    What the V6 Administrative Console showed for the nodes on SYSC and SYSD | 31
    Status of the old V5.1 Deployment Manager configuration | 31
  The gathering storm -- the problem of shared procs, re-using proc names and STEPLIB | 32
    The Daemon dilemma | 32
    If we had to do PRR processing for servers on SYSD ... | 33
    On the horizon: the shared application server proc dilemma | 34
    Wrap-up: our "gathering storm" may not be your problem | 34
Migrated the Application Server Node on SYSD | 35
  Why SYSD next and not SYSC? | 35
  Preliminary work | 35
  Invoked ISPF dialogs and customized migration jobs | 36
  Reviewed instruction member BBOMMINS in CNTL data set | 43
    Important note -- please read | 43
  Made sure Deployment Manager was up and running | 43
  Daemon server and all G5NODED servers shut down | 44
  Ran customized jobs | 44
    The BBOWMMMT job | 44
    The BBOMMCP job | 44
    The BBOWMG1F job | 45
    The BBOWMG2F job | 45
    The BBOWMG3F job | 45
  Performed post-migration RACF work | 46
  Started servers on SYSD | 46
    Node Agent | 46
    What the V6 Administrative Console showed for the nodes on SYSC and SYSD | 47
    New ports created for V6 application server | 47
    Application server G5SR02D | 49
  The SYSC application server JCL procedure dilemma at this point | 49
Migrated the Application Server Node on SYSC | 50
  Preliminary work | 50
  Invoked ISPF dialogs and customized migration jobs | 50
  Reviewed instruction member BBOMMINS in CNTL data set | 57
    Important note -- please read | 58
  Made sure Deployment Manager was up and running | 58
  Stopped all servers in the G5NODEC node | 58
  Ran customized jobs | 59
    The BBOWMMMT job | 59
    The BBOMMCP job | 59
    The BBOWMG1F job | 59
    The BBOWMG2F job | 59
    The BBOWMG3F job | 59
  Post-migration RACF work | 60
  Started servers on SYSC | 60
    Node Agent | 60
    New ports created for V6 application server | 61
    Application server G5SR01C | 61
  Status at this point | 61
Migrated a Base Application Server Node | 62
  Picture of our Base Application Server node | 62
  Preliminary work | 62
  Invoked ISPF dialogs and customized migration jobs | 63
  Reviewed instruction member BBOMBINS in CNTL data set | 69
    Important note -- please read | 70
  Stopped server and Daemon | 70
  Ran customized jobs | 70
    The BBOWMBMT job | 70
    The BBOMBCP job | 71
    The BBOWMG1B job | 71
    The BBOWMG2B job | 71
    The BBOWMG3B job | 71
  Post-migration RACF work | 71
  Started Standalone server | 72
  Mapped new ports created for V6 application server | 72
  Stopped server and restarted (to use re-mapped ports) | 72
Other Information | 73
  Known issues | 73
  Falling back to V5 (or recovering a migrated node) | 73
    Changes made to the V5 directory | 73
    Restoring a migrated configuration -- the manual method | 74
    Restoring a migrated configuration -- the "automated" method | 74
    Is there a significant difference between the two methods? | 75
  When the BBOWMG3* job fails | 75
Document Change History | 76
Initial Information and Disclaimers
About this document
This document is not intended to provide an exhaustive reference
for all things related to the migration of nodes from V5.0 (any release)
to V6.0. The purpose of this document is to familiarize the reader with
the process and provide enough insight so the process of migration can be
approached confidently.
But there are things we do not go into here, such as how to migrate an
IJP configuration to the new V6 architecture. Perhaps at some future date
those topics will be included in this document.
The WebSphere Application Server for z/OS Version 6 Information Center
has a section on migration and should be considered the definitive source
for migration information.
How this document is constructed
To do this, the document chronicles the steps taken to migrate an actual configuration at the IBM Washington
Systems Center. That configuration was a Network Deployment configuration
at the V5.1 W510207 level of maintenance. It consisted of two application
server nodes spanning two MVS images in a Sysplex.
This method of illustrating a migration is useful for two reasons:
1. It provides a way for you to see "real" steps performed against a
"real" configuration. From that you can map what you read here to your
environment and understand things better.
2. It provides a handy framework in which to provide a running commentary
of things to be aware of and things to watch out for.
Fundamental assumption underlying this entire document
We assumed the WebSphere Application Server for z/OS Version 6 code was
properly installed on the system, including all the steps required to
allow this code to operate on the systems.
In other words, we assumed the newly migrated servers had what they
needed to start, provided we had done the migration properly.
An Overview of the Version 6 Migration Process
The migration process provided to go from V5.0 (all releases) to V6 is much improved over the one provided to go from V5.0 to V5.1. So if you're
harboring painful welts from that experience, then cheer up: this will be
relatively easy.
Essential concepts of migration
At the very highest level this is the same as it was for the
V5.0-to-V5.1 migration; that is, an existing configuration in an HFS is
copied out, transformed, and written into a new HFS:
High level view of what migration entails
Further, the migration process is a node-by-node procedure, just like
it was before:
Migration utilities must be run against each node in your
configuration, including Deployment Manager
Flow of migration process
Here is a snapshot of what the process is like to migrate a multi-node
Network Deployment configuration:
1. Take an inventory of your existing environment so you have a
feel for things like mount points, userids and groups, node directory
roots and JCL start procedures
The information shown under "The "G5CELL" Network Deployment
Configuration" starting on page 15 gives you a good idea of what kind of
information you will need.
2. Back up your "source" configuration HFS
The migration process alters that configuration HFS. It is a minor alteration, but an alteration nevertheless. So "just to be sure" you should back up the source configuration HFS using your preferred file backup tool (a sample backup job is sketched just after this list).
Note:
See "Falling back to V5 (or recovering a migrated node)" on page 73 for more on the changes made to the source configuration.
3. Run through the ISPF customization dialogs that are provided with V6. They will generate customized migration jobs for the node.
ISPF panels capture key information and then generate customized
migration jobs
We illustrate that panel-by-panel under each of the node sections in
this document.
4. Migrate the node by running the customized jobs
This involves submitting the jobs and checking for RC=0.
5. Perform post-migration work
This involves creating a few additional RACF profiles.
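As an illustration of the backup mentioned in step 2, here is a minimal sketch of a batch job that archives a configuration HFS with the z/OS UNIX pax command driven through BPXBATCH. The job name, the mount point /wasv5config/g5cell and the archive path are invented for illustration; substitute your own values, or use whatever backup tool your installation prefers.

//G5BACKUP JOB (ACCT),'BACKUP V5 CONFIG',CLASS=A,MSGCLASS=H
//*------------------------------------------------------------------
//* Hypothetical sketch: archive the V5 configuration HFS with pax
//* before running the migration jobs. The mount point and archive
//* file shown here are examples only.
//*------------------------------------------------------------------
//PAXBKUP  EXEC PGM=BPXBATCH,REGION=0M,
// PARM='SH pax -wzf /tmp/g5cell.cfg.pax.Z /wasv5config/g5cell'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*

Restoring from such an archive would then be a matter of mounting an empty HFS at the same mount point and reading the archive back with pax -rzf.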
Differences from V5.1 migration process
While many of the concepts are the same, some of the specifics of the
V6 migration process are different. Let us cover them here:
One primary migration utility
In the V5.0-to-V5.1 migration process there were five migration
utilities: BBOXMIG1 through BBOXMIG5. With the V6 migration process, that
has been reduced so that one job is responsible for doing the
heavy-lifting of migration.
Note:
But that is not to say that only one job is run. There are other
jobs, but they do more mundane things like create and mount the new HFS,
or copy new JCL procedures into PROCLIB. With V6 there's one primary
migration job that does what used to take two or three.
ISPF Dialog creates customized migration jobs
The V5.0-to-V5.1 migration process came packaged as a series of JCL
jobs in a PDS that you had to hand-modify. With V6 there is an option in
the ISPF Customization Dialogs that will create those migration jobs for
you, with all the relevant information updated in all the right spots.
Simpler ISPF panels to define configuration being migrated
The V5.0-to-V5.1 migration process required you to run through the
standard customization dialogs to build a "skeleton" configuration. That
"skeleton" was used as input to the migration process. It was somewhat
confusing because much of the information entered into the "skeleton"
configuration was "dummy information" -- things not needed by the
migration process but required by the ISPF dialogs.
With V6, the ISPF panels are much simpler, capturing only that
information the migration utilities really require. So the process is far
less confusing than it was for V5.0-to-V5.1.
Version 6 information asked for in ISPF panels
These utilities will convert a set of V5.0 (all releases) servers into
V6.0 servers. The architecture of the V6 server is different from V5, most
notably the "High Availability Manager" function. The ISPF panels that are
used to generate the migration jobs will ask you for the host value
assigned to this new function.
We will highlight this during the illustration of the panels used to
migrate the "G5CELL."
Additional V6 post-migration security work needed
V6 requires a few more security profiles than did V5. That means that
after migrating a node, some additional RACF (or other SAF interface
security product) work is required. Not much, but a little.
We will highlight this at the appropriate spots in the illustration of
the migration of the "G5CELL" used for this white paper.
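To give a flavor of what this post-migration security work looks like, the sketch below issues RACF commands in batch through the TSO terminal monitor program IKJEFT01. The class, profile and group names shown are placeholders only, not the actual profiles V6 requires; the customized instruction member generated by the dialogs (for example BBOMDINS for the Deployment Manager node) lists the exact definitions to create.

//G5RACF   JOB (ACCT),'POST-MIG RACF',CLASS=A,MSGCLASS=H
//*------------------------------------------------------------------
//* Hypothetical sketch: issue RACF commands in batch. The profile
//* and group names are placeholders -- take the real definitions
//* from the generated instruction member for your node.
//*------------------------------------------------------------------
//RACF     EXEC PGM=IKJEFT01,REGION=0M
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  RDEFINE SERVER BBO.G5CELL.EXAMPLE UACC(NONE)
  PERMIT BBO.G5CELL.EXAMPLE CLASS(SERVER) ID(G5CFGGRP) ACCESS(READ)
  SETROPTS RACLIST(SERVER) REFRESH
/*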
New JCL start procedures created during migration job
generation
The V6 migration process will ask for names to be used for the JCL
start procedures for server controllers of the migrated node. It will then
generate new JCL for you and copy that JCL into your PROCLIB. The
implication is that new JCL procedures are required for V6, and indeed the
new JCL procedures do look quite a bit different. But there is a subtlety
here that we will illustrate in this document:
- Yes, new JCL start procedures are needed. At a minimum the
SET ROOT= value that points to the configuration mount point must be
different. If STEPLIB statements are used then those also need to be
different from V5.
- But the names of the JCL start procedures do not need to be different. Keeping the same names as used with V5 provides a key benefit: you will not need to create new STARTED profiles. New controller start procedures generated with the same names will, however, replace your existing V5 procedures when the copy job runs.
What this means is that you will need to back up your V5 start procedures before running the job that copies in the new JCL.
Again, we will illustrate this at the appropriate spot in the
document.
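To make those two points concrete, here is a heavily abbreviated, hypothetical fragment of a controller start procedure. It shows only the two statements discussed above; the procedures actually generated by the dialogs contain more than this, and the procedure name, mount point and load library names below are invented for illustration.

//G5DMGR   PROC ENV=G5CELL.G5NODEDM.G5DMGR,PARMS=' '
//*------------------------------------------------------------------
//* Hypothetical fragment only -- not a complete generated procedure.
//* SET ROOT must point at the new V6 configuration mount point.
//*------------------------------------------------------------------
// SET ROOT='/wasv6config/g5cell/g5dmgr'
//* If the V6 code is not in LPA/LNKLST, STEPLIB the V6 load
//* libraries (data set names here are examples only).
//STEPLIB  DD DISP=SHR,DSN=WAS600.G5CELL.SBBOLD2
//         DD DISP=SHR,DSN=WAS600.G5CELL.SBBOLOAD

Because the procedure keeps its V5 name (G5DMGR here is only an example), the existing STARTED profile continues to apply; only the contents of the PROCLIB member change.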
Mixed-version nodes within a cell on the same MVS image permitted
(with limitations)
One of the nice features of V6 is that it permits a Deployment Manager
running at the V6 level of code to manage nodes still at the V5 level of
code. What is more, a V6 Daemon is capable of supporting -- on the same
MVS image -- V5.1 servers in a node that is part of that Daemon's
cell.
This provides considerable flexibility in the migration process. Back
when V5.1 came out, the migration process there required that two or more
nodes from a cell on the same MVS image be migrated "at the same time" (in
other words, one immediately after the other). What this meant was that if
you had an application server node on the same MVS image as its Deployment
Manager, you had to stop both and migrate them both before you could
restart either.
The issue was the Daemon server. Two or more nodes from the same cell
on the same MVS image share the same Daemon server. If the Daemon server
is running at the higher level of code, then the not-yet-migrated node
(running at the lower level of code) would have to be compatible with the
Daemon. In Version 5.1 this was not the case: a Daemon running at
V5.1 was not compatible with servers in a node running at V5.0.
Notes:
We are being somewhat precise with our language here. Let us lock down
some things:
- If the two nodes on the same MVS image were in
different cells, it would have been okay. That is because they
would have been supported by different Daemon servers. The issue
arose when two nodes from the same cell resided on the same MVS image. When that was the scenario, they shared the same Daemon.
- If the two nodes were on different MVS images, that too
would have been okay. Again, that would imply different Daemon servers.
The white paper WP100441 clearly outlined how a Network Deployment
configuration that spanned two or more MVS images could be
non-disruptively migrated from V5.0-to-V5.1.
But with Version 6 you have additional flexibility. When migrating from
V5.1-to-V6, the Daemon code is compatible with both V6 and V5.1, so
a configuration like this is possible:
Two nodes from the same cell on the same MVS image at different version
levels
But there are limitations:
- The V5 node must be at V5.1, and not V5.0 (any modification)
- If one of the nodes is the Deployment Manager node, it
must be at the higher level
Message:
If your V5 configuration is still at V5.0 (any modification), then
plan on migrating all nodes from a cell on the same MVS image immediately
after one another. And do the Deployment Manager first, then the
application server node because the DMGR needs to be up and running to
migrate an application server node.
But if your nodes are at V5.1 (and W510207 at a minimum), then you have
the flexibility to have a V6 node and V5.1 node from the same cell coexist
on the same MVS image.
Other V6 "mixed node" information
We know this topic is likely to generate some interest and discussion.
Let us try to get out in front of that with a little information here.
Coexistence of separate V5 and V6 cells on same MVS image
permitted
There is nothing about V6 that would prevent two different cells -- one
at V5 and one at V6 -- from coexisting on the same MVS image (or the same
Sysplex for that matter, but on the same MVS image is the more challenging
test):
Version 5.0 (any release) and Version 6.0 cells may coexist on the same
MVS image
Notes:
There are some limitations to this, all fairly common-sense things:
- Both V5 and V6 modules can't be in LPA/LNKLST at the same
time
- You cannot share JCL start procedures between the V5 cell
and the V6 cell when STEPLIB statements are in the JCL
As long as you provide essential separation between the two -- separate
mount point, separate HFS, separate JCL start procedures -- the two will
happily coexist.
Coexistence of V6 with V5.1 nodes in the same cell or same MVS image
permitted
As we stated earlier, a V5.1 node has an increased ability to coexist
with V6.0 servers in the same cell or even in the same cell on the same
MVS image:
Mixed V5.1/V6.0 cells possible -- same MVS image or other MVS image
Notes:
There are some limitations to this:
- The Deployment Manager must be at V6.0 to manage V5.1
nodes. The reverse is not permitted -- a V5.1 DMGR can't manage a V6.0
node.
- Both V5 and V6 modules cannot be in LPA/LNKLST on the same MVS image at the same time; at a minimum, one of them must be STEPLIBed
- A V6.0 node cannot share JCL start procedures with a V5.1
node when STEPLIB statements are used in the JCL.
In addition to this, there are limitations to what kind of management a
V6.0 DMGR can perform upon a V5.1 node. See "Some limitations on what you
can do with mixed nodes" on page 11 for more.
Coexistence of V6 with V5.0 nodes in the same cell on same MVS not
permitted
Here we are a bit more limited. A V6.0 and a V5.0 node from the same
cell cannot coexist on the same MVS image. The Daemon server -- which must
be at the level of code equal to the highest code-level node for that cell
on that MVS image -- is not compatible with a V5.0 node. (A V5.1 node yes,
a V5.0 node no.)
Cannot have a V5.0 node and a V6.0 node in the same cell on the same
MVS image
A cell with mixed V6.0 nodes and V5.0 nodes is permitted, provided they
are not mixed on the same MVS image. Running the V5.0 servers on another MVS image, where they can be supported by a V5.0 Daemon, is okay.
Notes:
There are some limitations to this:
- The Deployment Manager must be at V6.0 to manage V5.0 nodes. The reverse is not permitted -- a V5.0 DMGR can't manage a V6.0 node.
- A V6.0 node cannot share JCL start procedures with a V5.0 node when STEPLIB statements are used in the JCL.
In addition to this, there are limitations to what kind of management a V6.0 DMGR can perform upon a V5.0 node. See "Some limitations on what you can do with mixed nodes" on page 11 for more.
Some limitations on what you can do with mixed nodes
We have established that mixed node cells are possible. We have
established that the Deployment Manager has to be at V6.0 to manage "down-level" nodes. Here is a list of things you cannot do in a mixed-node cell environment:
- You cannot federate a V5.0 (any release) node into a cell
managed by a V6.0 Deployment Manager. The federation process will detect
the mismatch and prevent the federation.
- If you wanted to join a V5 "Base Application Server node"
into the V6.0 cell, you would have to first migrate the "BaseApp node"
(now called a "Standalone Server" in V6.0 language) to V6.0, and then
federate it.
- You cannot add servers to a managed down-level node. If
you want to add servers to the node you would have to first migrate the
node up to V6.0 and then add servers.
There may be more, but you get the point -- free and unfettered
management of a down-level node is restricted.
That said, some common things can be done:
- You can install applications
- You can change settings like short names and ports
- You can start and stop the server from the Administrative
Console
- You can start and stop applications from the
Administrative Console
Questions and Answers
Does it matter if I am at V5.0 or V5.1?
At a high level, no. The process is essentially the same for both. But
when it comes to running nodes at different levels, it does matter. See
"Mixed-version nodes within a cell on the same MVS image permitted (with
limitations)" on page 7 for more.
Is there a minimum level of maintenance the V5 nodes should be at?
Yes:
Version 5.0 (any modification) node | W502025
Version 5.1 (any modification) node | W510207
If your nodes are not at the appropriate level of maintenance, apply the
maintenance and make sure at least one server from the node runs
applyPTF.sh so the node's configuration is brought up to the minimum
maintenance required for migration.
Can a V4 configuration be migrated to V6?
No.
If my cell is at V5.0, should I migrate to V5.1 before migrating to V6?
The advantage to being at V5.1 during migration is an additional degree
of flexibility with regard to how the nodes are migrated. It centers
around the Daemon servers. A Daemon migrated up to the V6 level of code is
capable of managing a V5.1 node from the same cell on the same MVS image.
But not a V5.0 node. Therefore, if you have two nodes from a cell on the
same MVS image and they are at V5.0, they will need to be migrated one
right after the other (or "at the same time," which means the second one
migrated immediately after the first). If the two nodes from a cell on the
same MVS image are at V5.1, then the second one can be migrated at your
leisure -- the V5.1 node can be started and will happily coexist with a
V6.0 Daemon.
That said, migrating from V5.0 to V5.1 is not a trivial undertaking.
Migrating from V5.0 to V6 directly is probably the best way to go. But
that's something you must decide.
Is there a proper sequence for migrating the nodes of a cell?
Yes. Always migrate the Deployment Manager node first. Version 6 is
capable of managing "down" to V5 nodes, but a V5 DMGR can't manage
"up."
After the Deployment Manager node is migrated, other nodes may be
migrated as you please.
Note:
See "Mixed-version nodes within a cell on the same MVS image permitted
(with limitations)" on page 7 for a discussion of a key restriction to
this.
What about a "Base Application Server" node?
The process is very similar to that of a Network Deployment node. See
"Migrated a Base Application Server Node" starting on page 62.
Do the servers in the node have to be down when it is migrated?
Yes. In order to migrate a configuration the servers in the node must
be stopped.
Do all the servers in my cell have to be stopped when migrating?
No. In fact when application server nodes are migrated the Deployment
Manager must be up and running.
It is quite possible to have some nodes in a cell migrated to V6 while
other nodes are still at V5. This provides the ability to provide a
"non-disruptive" migration.
Note: See "Mixed-version nodes within a cell on the same MVS image
permitted (with limitations)" on page 7 for a discussion of a key
restriction to this.
Further, a V6 Deployment Manager is capable of coexisting with V5.1 nodes
for quite some time, so there is no need to feel as if you have to rush a
migration to fit within a maintenance window. The V6 DMGR will know that a
node it is managing is at the lower level of code, and when configuration
changes are made to the node the V6 DMGR will make sure those changes have
the V5 format.
Is it possible to migrate one server at a time?
No. The migration process is a node-by-node process.
Is it possible to migrate a configuration when security is enabled?
Yes. The migration ISPF customization panels will ask for the
Application Server Administrator ID and password when an application
server node is being migrated. This is needed so the migration utility can
connect to the running V6 Deployment Manager and synchronize.
Re-use my existing V5 security profiles, or create new?
By "security profiles" we mean things like (in the language of RACF) --
userids and groups, STARTED profiles, SERVER and CBIND profiles, keyrings
and certificates.
The answer is to re-use the same profiles. The relationship between the
configuration and the underlying security profiles is so tight that trying
to map the migrated configuration to a new set of profiles would be
extremely challenging. It is far better to use the same profiles.
Perform "post-migration security work" even if global security is not enabled?
Yes. The "post-migration security work" mentioned earlier involves
creating a few new profiles needed by a V6 server, regardless of whether
global security is enabled or disabled.
Do all servers in a node get migrated when the node is migrated?
Yes. The migration utility is capable of determining what servers
reside in a node and it will migrate all the servers in the node.
Are the applications in the servers migrated as well?
Yes. The migration utility will make certain the applications installed
in the servers are copied over.
Note:
WebSphere Application Server for z/OS Version 6 has many new
application-oriented features, including support for J2EE 1.4. Your applications will still run in the new V6 environment, but they will not take advantage of some of these new functions. The migration utilities will not modify the applications to exploit anything new in V6.
What about TCP ports?
The migration will carry over the ports you had assigned to your
servers in V5. There is an aspect of this you should know about: a V6
application server has six new ports -- above and beyond the V5
number of six -- making the total number of ports per application server
now 12. The migration process will carry over the six V5 ports, and
will assign default values for the six new ports.
You will probably want to re-map these new ports so they adhere to your
port allocation scheme. We cover this issue under "New ports created for
V6 application server" on page 47.
May different people perform migration on different nodes at the same time?
It would be best not to attempt this. The BBOWMG3* job will try to use the /tmp/migrate directory for work space, and it will attempt to copy the file bbomigrt2.sh into that directory. So, depending on whether that file already exists, and what the permissions on it are, a migration job will either work or fail. Failure will be indicated by an RC=256 on the first step ("SETUP") of the BBOWMG3* job.
In theory concurrent migrations could work. But it is safest to run the migrations sequentially rather than concurrently.
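If you suspect a leftover copy of that file or a permission problem, a quick look at the work space before submitting the BBOWMG3* job can save a failed run. The sketch below simply lists /tmp/migrate with BPXBATCH; it is an illustration of the kind of check you might do, not part of the migration utilities themselves.

//G5CHKTMP JOB (ACCT),'CHECK WORKSPACE',CLASS=A,MSGCLASS=H
//*------------------------------------------------------------------
//* Hypothetical sketch: list the /tmp/migrate work space (owner,
//* permissions, contents) before submitting a BBOWMG3* job.
//* This job only inspects; it changes nothing.
//*------------------------------------------------------------------
//CHKTMP   EXEC PGM=BPXBATCH,REGION=0M,
// PARM='SH ls -alE /tmp/migrate'
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*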
Need to do 'PRR' processing like we did in V5.0-to-V5.1 migration?
If you have XA connectors installed in your application servers, then
the answer is yes. If you do not have XA connectors installed, then the
answer is no.
When migrating an application server node, two migration utilities are
generated: BBOWMG1F and BBOWMG2F. They are what perform the PRR ("Peer
Resource Recovery") processing for the application servers in a node.
Running those jobs even if you do not have XA connectors installed will
not hurt anything. So if you are not sure, then run the jobs.
Download the manual to read more.