There are several planning decisions that you need to make
when setting up a WebSphere® Application Server
for z/OS® configuration file system.
Cell, node, and server settings as well as deployed applications
are stored in the WebSphere Application Server
for z/OS configuration file system. You can use
a zSeries file system (ZFS) or hierarchical file system (HFS) for
the configuration file system.
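As a minimal sketch, assuming a ZFS aggregate named OMVS.WAS.CONFIG.ZFS and a mount point of /WebSphere/V8R0 (both names are examples, not requirements), you might allocate and mount a configuration file system from the z/OS UNIX shell as follows:
zfsadm define -aggregate OMVS.WAS.CONFIG.ZFS -cylinders 420 100
zfsadm format -aggregate OMVS.WAS.CONFIG.ZFS
mkdir -p /WebSphere/V8R0
/usr/sbin/mount -t ZFS -f OMVS.WAS.CONFIG.ZFS /WebSphere/V8R0
The mount is read/write by default, which the configuration file system requires. Size the aggregate for your own cell, and adjust ownership and permissions to match your WebSphere Application Server common groups and users.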
Tip: Beginning with WebSphere Application Server for z/OS Version 7.0, the SBBOLOAD and SBBOLD2 datasets no longer exist, because the load modules are now in the product file system. If you want to switch a configuration from using load modules in the product file system to using load modules in a dataset, you can use the tool described in the switchModules command topic. Beginning with WebSphere Application Server for z/OS Version 8.0, the server_dlls_in_hfs environment variable must also be set to 0 for the server to use the DLLs that have been placed in a dataset that is in STEPLIB, LPA, or the link list. For the daemon to pick up the DLLs, the WAS_DAEMON_ONLY_server_dlls_in_hfs variable should be set at the cell level.
Each node needs a home directory
Every WebSphere Application Server for z/OS node--whether
a standalone application server, deployment manager, managed application
server node, or location service daemon--requires a read/write home
directory, sometimes referred to as its WAS_HOME.
This is the
structure of a WebSphere Application Server for z/OS configuration
file system, mounted at /WebSphere/V8R0. It contains a WebSphere Application
Server home directory for a single application server named BBOS001,
with a cell and a node both named SYSA.
/WebSphere/V8R0
/AppServer
/bin
/classes
/java
/lib
/logs
/profiles
/default -> this is the profile_root directory
/temp
...
/Daemon
/config
/SYSA
SYSA.SYSA.BBODMNB -> /WebSphere/V8R0/Daemon/config/SYSA/SYSA/BBODMNB
SYSA.SYSA.BBOS001 ->
/WebSphere/V8R0/AppServer/profiles/default/config/cells/SYSA/nodes/SYSA
/servers/server1
SYSA.SYSA.BBOS001.HOME -> /WebSphere/V8R0/AppServer
The WebSphere Application Server home directory
for BBOS001 is named AppServer. It contains directories with complete
configuration information for the SYSA node and the BBOS001 server.
The
/Daemon directory
contains configuration information for location service daemons defined
to nodes in this configuration file system.
Note: The /Daemon/config subdirectory
is subdivided by cell name. If the cells have different short names,
the location service daemon information for each is kept separate.
The
daemon home directory has the fixed WebSphere Application
Server home name
Daemon.
Symbolic links are used to access startup parameters
In
addition to the WebSphere Application Server home directories
themselves, the configuration file system contains a multipart symbolic
link for each server that points to the startup parameters for the
server. The symbolic link is named cell_short_name.node_short_name.server_short_name.
The
sample configuration file system above contains a symbolic link SYSA.SYSA.BBODMNB
to start the location service daemon and a symbolic link SYSA.SYSA.BBOS001
to start the BBOS001 application server. The second symbolic link
is specified in the ENV parameter on the START command
when the server or location service daemon is started from the MVS console:
START procname,JOBNAME=BBOS001,ENV=SYSA.SYSA.BBOS001
Each
symbolic link points to the subdirectory where the server's was.env file
resides. This file contains the information required to start the
server.
Note: During post-installation processing, described
below, the server JCL needs to specify the WebSphere Application
Server home directory itself, rather than the location of the was.env file.
This is the purpose of the SYSA.SYSA.BBOS001.HOME symbolic link shown
above.
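For example, with the sample configuration shown above, you can confirm from the z/OS UNIX shell where the startup symbolic links lead; the paths are taken from the sample and will differ in your own configuration:
ls -ld /WebSphere/V8R0/SYSA.SYSA.BBOS001
cat /WebSphere/V8R0/SYSA.SYSA.BBOS001/was.env
ls -ld /WebSphere/V8R0/SYSA.SYSA.BBOS001.HOME
The first command displays the link and its target (the directory containing the server's was.env file), the second displays the startup parameters themselves, and the third shows the link to the WebSphere Application Server home directory that the server JCL uses.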
Sharing the configuration file system between cells
Two
or more WebSphere Application Server for z/OS cells
(standalone application server, Network Deployment, or both) can share
a WebSphere Application Server for z/OS configuration
file system, provided the following conditions are met:
- All cells using the configuration file system must be set up using
the same common groups and users. In particular, each must have the
same administrator user ID and configuration group.
- The cells must have distinct cell short names.
- Each node must have its own WAS_HOME directory that is not shared
with any other node or cell.
As noted above, you can share the daemon home directory
(/Daemon)
between cells, as it has subdirectories farther down for each cell
in the configuration file system.
Note: Be aware that sharing a configuration file system between cells increases the likelihood that problems with one cell might cause problems with other cells in the same configuration file system.
Sharing the configuration file system between systems
Two
or more z/OS systems can share a configuration file
system, provided the z/OS systems have a shared file
system and the configuration file system is mounted R/W. All updates
are made by the z/OS system that "owns" the mount point. For
a Network Deployment cell, this is generally the z/OS system
on which the cell deployment manager is configured.
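For example, from the z/OS UNIX shell on any member of the sysplex you can check which system currently owns a shared configuration file system (the mount point shown is hypothetical):
df -v /WebSphere/V8F1
The -v output includes the data set name of the file system and the name of the owning system, which tells you where updates should be made.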
Choosing a WebSphere Application Server
for z/OS configuration file system mount point
The
choice of WebSphere Application Server for z/OS configuration
file system mount points depends on your z/OS system
layout, the nature of the application serving environments involved,
and the relative importance of several factors: ease of setup, ease
of maintenance, performance, recoverability, and the need for continuous
availability.
- In a single z/OS system:
If you run WebSphere Application
Server for z/OS on a single z/OS system,
you have a wide range of choices for a z/OS configuration
file system mount point. You might want to put several standalone
application servers in a single configuration file system with a separate
configuration file system for a production server or for a Network
Deployment cell. Using separate configuration file system datasets
improves performance and reliability, while using a shared configuration
file system reduces the number of application server cataloged procedures
you need.
You might have one configuration file system with your development, test, and quality assurance servers, all using the same common groups and users, as in the following example:
/WebSphere/V8_test
/DevServer - home to standalone server cell DVCELL, with server DVSR01A
/TestServer1 - home to standalone server cell T1CELL, with server T1SR01A
/TestServer2 - home to standalone server cell T2CELL, with server T2SR01A
/QAServer - home to Network Deployment cell QACELL, with deployment
manager QADMGR and server QVSR01A
and a separate configuration
file system for your production cell:
/WebSphere/V8_prod
/CorpServer1 - home to Network Deployment cell CSCELL, with deployment
manager CSDMGR and server CSSR01A
- In a multisystem z/OS sysplex with no shared file
system:
In a multisystem sysplex with no shared file system, each z/OS system
must have its own configuration file system datasets. For standalone
application servers and for Network Deployment cells that do not span
systems, the options are the same as for a single z/OS system.
- For Network Deployment cells that span systems:
Here you have
two options:
- You can use a different mount point for the cell's configuration
file system datasets on each system. Because each mount point is unused
on the other systems in the sysplex, you can move nodes easily between
systems (for example, if a system becomes inoperative or is being upgraded)
by mounting the failed system's configuration file system datasets on
an alternate system.
On
system LPAR1, for example, you might have a configuration file system
for one part of a cell:
/var/WebSphere/V8config1
/DeploymentManager - home to deployment manager F1DMGR in cell F1CELL
/AppServer1 - home to node F1NODEA and servers F1SR01A and F1SR02A
with
a second configuration file system on LPAR2:
/var/WebSphere/V8config2
/AppServer2 - home to node F1NODEB and servers F1SR02B (clustered)
and F1SR03B
This setup has the advantage that you can
move the deployment manager and node F1NODEA to LPAR2 or move node
F1NODEB to LPAR1. The disadvantage of this configuration is that F1NODEA
and F1NODEB will require separate sets of cataloged procedures.
- Or you can use the same mount point for all configuration file
system datasets in a particular cell. This allows you to use common
cataloged procedures and make the systems look very similar.
Using the same cell setup as above, system LPAR1 would have one configuration
file system:
/var/WebSphere/V8F1
/DeploymentManager - home to deployment manager F1DMGR in cell F1CELL
/AppServer1 - home to node F1NODEA and servers F1SR01A and F1SR02A
and
LPAR2 would have a separate file system at the same mount point:
/var/WebSphere/V8F1
/AppServer2 - home to node F1NODEB and servers F1SR02B (clustered)
and F1SR03B
However, relocation of either
LPAR's node(s) to the other system would require merging a copy of
one configuration file system into the other.
- In a multisystem z/OS sysplex with a shared file
system:
If your sysplex has a shared hierarchical file system, you
can simply mount a large configuration file system for the entire
cell. When using the Profile Management Tool or the zpmt command,
specify the common configuration file system mount point on each system.
As noted above, you should update the configuration file system from
the z/OS system hosting the deployment manager.
Performance depends on the frequency of configuration changes; if you choose this option, devote extra effort to tuning.
Alternatively,
you can mount a separate configuration file system on each system,
perhaps using the system-specific file system mounted at
/&SYSNAME on
each system:
/LPAR1/WebSphere/V8F1
/DeploymentManager - home to deployment manager F1DMGR in cell F1CELL
/AppServer1 - home to node F1NODEA and servers F1SR01A and F1SR02A
/LPAR2/WebSphere/V8F1
/AppServer2 - home to node F1NODEB and servers F1SR02B (clustered)
and F1SR03B
Each system (LPAR1 and LPAR2) mounts its
own configuration file system on its system-specific mount point.
When using the Profile Management Tool or the
zpmt command,
specify the following:
- /LPAR1/WebSphere/V8F1 on LPAR1
- /LPAR2/WebSphere/V8F1 on LPAR2
Performance is better with this option than with a shared configuration file system,
and, depending on choice of mount point, it might be possible to mount
a configuration file system temporarily on the other LPAR if the original
owner is down. You can make cataloged procedures system-specific or
use &SYSNAME to select the configuration file system mount point.
If
you really want to use the same apparent mount point for all configuration
file system datasets, you can use symbolic links to redirect a common
mount point to a different file system on each system:
- ln -s $SYSNAME/WebSphere WebSphere
- Mount LPAR1's configuration file system at /LPAR1/WebSphere/V8F1.
- Mount LPAR2's configuration file system at /LPAR2/WebSphere/V8F1.
If this is done correctly, you can specify a configuration mount
point of /WebSphere/V8F1 for each system in the Profile Management
Tool or the zpmt command and still enjoy the benefits of
system-specific configuration file system datasets; a short shell
sketch of this symbolic link setup follows the recommendations below.
However, when this setup is used, it is not possible to easily move
configuration file system datasets from one system to another. All
nodes expect to find their data in /WebSphere/V8F1, and you can mount
only one configuration file system at this mount point on each system.
- Recommendations:
- On a single z/OS system, create a read/write file system at /wasv8config and
use the Profile Management Tool defaults, mounting each configuration
file system at /wasv8config/cell_name/node_name.
- On a multisystem sysplex with no shared file system, follow the
recommendations above for a single z/OS system.
This will allow you to use common cataloged procedures for each cell.
Establish separate mount points on each system for any cell that you
might need to recover on an alternate system in the sysplex.
- On a multisystem sysplex with a shared file system, use a shared
configuration file system when performance is not an issue or when
a shared file system is required to support specific WebSphere Application
Server for z/OS functions. Use nonshared configuration
file system datasets when performance is an issue, or when you must
avoid a single point of failure.
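As a sketch of the symbolic link technique mentioned above (the directory names are the hypothetical LPAR1 and LPAR2 ones from the earlier example), the link target must contain the literal string $SYSNAME, so quote it to keep the shell from expanding it:
cd /
ln -s '$SYSNAME/WebSphere' WebSphere
On LPAR1, /WebSphere/V8F1 then resolves to /LPAR1/WebSphere/V8F1, and on LPAR2 it resolves to /LPAR2/WebSphere/V8F1, so each system reaches its own configuration file system through the same apparent mount point.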
Choosing WebSphere Application Server
home directory names
The WebSphere Application
Server home directory is always relative to the configuration file
system in which it resides. In the Profile Management Tool or the zpmt command,
therefore, you choose the configuration file system mount point on
one panel and fill in just the single directory name for the home
directory on another. But when instructions direct you to go to the
WAS_HOME directory for a server, they are referring to the entire
path name, configuration file system and home directory name combined
(/WebSphere/V8R0/AppServer for example).
You can choose any name you want for a home directory, provided it
is unique in the configuration file system. If you are creating a
standalone application server or a new managed server node to federate
into a Network Deployment cell, be sure to choose a name that is not
already in use in the Network Deployment cell's configuration file system.
If
you have one node per system, you might want to use some form of the
node name or system name. Alternatively, you can use "DeploymentManager"
for the deployment manager and "AppServern" (where n is a number) for
each application server node.
Relationship between the configuration file system
and the product file system
The configuration file system
contains a large number of symbolic links to files in the product
file system (/usr/lpp/zWebSphere/V8R0 by default).
This allows the server processes, administrator, and clients to access
a consistent WebSphere Application Server for z/OS code
base.
Note that these symbolic links are set up when the WebSphere Application Server home directory
is created and are very difficult to change. Therefore, systems that
require high availability should keep a separate copy of the WebSphere Application Server for z/OS product
file system and product datasets for each maintenance or service level
in use (test, quality assurance, production, and so forth) to allow system
maintenance, and should use intermediate symbolic links to connect each
configuration file system with its product file system.
Tip: If you configure your Network Deployment environment using the default value
for the product file system path in the Profile Management Tool or
the zpmt command, all the nodes point directly at the mount point of
the product file system. This makes rolling maintenance in a nondisruptive
manner almost impossible. If a cell is configured in this way, applying
service to the product file system affects all the nodes at the same time;
and if multiple cells are configured in this way, applying service to the
product file system affects all the cells at the same time. You might want
to specify what is referred to as an "intermediate symbolic link"
between each node's configuration file system and the actual mount
point of the product file system. This strategy is described in the
WebSphere Application Server for z/OS V5
- Planning for Test, Production and Maintenance white paper.
See the WebSphere z/OS V6
-- WSC Sample ND Configuration white paper for more information
about this issue and its relationship to applying maintenance. See
the WebSphere for z/OS:
Updating an Existing Configuration HFS to Use Intermediate Symbolic
Links instructions for information on obtaining and using a
utility that updates an existing configuration file system to use
intermediate symbolic links.
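As a minimal sketch of the intermediate symbolic link approach (the link name, cell directory, and service-level suffixes below are illustrative assumptions, not shipped defaults), you create one link per cell and specify it as the product file system path when you configure the cell's nodes:
ln -s /usr/lpp/zWebSphere/V8R0_level1 /wasv8config/f1cell/wasprod
Later, with the cell stopped, you can roll that cell to a product file system at a newer service level by repointing only the intermediate link:
rm /wasv8config/f1cell/wasprod
ln -s /usr/lpp/zWebSphere/V8R0_level2 /wasv8config/f1cell/wasprod
Other cells that use their own intermediate links are unaffected until you choose to repoint them.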
When a WebSphere Application Server for z/OS node
is started, the service level of the configuration is compared against
the service level of the product file system. If the configuration
file system service level is higher than that of the product file
system (probably meaning that an old product file system is mounted),
the node's servers will terminate with an error message. If the configuration
file system service level is lower than that of the product file system
(meaning that service has been applied to the product code base since
the node was last started), a task called the post-installer checks
for any actions that need to be performed on the configuration file
system to keep it up to date.
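If you need to confirm the product level that a node's configuration is linked to, the versionInfo.sh script in the node's bin directory reports the installed version and fix level; the path below follows the earlier sample configuration and will differ on your system:
/WebSphere/V8R0/AppServer/bin/versionInfo.sh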