WebSphere Application Server Network Deployment, Version 6.1
             Operating Systems: z/OS

This topic applies only on the z/OS operating system.

Configuration file system

This article describes the planning decisions you need to make when setting up a WebSphere® Application Server for z/OS® configuration file system.

Cell, node, and server settings as well as deployed applications are stored in the WebSphere Application Server for z/OS configuration file system.
Note: You can use a zSeries® file system (zFS) or hierarchical file system (HFS) for the configuration file system.
The following sections present decisions that you must make when setting up your WebSphere Application Server for z/OS configuration file system and give you information on how to make those decisions based on the needs of your planned configuration.

Each node needs a home directory

Every WebSphere Application Server for z/OS node--whether a stand-alone application server, deployment manager, managed application server node, or location service daemon--requires a read/write home directory, sometimes referred to as its WAS_HOME.

This is the structure of a WebSphere Application Server for z/OS configuration file system, mounted at /WebSphere/V6R1. It contains a WebSphere Application Server home directory for a single application server named BBOS001, with a cell and a node both named SYSA.
    /WebSphere/V6R1
        /AppServer
            /bin
            /classes
            /java
            /lib
            /logs
            /profiles
                /default    -> this is the profile_root directory
            /temp
            ...
       /Daemon
            /config      
                /SYSA
       SYSA.SYSA.BBODMNB       ->  /WebSphere/V6R1/Daemon/config/SYSA/SYSA/BBODMNB
       SYSA.SYSA.BBOS001       ->  /WebSphere/V6R1/AppServer/profiles/default/config/cells/SYSA/nodes/SYSA/servers/server1
       SYSA.SYSA.BBOS001.HOME  ->  /WebSphere/V6R1/AppServer
The WebSphere Application Server home directory for BBOS001 is named AppServer. It contains directories with complete configuration information for the SYSA node and the BBOS001 server.
The /Daemon directory contains configuration information for location service daemons defined to nodes in this configuration file system.
Note: The /Daemon/config subdirectory is subdivided by cell name. If the cells have different short names, the location service daemon information for each is kept separate.
The daemon home directory has the fixed WebSphere Application Server home name Daemon.

Symbolic links are used to access startup parameters

In addition to the WebSphere Application Server home directories themselves, the configuration file system contains a multipart symbolic link for each server that points to the startup parameters for the server. The symbolic link is named cell_short_name.node_short_name.server_short_name.

The sample configuration file system above contains a symbolic link SYSA.SYSA.BBODMNB to start the location service daemon and a symbolic link SYSA.SYSA.BBOS001 to start the BBOS001 application server. The symbolic link name is specified in the ENV parameter on the START command when the server or location service daemon is started from the MVS™ console:

START procname,JOBNAME=BBOS001,ENV=SYSA.SYSA.BBOS001

Each symbolic link points to the subdirectory where the server's was.env file resides. This file contains the information required to start the server.
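For example, from a z/OS UNIX shell you can list the mount point directory to see where each symbolic link points, and list the directory that a link resolves to in order to confirm that it contains the was.env file. This is a minimal sketch that assumes the sample mount point /WebSphere/V6R1 shown above:

    ls -l /WebSphere/V6R1                     # shows AppServer, Daemon, and the symbolic links with their targets
    ls /WebSphere/V6R1/SYSA.SYSA.BBOS001/     # the resolved directory contains was.env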

Note: During post-install processing, described below, the server JCL needs to specify the WebSphere Application Server home directory itself, rather than the location of the was.env file. This is the purpose of the SYSA.SYSA.BBOS001.HOME symbolic link shown above.

Sharing the configuration file system between cells

Two or more WebSphere Application Server for z/OS cells (stand-alone application server, Network Deployment, or both) can share a WebSphere Application Server for z/OS configuration file system, provided the following conditions are met:
  • All cells using the configuration file system must be set up using the same common groups and users. In particular, each must have the same administrator user ID and configuration group.
  • The cells must have distinct cell short names.
  • Each node must have its own WAS_HOME directory that is not shared with any other node or cell.
As noted above, you can share the daemon home directory (/Daemon) between cells, as it has subdirectories farther down for each cell in the configuration file system.
Note: Be aware that sharing a configuration file system between cells increases the likelihood that problems with one cell might cause problems with other cells in the same configuration file system.

Sharing the configuration file system between systems

Two or more z/OS systems can share a configuration file system, provided the z/OS systems have a shared file system and the configuration file system is mounted R/W. All updates are made by the z/OS system that "owns" the mount point. For a Network Deployment cell, this is generally the z/OS system on which the cell deployment manager is configured.
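For illustration, read/write access and ownership can be assigned on the MOUNT statement for the configuration file system in BPXPRMxx. This is a sketch only; the data set name, mount point, and owning system name are examples rather than values created by the product:

    MOUNT FILESYSTEM('OMVS.F1CELL.CONFIG.ZFS')  /* example data set name      */
          MOUNTPOINT('/WebSphere/V6F1')         /* example mount point        */
          TYPE(ZFS) MODE(RDWR)                  /* mounted read/write         */
          SYSNAME(LPAR1) AUTOMOVE               /* LPAR1 owns the mount point */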

Choosing a WebSphere Application Server for z/OS configuration file system mount point

The choice of WebSphere Application Server for z/OS configuration file system mount points depends on your z/OS system layout, the nature of the application serving environments involved, and the relative importance of several factors: ease of setup, ease of maintenance, performance, recoverability, and the need for continuous availability.

In a single z/OS system:

If you run WebSphere Application Server for z/OS on a single z/OS system, you have a wide range of choices for a z/OS configuration file system mount point. You might want to put several stand-alone application servers in a single configuration file system with a separate configuration file system for a production server or for a Network Deployment cell. Using separate configuration file system data sets improves performance and reliability, while using a shared configuration file system reduces the number of application server cataloged procedures you need.

You might have one configuration file system with your development, test, and quality assurance servers, all using the same common groups and users, as in the following example:
    /WebSphere/V6_test
                  /DevServer    - home to stand-alone server cell DVCELL, with server DVSR01A 
                  /TestServer1  - home to stand-alone server cell T1CELL, with server T1SR01A 
                  /TestServer2  - home to stand-alone server cell T2CELL, with server T2SR01A
                  /QAServer     - home to Network Deployment cell QACELL, with deployment manager QADMGR and server QVSR01A
and a separate configuration HFS for your production cell:
    /WebSphere/V6_prod
                  /CorpServer1  - home to Network Deployment cell CSCELL, with deployment manager CSDMGR and server CSSR01A

In a multisystem z/OS sysplex with no shared HFS:

In a multisystem sysplex with no shared HFS, each z/OS system must have its own configuration file system data sets. For stand-alone application servers and for Network Deployment cells that do not span systems, the options are the same as for a single z/OS system.

For Network Deployment cells that span systems:

Here you have two options:
  • You can use a different mount point for the cell's configuration file system data sets on each system. This allows you to move nodes easily between systems (for example, if a system becomes inoperative or is being upgraded), because each mount point is unused on the other systems in the sysplex; you can mount a failed system's configuration file system data sets on an alternate system in the sysplex, as shown in the sketch after this list.

    On system LPAR1, for example, you might have a configuration file system for one part of a cell:
        /var/WebSphere/V6config1
                          /DeploymentManager  - home to deployment manager F1DMGR in cell F1CELL
                          /AppServer1         - home to node F1NODEA and servers F1SR01A and F1SR02A
    with a second configuration file system on LPAR2:
        /var/WebSphere/V6config2
                          /AppServer2         - home to node F1NODEB and servers F1SR02B (clustered) and F1SR03B
    This setup has the advantage that you can move the deployment manager and node F1NODEA to LPAR2 or move node F1NODEB to LPAR1. The disadvantage of this configuration is that F1NODEA and F1NODEB will require separate sets of cataloged procedures.
  • Or you can use the same mount point for all configuration file system data sets in a particular cell. This allows you to use common cataloged procedures and make the systems look very similar.

    Using the same cell setup as above, system LPAR1 would have one configuration file system:
        /var/WebSphere/V6F1
                          /DeploymentManager  - home to deployment manager F1DMGR in cell F1CELL
                          /AppServer1         - home to node F1NODEA and servers F1SR01A and F1SR02A
    and LPAR2 would have a separate file system at the same mount point:
        /var/WebSphere/V6F1
                          /AppServer2         - home to node F1NODEB and servers F1SR02B (clustered) and F1SR03B
    However, relocation of either LPAR's node(s) to the other system would require merging a copy of one configuration file system into the other.
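For the first option, recovering a failed system's nodes might amount to mounting that system's configuration file system data set on the surviving system, where the mount point is otherwise unused. This is a sketch only, using a TSO MOUNT command and an example data set name:

    MOUNT FILESYSTEM('OMVS.F1CELL.CONFIG1.ZFS') MOUNTPOINT('/var/WebSphere/V6config1') TYPE(ZFS) MODE(RDWR)

The moved nodes still find their configuration at the mount point they expect, because that path was not in use on the alternate system.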

In a multisystem z/OS sysplex with a shared HFS:

If your sysplex has a shared hierarchical file system, you can simply mount a large configuration file system for the entire cell. When using the Customization Dialog, specify the common configuration file system mount point on each system. As noted above, you should update the configuration file system from the z/OS system hosting the deployment manager. Performance depends on the frequency of configuration changes, so devote extra effort to tuning the file system if you choose this option.

Alternatively, you can mount a separate configuration file system on each system, perhaps using the system-specific file system mounted at /&SYSNAME on each system:
    /LPAR1/WebSphere/V6F1
                        /DeploymentManager  - home to deployment manager F1DMGR in cell F1CELL
                        /AppServer1         - home to node F1NODEA and servers F1SR01A and F1SR02A

    /LPAR2/WebSphere/V6F1
                        /AppServer2         - home to node F1NODEB and servers F1SR02B (clustered) and F1SR03B
Each system (LPAR1 and LPAR2) mounts its own configuration file system on its system-specific mount point. When using the Customization Dialog, specify the following:
  • /LPAR1/WebSphere/V6F1 on LPAR1
  • /LPAR2/WebSphere/V6F1 on LPAR2
Performance is better with this option than with a shared configuration file system, and, depending on your choice of mount point, it might be possible to mount a configuration file system temporarily on the other LPAR if the original owner is down. You can make cataloged procedures system-specific or use &SYSNAME to select the configuration file system mount point.
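For example, a cataloged procedure could select the mount point with a system symbol. This is a hypothetical fragment, not the procedure shipped with the product; the symbol name ROOT and the path are illustrative:

    //*  Select the configuration file system mount point by system name
    //   SET  ROOT='/&SYSNAME./WebSphere/V6F1'

On LPAR1 the symbol resolves to /LPAR1/WebSphere/V6F1, and on LPAR2 to /LPAR2/WebSphere/V6F1.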
If you really want to use the same apparent mount point for all configuration file system data sets, you can use symbolic links to redirect a common mount point to a different file system on each system:
  • From the root directory, create the symbolic link with ln -s '$SYSNAME/WebSphere' WebSphere (quote the argument so that the shell does not expand $SYSNAME; z/OS UNIX resolves it each time the link is used).
  • Mount LPAR1's configuration file system at /LPAR1/WebSphere/V6F1.
  • Mount LPAR2's configuration file system at /LPAR2/WebSphere/V6F1.
If this is done correctly, you can specify a configuration mount point of /WebSphere/V6F1 for each system in the Customization Dialog and still enjoy the benefits of system-specific configuration file system data sets. However, when this setup is used, it is not possible to easily move configuration file system data sets from one system to another. All nodes expect to find their data in /WebSphere/V6F1, and you can mount only one configuration file system at this mount point on each system.
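If the link is created correctly, you can verify the redirection from a z/OS UNIX shell on either system. This is a sketch; the output shown is illustrative:

    ls -l / | grep WebSphere
        lrwxrwxrwx   ...   WebSphere -> $SYSNAME/WebSphere
    cd /WebSphere/V6F1      # on LPAR1 this resolves to /LPAR1/WebSphere/V6F1, on LPAR2 to /LPAR2/WebSphere/V6F1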
Recommendations:
  • On a single z/OS system:
    • Create a configuration file system at /WebSphere/V6R1 and use it to create a "practice" stand-alone server. Place home directories for additional non-production stand-alone application servers in the same configuration file system.
    • Create a separate configuration file system at /WebSphere/V6R1_cell_short_name for each production stand-alone server or Network Deployment cell.
  • On a multisystem sysplex with no shared file system, follow the recommendations above for a single z/OS system. This will allow you to use common cataloged procedures for each cell. Establish separate mount points on each system for any cell that you might need to recover on an alternate system in the sysplex.
  • On a multisystem sysplex with a shared file system, use a shared configuration file system when performance is not an issue or when a shared file system is required to support specific WebSphere Application Server for z/OS functions. Use nonshared configuration file system data sets when performance is an issue, or when you must avoid a single point of failure.

Choosing WebSphere Application Server home directory names

The WebSphere Application Server home directory is always relative to the configuration file system in which it resides. In the Customization Dialog, therefore, you choose the configuration file system mount point on one panel and fill in just the single directory name for the home directory on another. But when instructions direct you to go to the WAS_HOME directory for a server, they are referring to the entire path name, configuration file system and home directory name combined (/WebSphere/V6R1/AppServer for example).

You can choose any name you want for a home directory if it is unique in the configuration file system. If you are creating a stand-alone application server or new managed server node to federate into a Network Deployment cell, be sure to choose one that is not in use in the Network Deployment cell's configuration file system.

If you have one node per system, you might want to use some form of the node name or system name. Alternatively, you can use "DeploymentManager" for the deployment manager and "AppServern" (where n is a number) for each application server node.

Relationship between the configuration file system and the product HFS

The configuration file system contains a large number of symbolic links to files in the product HFS (/usr/lpp/zWebSphere/V6R1 by default). This allows the server processes, administrator, and clients to access a consistent WebSphere Application Server for z/OS code base.

Note that these symbolic links are set up when the WebSphere Application Server home directory is created and are very difficult to change. Therefore, systems that require high availability should keep a separate copy of the WebSphere Application Server for z/OS product HFS and product data sets for each maintenance or service level in use (test, quality assurance, production, and so forth) to allow system maintenance, and use intermediate symbolic links to connect each configuration HFS with its product HFS.

Tip: If you configure your Network Deployment environment using the default value for the product HFS path in the Customization Dialog, it will result in all the nodes pointing directly at the mount point of the product HFS. This makes rolling maintenance in a nondisruptive manner almost impossible. If a cell is configured in this way, applying service to the product HFS affects all the nodes at the same time; and if multiple cells are configured in this way, applying service to the product HFS affects all the cells at the same time. You might want to specify what is referred to as an "intermediate symbolic link" between each node's configuration HFS and the actual mount point of the product HFS. This strategy is described in the WebSphere Application Server for z/OS V5 - Planning for Test, Production and Maintenance white paper. See the WebSphere z/OS V6 -- WSC Sample ND Configuration white paper for more information about this issue and its relationship to applying maintenance. See the WebSphere for z/OS: Updating an Existing Configuration HFS to Use Intermediate Symbolic Links instructions for information on obtaining and using a utility that would allow you to update an existing configuration HFS to use intermediate symbolic links.
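As an illustration of the intermediate symbolic link technique, each cell's configuration HFS is created with its product file system path set to an intermediate path, and that path is a symbolic link to the product HFS for the service level the cell should run. This is a sketch only; the /waslinks directory and the service-level mount points are hypothetical, and the documents referenced above describe the complete procedure:

    ln -s /usr/lpp/zWebSphere/V6R1_SL05 /waslinks/f1cell_prod    # intermediate link for cell F1CELL
    rm /waslinks/f1cell_prod                                     # later, repoint the cell at a newer
    ln -s /usr/lpp/zWebSphere/V6R1_SL06 /waslinks/f1cell_prod    # service level and restart its servers

Because only the intermediate link changes, maintenance can be rolled to one cell at a time instead of affecting every node that points directly at the product HFS mount point.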

When a WebSphere Application Server for z/OS node is started, the service level of the configuration is compared against the service level of the product file system. If the configuration file system service level is higher than that of the product file system (probably meaning that an old product file system is mounted), the node's servers will terminate with an error message. If the configuration file system service level is lower than that of the product file system (meaning that service has been applied to the product code base since the node was last started), a task called the post-installer checks for any actions that need to be performed on the configuration file system to keep it up to date. For more information about the post-installer, see Applying product maintenance.
