You can install and configure the batch high performance external scheduler connector. This connector is the native WSGrid connector, which is implemented in a compiled language and uses IBM MQ for communication.
About this task
The benefit of native WSGrid is twofold:
- It makes more efficient use of z/OS® system
processors by preventing the need for Java™ virtual
machine (JVM) startup processing on each use.
- It uses the most robust messaging service available on z/OS to
ensure reliable operation with a messaging service already known and
used by most z/OS customers.
The authenticated user ID of the environment that starts WSGRID is propagated to the batch job scheduler. The resulting batch job runs by using that user ID. This user ID
must also have sufficient WebSphere® privileges to submit
batch jobs, that is, the lradmin or lrsubmitter role.
For example, if JCL job WSGRID1 is submitted to run under technical user ID TECH1, the resulting batch job also runs under user ID TECH1. User ID TECH1
must be permitted to get and put messages on the IBM MQ input and output queues that WSGRID uses.
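How you grant that queue access depends on your security product. A minimal sketch, assuming RACF with queue security profiles in the MQQUEUE class; the queue manager name QM01 is illustrative, and UPDATE access allows the gets and puts that WSGRID performs:
RDEFINE MQQUEUE QM01.WASIQ UACC(NONE)
RDEFINE MQQUEUE QM01.WASOQ UACC(NONE)
PERMIT QM01.WASIQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
PERMIT QM01.WASOQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
SETROPTS RACLIST(MQQUEUE) REFRESH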
Procedure
- Define the IBM MQ queues.
The queue manager must be local. Two queues are required: one for input and one for output. You can name the queues according to your naming conventions. In the examples that follow, the name WASIQ is used for the input queue and WASOQ for the output queue. The queues must be set in shared mode, as in the sketch after this step.
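The exact definitions depend on your site standards. A minimal MQSC sketch, assuming a local queue manager and the example queue names; the SHARE and DEFSOPT(SHARED) attributes set the queues to shared mode:
DEFINE QLOCAL(WASIQ) DESCR('WSGRID input queue') SHARE DEFSOPT(SHARED)
DEFINE QLOCAL(WASOQ) DESCR('WSGRID output queue') SHARE DEFSOPT(SHARED)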
- Update the MQ_INSTALL_ROOT WebSphere variable.
- In the administrative console, click Environment > WebSphere variables.
- Select the node scope where the job scheduler runs.
- Select MQ_INSTALL_ROOT.
- For Value, enter the directory path where IBM MQ is installed.
For example, the value can be /usr/lpp/mqm/V6R0M0.
- Click Apply and save the changes.
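If you prefer to script the variable update instead of using the console, a minimal wsadmin (Jython) sketch follows. It assumes the MQ_INSTALL_ROOT entry already exists, and it updates the entry at every scope where it is defined, so a production script would also filter by the node scope; the path is illustrative:
# Point every MQ_INSTALL_ROOT substitution entry at the IBM MQ install path
for entry in AdminConfig.list('VariableSubstitutionEntry').splitlines():
    if AdminConfig.showAttribute(entry, 'symbolicName') == 'MQ_INSTALL_ROOT':
        AdminConfig.modify(entry, [['value', '/usr/lpp/mqm/V6R0M0']])
AdminConfig.save()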
- From the deployment manager, run the installWSGridMQ.py script
with the following input parameters:
The installWSGridMQ.py script
installs a system application, and then sets up the JMS connection
factory, JMS input and output queues, and other necessary parameters.
wsadmin.sh -user <username> -password <userpassword> -f installWSGridMQ.py
- -install | -install <APP | MQ>
  {-cluster <clusterName> | -node <nodeName> -server <server>}
  Note: MQ parameters are not required when doing an APP install.
- -remove | -remove <APP | MQ>
  {-cluster <clusterName> | -node <nodeName> -server <server>}
  Note: MQ parameters are not required when doing an APP remove.
- -qmgr <queueManagerName>
- -inqueue <inputQueueName>
- -outqueue <outputQueueName>
For example, for clusters:
wsadmin.sh -f installWSGridMQ.py -install -cluster <clusterName> -qmgr <queueManagerName>
-inqueue <inputQueueName> -outqueue <outputQueueName>
For example, for nodes:
wsadmin.sh -f installWSGridMQ.py -install -node <nodeName> -server <serverName>
-qmgr <queueManagerName> -inqueue <inputQueueName> -outqueue <outputQueueName>
For example, for installing only the application at the cluster level:
wsadmin.sh -f installWSGridMQ.py -install APP -cluster <clusterName>
For example, for installing only the MQ components at the node/server
level:
wsadmin.sh -f installWSGridMQ.py -install MQ -node <nodeName> -server <serverName>
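If administrative security is enabled, add the credentials from the syntax shown earlier. For example, with an illustrative user ID and resource names:
wsadmin.sh -user wsadmin -password <userpassword> -f installWSGridMQ.py -install -cluster cluster1 -qmgr QM01 -inqueue WASIQ -outqueue WASOQ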
- Run osgiCfgInit.sh|.bat -all for each
server whose MQ_INSTALL_ROOT WebSphere variable
you updated in a previous step.
The osgiCfgInit command
resets the class cache that the OSGi runtime environment uses.
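For example, assuming the profile location used in the samples later in this topic (the path is illustrative):
/WebSphere/ND/AppServer/profiles/default/bin/osgiCfgInit.sh -all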
- Create the WSGRID load module:
- Locate the unpack script in the app_server_root/bin directory.
The unpackWSGRID script is a REXX script.
- Perform an unpack by using the unpackWSGRID script. To display the command options, issue the unpackWSGRID script with no input: unpackWSGRID <was_home> [<hlq>] [<work_dir>] [<batch>] [<debug>]
- <was_home>
  Specifies the required WebSphere Application Server home directory.
- <hlq>
  Specifies the optional high-level qualifier of the output data sets. The default value is <user id>.
- <work_dir>
  Specifies the optional working directory. The default is /tmp.
- <batch>
  Specifies the optional run mode for this script. Specify batch or interactive. The default is interactive.
- <debug>
  Specifies the optional debug mode. Specify debug or nodebug. The default is nodebug.
For example:
/u/USER26> unpackWSGRID /WebSphere/ND/AppServer
Sample output:
Unpack WSGRID with values:
WAS_HOME=/WebSphere/ND/AppServer
HLQ =USER26
WORK_DIR=/tmp
BATCH =INTERACTIVE
DEBUG =NODEBUG
Continue? (Y|N)
User response: Y
Unzip /WebSphere/ND/AppServer/bin/cg.load.xmi.zip
extracted: cg.load.xmi
Move cg.load.xmi to /tmp
Delete old dataset 'USER26.CG.LOAD.XMI'
Allocate new dataset 'USER26.CG.LOAD.XMI'
Copy USS file /tmp/cg.load.xmi to dataset 'USER26.CG.LOAD.XMI'
Delete USS file /tmp/cg.load.xmi
Delete old dataset 'USER26.CG.LOAD'
Go to TSO and issue RECEIVE INDSN('USER26.CG.LOAD.XMI') to create
CG.LOAD
- Go to TSO, ISPF, option 6 - Command, and do a receive
operation.
For example:
RECEIVE INDSN('USER26.CG.LOAD.XMI')
The following output is the result:
Dataset BBUILD.CG.LOAD from BBUILD on PLPSC
The incoming data set is a 'PROGRAM LIBRARY'
Enter restore parameters or 'DELETE' or 'END' +
Press Enter to end. Output similar to the following is displayed.
IEB1135I IEBCOPY FMID HDZ11K0 SERVICE LEVEL UA4
07.00 z/OS 01.07.00 HBB7720 CPU 2097
IEB1035I USER26 WASDB2V8 WASDB2V8 17:12:15 MON
COPY INDD=((SYS00006,R)),OUTDD=SYS00005
IEB1013I COPYING FROM PDSU INDD=SYS00006 VOL=CPD
USER26.R0100122
IEB1014I
IGW01551I MEMBER WSGRID HAS BEEN LOADED
IGW01550I 1 OF 1 MEMBERS WERE LOADED
IEB147I END OF JOB - 0 WAS HIGHEST SEVERITY CODE
Restore successful to dataset 'USER26.CG.LOAD'
***
- Restart the servers that you just configured. Also, restart
the node agents.
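A sketch of the restart, assuming the profile layout from the earlier examples; the paths and the server name are illustrative:
/WebSphere/ND/AppServer/profiles/default/bin/stopServer.sh server1
/WebSphere/ND/AppServer/profiles/default/bin/stopNode.sh
/WebSphere/ND/AppServer/profiles/default/bin/startNode.sh
/WebSphere/ND/AppServer/profiles/default/bin/startServer.sh server1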
Results
You have configured an external job scheduler interface.
What to do next
Submit a job to the batch environment from the external job scheduler interface.
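Typically, this means submitting a JCL job that runs the WSGRID load module that you created earlier. The following sketch is illustrative only; the DD names, property keys, and data set names are assumptions, so use the WSGRID sample that is shipped with the product for the exact syntax:
//WSGRID1  JOB (ACCT),'RUN BATCH JOB',USER=TECH1
//RUN      EXEC PGM=WSGRID,REGION=0M
//STEPLIB  DD DISP=SHR,DSN=USER26.CG.LOAD
//         DD DISP=SHR,DSN=MQM.SCSQAUTH
//SYSPRINT DD SYSOUT=*
//* Connection properties (illustrative keys) for the queues defined earlier
//WGCNTL   DD *
queue-manager-name=QM01
scheduler-input-queue=WASIQ
scheduler-output-queue=WASOQ
/*
//* The xJCL of the batch job to submit (illustrative data set)
//WGJOB    DD DSN=USER26.XJCL(JOB1),DISP=SHR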