You can install and configure the batch high performance external scheduler connector.
This connector is the native WSGrid connector, which is implemented in a compiled native language and uses IBM MQ for communication.
About this task
The benefit of native WSGrid is twofold:
- It makes more efficient use of z/OS® system processors by eliminating the need for Java™ virtual machine (JVM) startup processing on each use.
- It uses IBM MQ, the most robust messaging service available on z/OS and one that is already known and used by most z/OS customers, to ensure reliable operation.
The authenticated user ID of the environment that starts WSGRID is propagated to the batch job scheduler, and the resulting batch job runs under that user ID. This user ID must also have sufficient WebSphere® privileges to submit batch jobs, that is, the lradmin or lrsubmitter role. For example, if JCL job WSGRID1 is submitted to run under technical user ID TECH1, the resulting batch job also runs under user ID TECH1. User ID TECH1 must also be permitted to put messages to and get messages from the IBM MQ input and output queues that WSGRID uses.
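For example, on a system that uses RACF to protect IBM MQ resources, queue access can be granted with profiles in the MQQUEUE class. The following commands are a sketch only: the queue manager name QM01 is an assumption, and WASIQ and WASOQ are the example queue names used later in this task; adapt all of them to your installation and security product.
RDEFINE MQQUEUE QM01.WASIQ UACC(NONE)
RDEFINE MQQUEUE QM01.WASOQ UACC(NONE)
PERMIT QM01.WASIQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
PERMIT QM01.WASOQ CLASS(MQQUEUE) ID(TECH1) ACCESS(UPDATE)
SETROPTS RACLIST(MQQUEUE) REFRESH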
Procedure
- Set up IBM MQ on z/OS.
- Define a server connection channel on IBM MQ to enable the Job Scheduler to communicate with
the queue manager.
For example, the following MQSC command creates the SVRCONN channel.
DEFINE CHANNEL(WSGRID.SVRCONN) CHLTYPE(SVRCONN) TRPTYPE(TCP) REPLACE
- Define IBM MQ queues.
The queue manager must be local. Two queues are required: one for input and one for output. You can name the queues according to your naming conventions; as an example, the name WASIQ is used for the input queue and the name WASOQ is used for the output queue. The queues must be set in shared mode.
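For example, the following MQSC commands create the two local queues with the shared option enabled. The queue names are the example names from above; substitute your own.
DEFINE QLOCAL(WASIQ) SHARE DEFSOPT(SHARED) REPLACE
DEFINE QLOCAL(WASOQ) SHARE DEFSOPT(SHARED) REPLACE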
- Create the WSGRID load module.
- Locate the unpackWSGRID script in the <was_root>/stack_products/WCG/bin directory. The unpackWSGRID script is a REXX script.
- Unpack WSGRID by using the unpackWSGRID script. To display the command syntax, issue the unpackWSGRID script with no input:
unpackWSGRID <was_home> [<hlq>] [<work_dir>] [<batch>] [<debug>]
The following list describes the available command options.
- <was_home>
- Specifies the required WebSphere Application Server home directory.
- <hlq>
- Specifies the optional high-level qualifier of output data sets. The default value is
<user id>.
- <work_dir>
- Specifies the optional working directory. The default value is /tmp.
- <batch>
- Specifies the optional run mode for this script. Possible values are
batch or interactive. The default value is
interactive.
- <debug>
- Specifies the optional debug mode. Possible values are debug or
nodebug. The default value is nodebug.
The following example shows output from the unpackWSGRID script when only the <was_home> value is specified.
Unpack WSGRID with values:
WAS_HOME=/WebSphere/ND/AppServer
HLQ =USER26
WORK_DIR=/tmp
BATCH =INTERACTIVE
DEBUG =NODEBUG
Continue? (Y|N)
Y
User response: Y
Unzip /WebSphere/ND/AppServer/bin/cg.load.xmi.zip
extracted: cg.load.xmi
Move cg.load.xmi to /tmp
Delete old dataset 'USER26.CG.LOAD.XMI'
Allocate new dataset 'USER26.CG.LOAD.XMI'
Copy USS file /tmp/cg.load.xmi to dataset 'USER26.CG.LOAD.XMI'
Delete USS file /tmp/cg.load.xmi
Delete old dataset 'USER26.CG.LOAD'
Go to TSO and issue RECEIVE INDSN('USER26.CG.LOAD.XMI') to create
CG.LOAD
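To run the unpack non-interactively, specify the positional parameters explicitly and select batch mode. For example, the following invocation uses the same values as the interactive example above:
unpackWSGRID /WebSphere/ND/AppServer USER26 /tmp batch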
- Go to TSO, ISPF, option 6 - Command, and do a receive operation. For example:
RECEIVE INDSN('USER26.CG.LOAD.XMI')
The following output is the result:
Dataset BBUILD.CG.LOAD from BBUILD on PLPSC
The incoming data set is a 'PROGRAM LIBRARY'
Enter restore parameters or 'DELETE' or 'END' +
Press Enter to end. Output similar to the following is displayed.
IEB1135I IEBCOPY FMID HDZ11K0 SERVICE LEVEL UA4
07.00 z/OS 01.07.00 HBB7720 CPU 2097
IEB1035I USER26 WASDB2V8 WASDB2V8 17:12:15 MON
COPY INDD=((SYS00006,R)),OUTDD=SYS00005
IEB1013I COPYING FROM PDSU INDD=SYS00006 VOL=CPD
USER26.R0100122
IEB1014I
IGW01551I MEMBER WSGRID HAS BEEN LOADED
IGW01550I 1 OF 1 MEMBERS WERE LOADED
IEB147I END OF JOB - 0 WAS HIGHEST SEVERITY CODE
Restore successful to dataset 'USER26.CG.LOAD'
***
- Set up the Job Scheduler server that runs on a distributed
operating system.
- Install the system application JobSchedulerMDILP on
the Job Scheduler server or server cluster that runs on a distributed
operating system.
- From the deployment manager, run the installWSGridMQClientMode.py
script with the following input parameters:
./wsadmin.sh
-username <username> -password <userpassword>
-f ../stack_products/WCG/bin/installWSGridMQClientMode.py
- -install
- {-cluster <clusterName> | -node <nodeName>
-server <server>}
- -remove
- {-cluster <clusterName> | -node <nodeName>
-server <server>}
- -qmgr
- <queueManagerName>
- -qhost
- <queueManagerHost>
- -qport
- <queueManagerPort>
- -svrconn
- <serverConnectionChannel>
- -inqueue
- <inputQueueName>
- -outqueue
- <outputQueueName>
For example, for clusters:
./wsadmin.sh -username <username> -password <password>
-f <was_home>/stack_products/WCG/bin/installWSGridMQClientMode.py
-install -cluster <clusterName> -qmgr <queueManagerName>
-qhost <queueHostName> -qport <queuePort> -svrconn
<serverConnectionChannel> -inqueue <inputQueueName>
-outqueue <outputQueueName>
For example, for servers:
./wsadmin.sh -username <username> -password <password>
-f <was_home>/stack_products/WCG/bin/installWSGridMQClientMode.py
-install -node <nodeName> -server <server> -qmgr <queueManagerName>
-qhost <queueHostName> -qport <queuePort> -svrconn <serverConnectionChannel>
-inqueue <inputQueueName> -outqueue <outputQueueName>
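If you later need to remove the configuration, the same script accepts the -remove action in place of -install, as shown in the option list above. For example, a removal invocation for a cluster might look like the following sketch:
./wsadmin.sh -username <username> -password <password>
-f <was_home>/stack_products/WCG/bin/installWSGridMQClientMode.py
-remove -cluster <clusterName>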
- Restart all Job Scheduler application servers for the
changes to take effect.
Avoid trouble: If security is enabled, the submitter
user ID on the z/OS system must be defined as a user in the lradmin
or lrsubmitter role on the distributed system.
Results
You have configured the external job scheduler interface
to communicate with a Job Scheduler server on a distributed operating
system.
What to do next
Submit a job from the external job scheduler interface
to batch.
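As a starting point, a WSGRID submission job might look like the following sketch. This is a minimal illustration under stated assumptions, not definitive JCL: the job card, the IBM MQ library names, the queue manager name QM01, and the SYSIN control property names are assumptions based on the queue setup earlier in this task, and the properties that identify the batch job to submit are omitted. Consult the WSGRID reference for the exact control properties that your level supports.
//WSGRID1  JOB (ACCT),'SUBMIT BATCH JOB',CLASS=A,MSGCLASS=H
//* STEPLIB: load library created by the RECEIVE step, plus the
//* IBM MQ libraries (the MQM high-level qualifier is illustrative).
//RUNGRID  EXEC PGM=WSGRID
//STEPLIB  DD DSN=USER26.CG.LOAD,DISP=SHR
//         DD DSN=MQM.SCSQAUTH,DISP=SHR
//         DD DSN=MQM.SCSQANLE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
queue-manager-name=QM01
scheduler-input-queue=WASIQ
scheduler-output-queue=WASOQ
/*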