Document Number SC31-6901-00
Trident Services and E.S.A. Software make no warranty of any kind, expressed or implied, with regard to the programs or documentation. Trident Services and E.S.A. Software shall not be liable in any event for incidental or consequential damages in connection with or arising out of the furnishing, performance, or use of these programs.
Information in this manual is subject to change without notice and does not represent a commitment on the part of the vendor. The software described in this manual is furnished under a license agreement, and may be used or copied only in accordance with the terms of that agreement.
IBM Operating System Environment Manager (OSEM) for z/OS. Licensed materials - Property of IBM. 5799-HAX
(c) Copyright IBM Corp 2005. All rights reserved.
(c) Copyright E.S.A. Software 1990-2005. All rights reserved.
No part of this publication may be copied, distributed, transmitted, transcribed, stored in a retrieval system, translated into any human or computer language, or disclosed to third parties without the express written permission of IBM Corp or E.S.A. Software.
The following are trademarks of IBM Corp:
The following are trademarks of Computer Associates International:
First Edition (April 2005)
This edition applies to Operating System Environment Manager for z/OS (OSEM for z/OS) Version 6 Release 0 Modification 0 (Program Number 5799-HAX).
System Controls, Maintenance & Installation Functions
Appendix C. Define Dataset Name Groups
Appendix D. Define Volume Groups
Appendix F. JES2 Commands for Job Routing
Appendix G. JCL Statements for Job Routing
Appendix H. $HASP Messages for Job Routing
Appendix I. MVS Commands for Tape Share
The following enhancements have been made to OS/EM version 6.0:
You may now specify up to three IDs to be notified in the case of a user exit abend. You may also create notification groups where each ID within the group will receive a TSO send message.
You may optionally specify a user ID or notify group name for each major section of OS/EM, such as ALLOCATION, SMF, and HSM.
The following enhancements have been made to OS/EM version 5.6:
See Exit 4 in the Reference Manual, and Miscellaneous Controls in the User Guide.
The STEPENDWTO message has been enhanced to show the CPU time and I/O counts. This is an optional feature and the original message is still available for customers using an automation package to trap the message.
See Exit 5 in the Reference Manual, or option 1 on the Job Routing Controls Menu.
A record number must be assigned to OS/EM for this function to become active. See "SMF Recording" for instructions on assigning a record number.
Any job which does not have a resource attached to it will receive this new default resource.
OS/EM can scan for the keyword SCHENV= on the JOB statement and remove it. It then inserts an OS/EM Job Routing JECL statement using the scheduling environment name just removed as the resource name.
OS/EM can set a job's system affinity (SYSAFF) to ANY if, and only if, the job has been assigned one or more OS/EM Job Route resources. The job route resources may be from either JECL control cards (/*ROUTE resource) or automatically generated.
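For example, a job might be submitted with a scheduling environment coded on its JOB statement (the jobname, accounting information and environment name shown here are hypothetical):

//PAYJOB  JOB (ACCT123),'NIGHTLY PAY',CLASS=A,SCHENV=DB2PROD

OS/EM would remove the SCHENV=DB2PROD keyword and insert its own Job Routing JECL statement (/*ROUTE resource) naming DB2PROD as the resource; with the resource assigned, the job's SYSAFF could then be set to ANY.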
OS/EM can now control the amount of storage given to a job above the 2-gigabyte bar. You may specify any value from zero (no storage above the bar) up to a maximum of 16 exabytes.
See SMF exit OS$USI in the Reference Manual or option 7 on the JCL Controls Menu.
See HSM exit ARCRPEXT in the Reference Manual, or option 8 on the HSM Optimizer Menu.
OS/EM will allow a user to read any tape dataset when the following criteria are met, thus bypassing the RACF PROTECTALL(FAIL) option:
The following enhancements have been made to OS/EM version 5.5:
The Estimated Cost function of OS/EM can be used to calculate an approximate charge for running each step of a job and an approximate total cost of running the job. The costs are presented in the "flower box" produced by requesting OS/EM's STEP/JOB-end statistics.
This function specifies that any file coded with a DELETE disposition in a step executing program IEFBR14 will be deleted by OS/EM. No DFSMSHSM RECALL will be performed; instead, an HDELETE will be generated.
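As a sketch (the dataset name shown is hypothetical), a step such as the following would qualify:

//CLEANUP EXEC PGM=IEFBR14
//DEL1    DD   DSN=PROD.OLD.WORKFILE,DISP=(OLD,DELETE)

If PROD.OLD.WORKFILE has been migrated, OS/EM would generate an HDELETE for it rather than recalling it first.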
This function can be used to place up to 32 bytes of JOB or STEP accounting information into the catalog record for a newly created VSAM dataset or SMS-managed non-VSAM dataset. Additionally, the JOB's User ID is placed into the Owner field of the catalog record. Neither of these fields is overridden if the information has already been provided.
OS/EM (Operating System/Environment Manager) is a dynamic exit manager and a set of optional, standard control exits for the OS/390 and z/OS environments. As a dynamic exit manager, it provides a consistent, easy-to-use interface to most exit points provided by IBM to enhance the OS/390 and z/OS environment.
The Extended OS/EM Functions provide most options commonly included in exits written by Systems Programmers, without the overhead associated with developing, maintaining, testing and implementing those exits. The ISPF interface also allows the changing of Extended OS/EM Functions without an IPL.
OS/EM can supply functions that incorporate many of the features for which user exits are commonly written. In many cases, the Extended Functions will provide all of the services required by your installation without any coding.
Where applicable, the exits have a WARN mode for the gradual introduction of the new functions.
These parameter-driven exits enable your installation to achieve:
The benefits of this approach apply to installations both new and old.
OS/EM provides your OS/390 or z/OS installation with:
With the ever-increasing size of host networks and seven-day, 24-hour service requirements, availability has become the keyword as far as both users and the operating system support staff are concerned. Your installation needs the system to function in order to carry on its business, and your support staff needs the system to install program products, "tune" resource control functions, apply maintenance, etc.
OS/EM allows your system staff to install any product or user-written control function that uses an OS/390 or z/OS SMF, TSO, JES2, JES3, RACF, HSM, DADSM or allocation exit without requiring an IPL.
OS/EM enhances system reliability by allowing your systems staff to thoroughly test new exits in the same production environment in which they will be running. A standard OS/EM function is to remove any exit which abends, thereby allowing normal production work to proceed. This allows the systems staff to do more thorough testing because the testing process will not have a negative impact on your system's integrity.
Another standard OS/EM function is to limit, by jobname, the scope of SMF, TSO, RACF, DADSM, and some JES2/JES3 exits. This facility will allow the testing of new exits without impacting the function of existing exits.
OS/EM allows your installation to have a standard operating environment, whether on a single processor or multiple processors, by allowing all exit modules to exist outside of the operating system. Trying to stay vanilla is the very reason OS/EM was developed; you can now have the controls/products you need while still keeping a vanilla operating system without reliability exposure, availability interruptions, or system modification problems. Variations from the standard IBM supplied OS/390 or z/OS environments, such as those supplied by program products or user-written control functions, no longer require an IPL or system modification (SMP/E). Loading or reloading any of these exits can now be done via a TSO command (or ISPF dialogue).
Since OS/EM manages the loading of exits, SMP/E is not needed to install exits into the operating system. While useful for any exit, this standard OS/EM function greatly simplifies the installation and maintenance of program products or user-written functions that need to share exits. OS/EM allows multiple exits sharing an exit point to exist independently; therefore SMP/E user modifications are not needed.
OS/EM replaces all IBM supplied SMF, TSO, JES2, JES3, RACF, ISPF, SAF, Allocation and HSM exits with its own control processor. This processor is installed at IPL time. OS/EM then dynamically loads and processes your installation's exits whether they are user-written, program products (job schedulers, report distribution systems, etc.), or OS/EM's optional control functions.
At any time after the IPL you may:
OS/EM has the ability to manage up to 255 modules per exit point. Using the Extended Functions does not restrict this number. However, stringing together multiple modules at a given exit point assumes that the modules can work together. The functioning of an exit point may require that only one module be "active", with the other modules being "passive".
For example, TSO exit IKJEFF10 (the TSO SUBMIT exit) is normally used to alter or produce additional job statements. Running multiple modules that each do this would not be prudent.
A list of currently supported exits is documented in Appendix A, "Supported Exits".
The OS/EM system consists of the following five main components:
OS$IPL | This program obtains storage for the OS/EM CVT (Communications Vector Table), which is required by the OS/EM control process. This program is run as part of the IPL process and uses the OS/390 or z/OS sub-system interface to establish the OS/EM environment. |
OS$INIT | This program is started at IPL time by the OS$IPL program. It attaches the TSO control program IKJEFT01 to process the initial OS$CNTL commands before JES2 starts. |
OS$CNTL | This program is the main processing program. It is a TSO command processor that checks the command function (the first operand on the command) and calls the appropriate modules to process the request. Before the OS$CNTL command can be used, the OS$IPL program must have been run to create the OS/EM environment. |
Interface modules | These serve as the control facility to invoke the dynamically loaded exits that perform the actual exit functions. |
Dynamic exits | These are the exits for your program products, in-house coded exits, and OS/EM Extended Functions which are loaded by the OS$CNTL command processor. |
In order to receive OS/EM message numbers under ISPF or TSO, the MSGID parameter in your TSO profile must be set on.
The following command may be issued to set OS/EM message numbers on, under ISPF or TSO.
TSO PROFILE MSGID
The ISPF interface provides for the creation of the necessary OS/EM initialization parameters, and provides for the execution of OS/EM commands online. The interface has a function orientation. That is, the Extended OS/EM Functions are presented without regard to the OS/EM commands or exits that implement the function. The intent is to make OS/EM as accessible as possible.
The OS/EM Primary Option Menu provides for two major processing options. The 'Basic Exit Functions' option provides for the specification and management of all the OS/EM supported exit points. The Extended OS/EM Functions provide the support for DASD Controls, QuickPool, JCL Controls, Job Controls, HSM Optimizer, HSM Reports, RACF Controls, Device Restriction, ISPF File Prefix, Job Routing, Tape Share, SVC Delete/Replace Controls and Time Controls. Although initialization member generation and command generation bring the two processes together, the actual specification of basic and extended functions is independent of one another. The only requirement is that an exit point's OPTIONS be specified; the interface will ensure that this is true. This means that if your installation has no exits of its own, but you wish to use OS/EM Extended Functions, you will not have to be concerned with specifying basic functions.
The entries you make are saved from one use of the interface to the next. Each time you use the interface for a particular command, your last entries will be presented for any changes you wish to make.
The interface saves all information in ISPF tables. This enables multiple users of the interface, each of whom has access to the same information. However, only one user at a time may use the interface.
The required tables are not shipped with the OS/EM install package. They are generated the first time you invoke any of the interface functions. The amount of time required for this generation varies depending on your hardware and the work being done at the time of generation. Each time a particular function's tables are generated, a panel is presented indicating that tables are being generated. Some tables, such as volume and dataset name group tables, are generated only as required.
You may elect to generate the ISPF tables all at one time. To accomplish this, select option 1 from the OS/EM primary options menu.
All OS/EM ISPF panels conform to standard display and data input conventions. Each panel has an ISPF command line at the top of the display (indicated by COMMAND ===>) and accepts the applicable ISPF commands.
The most commonly used ISPF commands are:
For example, suppose you have specified GLOBAL ALLOW entries in the QuickPool function. You then start specifying GLOBAL DISALLOW entries, change your mind, and CANCEL. You have only canceled the DISALLOW entries, not the ALLOW entries you have already completed. If you have any doubts about what has been canceled, review your entries and make adjustments as necessary.
Where necessary, panels contain "scrollable" areas that allow you to specify as many entries (such as volume and dataset name groups) as required by your installation. Panels with scrollable viewing are indicated by the presence of the SCROLL field in the top right hand corner of the panel. These panels support ISPF scrolling and location commands. The commands typically used are:
L PROD will position the list to the entry with the value PROD. If there is no entry that exactly matches the value specified in the L command, the display will be positioned at the first entry that is alphabetically & numerically higher.
First use of the interface will present you with empty fields (of the appropriate type) which you modify. Additional entries are made by inserting new, blank fields; or by using an ADD command and overtyping existing information. Provision is made to allow you to delete entries that are no longer needed, while ensuring that information necessary to the successful operation of OS/EM is not deleted.
PF Key Usage
The Program Function (PF) keys supported by the OS/EM ISPF panels are:
PF1 | Display HELP information |
PF2 | Split display screen at cursor |
PF3 | Return to previous menu (updates saved) |
PF7 | Scroll up |
PF8 | Scroll down |
PF9 | Swap display panels |
PF12 | Return to previous menu (updates discarded) |
The bulk of a function's parameters/options are specified by entering either a YES or NO value, or leaving the option blank. Entering a YES will enable the option; entering a NO will disable it. Once entered, each parameter and option will display with your last entry until you change it.
Where appropriate, you may enter descriptions that can serve as documentation. For example, each volume and dataset name group may have an optional description associated with it. You may use this description to describe the function of the group, document who created the group, etc. The description fields are provided strictly for your use and are included in the generated initialization commands for documentation.
All OS/EM commands are generated via ISPF skeleton processing. If an initialization member is requested, the final output is placed in the dataset pointed to by DD name OS$FILE which is automatically allocated when you enter the ISPF interface. If the command is issued online, the final output is executed via a TSO EXEC command.
The following initialization members are currently generated:
If you browse any of the initialization members, you will note that each exit point is generated as a separate OS/EM command. This is not an OS/EM requirement but it makes the commands easier to "read". Comments are included to help document what the command is for, and to document the user who last generated the command (along with date and time). If you use the description fields, they will be included as comments in the generated commands.
Do not EDIT the initialization commands. All maintenance of the initialization members should be done through the interface. Any changes you make by editing the member will not be included the next time you use the interface unless you have executed OSV6 and used the REBUILD command to resync the interface.
An extensive set of HELP screens is supplied for the ISPF interface. These screens will guide you through the various fields on their "owning" panels and explain the use/contents of the fields.
The OS/EM ISPF Interface is reached either by selecting the OS/EM option from an existing ISPF Menu screen (assuming you created an OS/EM option on some existing ISPF Menu screen during the installation process), or by entering the command OS$START from the TSO READY prompt or ISPF Option 6 (TSO Command Processor).
The Primary Option Menu (refer to Figure 1) presents several selections. Each option presents another selection menu, taking you down the path you have chosen.
Each of these paths is presented in the following sections:
1 | System Level Controls (see "System Controls, Maintenance & Installation Functions") |
2 | Basic Exit Functions (see "Basic Exit Functions") |
3 | Extended OS/EM Functions (see "Extended OS/EM Functions") |
4 | Query OS/EM Status (see "Query OS/EM Status") |
5 | Reload Exits (see "Reload Exits") |
6 | Set JES2 name (see "Set JES2 Name") |
7 | Execute Pending Changes (see "Execute Pending Changes") |
8 | Build Initialization Member (see "Build Initialization Member") |
T | ISPF Tutorial |
X | Exit OS/EM |
This menu is divided into three sections:
Figure 2. Setup and Maintenance
Enter the number for the function that needs to be performed. The appropriate panel will then be displayed.
Each of these paths is presented in the following sections:
System Level Controls | |
Maintenance | See "Maintenance" |
Installation | See "Installation" |
The Authorization Codes function is used to authorize OS/EM to execute on your installation's CPU(s).
Enter the Authorization Code supplied with your installation materials.
When your order for OS/EM was placed, you were asked for the four low-order digits of the CPUID you will be running on. Therefore you need to supply only one CPUID even if your CPU contains more than one processor.
Each CPU that you intend to run OS/EM on must have an authorization code. Multiple authorization codes are allowed in the initialization member so that a single initialization member can be used for all the CPUs at your site.
Note: You may also want to add authorization codes for your disaster recovery site so that there will be no problems if you have to execute offsite.
There are 3 line commands available:
A | Add a new code. Enter 'A' in the SEL column and overtype any existing information and press enter. |
D | Delete an existing code. Enter 'D' in the SEL column to delete an entry no longer needed. |
S | Select an existing code to update the description. Enter 'S' in the SEL column to update the description field. The authorization code itself may not be updated. If an incorrect code is entered, you will need to re-add it as a new code, then delete the incorrect code. |
Warning messages will be issued starting 30 days before expiration of the authorization code. You will need to obtain a new code within that time.
By default OS/EM will produce the message OS$DCN031 *WARNING* OS/EM WILL EXPIRE IN xx DAYS every hour for the entire month before expiration.
Figure 4. Expiration Warning Message Control
Some customers have found this to be distracting and have requested a way to turn off the warning message. This function will allow you to suppress the message.
Note: Suppressing this message may be unwise as OS/EM will fail to operate once your current authorization code has expired.
This function defines the TSO users who are to be notified in the event of an ABEND.
TSO user IDs can be defined explicitly to one or more exit functions and/or to one or more user groups which are subsequently defined to the desired exit functions.
Each exit function can have a maximum of three user definitions. Therefore, it is often recommended that user groups be used.
Selection Options:
1 | Define & maintain notification user groups. |
2-12 | Define users/groups to receive ABEND notification for specific user exit functional areas. |
13 | Define users/groups to receive ABEND notification for any OS/EM exit. |
14 | Define users/groups to receive ABEND notification for any user exit. |
Options 2 through 14 have the same selection panel and so will not be described individually.
This function provides the ability to group multiple users into a single logical entity that can be used for ABEND notification. Up to 32 user groups can be defined.
When this function is entered, the list of group names is displayed. PF7 / PF8 scrolls backwards/forwards through the group list.
Figure 6. Define User Groups Menu
The S line command selects the entry to be defined or altered. The group name and description fields can be entered or modified (Caution: altering the group name may have adverse effects on existing abend notification lists).
When Enter is pressed, the following entry box will be displayed:
Figure 7. Define User IDs for a Group
Enter the user name(s) in the available fields (1 through 8). PF3 completes the user name definition and returns to the user group list panel.
User Notes:
Options 2 through 12 of the Notify Menu panel maintain the notification lists for user exit abends. When any of these options is selected, the following panel is displayed:
Figure 8. Define IDs or Groups
Enter the TSO user IDs and/or user groups to be notified of an abend. PF3 completes the update process and the user is returned to the Notify Menu.
OS/EM can create SMF records to track each execution of the OS$CNTL command and its output. Job Routing changes also create SMF records.
Field entry is as follows:
Enter Yes or No to control the creation of the OS/EM SMF records.
Enter the number of the record type you want OS/EM to use.
Note: This number may also be specified on the OSV6 subsystem PROCLIB member. Be sure that it is the same number if it is specified in both places. See Step 7: Define subsystem name OSV6 in the OS/EM Reference manual.
OS/EM can track the number of times an exit is called and the CPU time each exit took to execute. These values are displayed on the OS/EM Query Report. Because tracking these values adds overhead to your system, it is suggested that you normally leave this tracking function disabled.
Figure 10. OS/EM Performance Stats
Enter YES to enable performance tracking, enter NO to disable tracking.
The OS/EM ISPF interface allows you to execute online (or via batch) the changes you have made to the different options. To make the changes effective across IPLs, the INIT members have to be updated.
To remind you of this needed function, a warning pop-up window is displayed each time you execute the changes.
You may disable this message with the WARN System Level Control.
Figure 11. OS/EM Execution Warn Mode Panel
Enter NO to turn off the warning pop-up. Enter YES to keep the pop-up reminder.
Note: This setting is stored in the individual user's ISPF profile dataset. As such, this setting applies only to that OS/EM user.
This section allows you to remove old entries from the Pending Changes table and synchronize the tables used by the ISPF interface with the currently active options on the system running OS/EM.
The Pending Changes Maintenance function is used to clean up the Pending Changes table by deleting changes that have been permanently implemented by having the initialization members built (see "Build Initialization Member").
This function is particularly useful when frequent changes are being made to OS/EM (e.g. initial setup and tuning) because it reduces the amount of data in the table (all changes to the OS/EM system are recorded in the Pending Changes table).
Note:
Figure 12. Pending Changes Maintenance
Field entry is as follows:
The Rebuild Function reconstructs the ISPF tables from the current OS/EM system environment.
The function first executes the Query command to obtain the OS/EM information currently in storage, then deletes the old tables and recreates them from the information obtained from the Query command. Any descriptions which have been previously entered will be copied from the original ISPF tables before they are deleted.
The Rebuild Function can be of great use when changes have been made to the OS/EM system environment without going through the ISPF interface.
Field entry is as follows:
If used, be sure to enclose the DSN in apostrophes (single quotes), otherwise your TSO ID will be appended to the front of the dataset name.
Again, use apostrophes to qualify the dataset name.
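For example, assuming a TSO prefix of TSOUSER and a hypothetical dataset name:

OSEM.QUERY.REPORT       is treated as TSOUSER.OSEM.QUERY.REPORT
'OSEM.QUERY.REPORT'     is used exactly as entered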
Note: If you have valid changes pending, they should be executed prior to using this function, or those changes will be lost. See "Execute Pending Changes" for more information on this process.
Figure 14. ISPF Table Rebuild Utility
Since this process takes several minutes to complete, the above panel is displayed to let you know what processing is currently being done.
The create process is used when OS/EM is first installed. The function creates all of the ISPF tables which the ISPF interface uses to store the information needed to build the initialization parameter members used at IPL time.
Figure 15. Create OS/EM ISPF Tables
If the table create process fails for any reason, you cannot simply reselect it from the menu. You must first delete any tables that may have been added to the new table library.
Use ISPF option 3.1 (Library Utility) to delete any members which may have been added.
Note: The OS/EM supplied table library (TLIB) contains three members, OSEMCMDS, OSEMKEYS and OSEMVER. Be sure these members are not deleted; if they have been, recopy them into the table library before the CREATE process is restarted.
After the table library has been cleaned up, reselect the CREATE process from the maintenance menu.
The upgrade function parses a Query Report of your current OS/EM environment to determine which exits and/or optional features you are using and stores that information in ISPF tables. This function will also rebuild the initialization members.
Note: Since the upgrade function rebuilds the initialization members, it is advisable to execute this function before you IPL. Otherwise the initialization members which the install procedure places into parmlib will be empty, and no OS/EM features or user exits will be activated.
If you are going to run this function prior to your first IPL with the new OS/EM release, you may allow the Upgrade to execute the query directly. However, if you will be doing an IPL before the upgrade, you should create a Query Report using the ALL parameter and save the report so that the upgrade function will have access to it.
Note: To successfully run the upgrade function on a different machine from where OS/EM is currently running, create a query report and point the upgrade function to it.
The basic OS/EM Function provides for the dynamic loading and reloading of all supported OS/390 and z/OS exits. Exit points may be enabled and disabled dynamically, and, where appropriate, exit points may be limited to specific jobnames, giving an installation a Quality Assurance or testing environment not previously available.
For a complete list, see Appendix A, "Supported Exits".
New to version 6.0, the OS/EM Autoinstall Feature greatly simplifies the installation and migration process by dynamically defining exit points and automatically loading both OS/EM and user exit modules.
Autoinstall provides the following functions:
Prior releases of OS/EM required the user to modify the JES2 initialization parameters to remove existing exit & module definitions and add the OS/EM exits and load modules. Additionally, the user exits had to be manually defined to OS/EM.
Autoinstall provides a much simpler and automated implementation process requiring no initial changes to the JES2 & OS/EM parameters in order to initialize OS/EM. For more information about migrating JES2 exits into an OS/EM environment, refer to the OS/EM Installation Guide.
Basic Exit Functions allow you to specify:
If you wish to enable OS/EM Extended Functions for an exit point, such as HSM Optimizer functions, but you do not have any user exits, you do not need to "visit" Basic Function Support. The OS/EM Extended Function interface ensures that the proper specifications are made in order to generate support, and you do not need to concern yourself with which exit point(s) supports the OS/EM Extended Function you are invoking.
Figure 17. Basic Exit Functions
Each of these paths is presented in the following sections:
This panel displays all of the JES2 exits in alphabetical order by exit name and tells you which JES2 user exits are being used. You may page up and down with PFK7 and PFK8.
Figure 18. JES2 Exit Selection List
Line commands are:
O | Define exit point options
This function controls the execution options for the exit point. "JES2 Exit Point Options" documents the resulting panel(s) and actions for this selection.
|
U | Define user exit modules
This function defines the user exit modules to be executed for the exit point and their execution sequence. "JES2 User Exit Modules" documents the resulting panel(s) and actions for this selection. |
The following panel is displayed when the O line command is entered for a JES2 exit point.
Figure 19. JES2 Exit Point Options
This field entry panel allows you to set the general execution options for the selected JES2 user exit.
ACTIVE - the defined JES2 exit(s) will be loaded at IPL time and executed when the exit is driven.
INACTIVE - the defined JES2 exit(s) will be loaded at IPL time but will not be executed when the exit is driven.
DISABLE - the defined JES2 exit(s) will not be loaded or executed.
FIRST - specifies that the OS/EM Extended Functions will be applied before any JES2 user exit modules are invoked for this exit point.
LAST - specifies that the OS/EM Extended Functions will be applied after the JES2 user exits are invoked for this exit point.
Note: If the exit being displayed does not have associated OS/EM Extended Functions, the OS/EM fields will be locked.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in any of the OS/EM extended functions for this JES2 exit point.
0 - the JES2 exit modules must be MVS re-entrant.
1 - the JES2 exit modules need not be MVS re-entrant.
Note: Key 0 programs may be loaded to LPA, key 1 programs will be loaded to CSA.
The return code (register 15) passed by the OS/EM exit interface if no User exit modules are present, if the exit module controller module is not loaded, or some other internal error has occurred. There is a default return code provided by the exit interface module for each JES2 exit point that is managed. Use this option with extreme caution.
OS/EM checks for valid return codes (register 15) being issued by user exit modules as defined by the IBM JES2 exit programming documentation for each exit point. The valid return codes for each IBM JES2 exit point are built into OS/EM. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for good return codes (register 15) being issued by user exit modules. A good return code allows subsequent user exit modules to be called. OS/EM provides a default list. For example, if a user exit for IEFUTL set the return code to zero (indicating the job processing is to be cancelled), then no other user exit modules would be called, including the optional features if they were to be called last. Check the IBM JES2 exit programming documentation to determine which return codes are valid for good return codes. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for a return code (register 15) being issued by a user exit module then disables that user exit module from being executed again. This option is primarily provided for JES3 support, but could be used for one time loading of tables, etc.
Note: When specifying return codes, enter any combination of the following values separated by blanks or commas:
The following panel is displayed when the U line command is entered for a JES2 user exit point.
Figure 20. JES2 User Exit Modules
This panel displays the modules defined for the selected JES2 exit point. The user can add and delete module entries, update entries, and change the execution sequence of the user exit modules.
Line commands are:
S | Select a user exit module entry
This function displays the user exit module field entry panel and allows the user to modify the definition parameters for that module. "JES2 User Exit Module Definition" documents the resulting panel(s) and actions for this selection.
|
I | Insert entry
This function adds an empty module entry immediately following the specified entry. This blank entry can then be defined by using the S line command to edit the module details.
|
D | Delete entry
This function deletes the specified module entry.
|
C | Copy entry
This function makes a copy of the specified entry. This line command is used in conjunction with the A and B line commands to control the location of the copied entry.
|
M | Move entry
This function relocates the specified entry within the module selection list. This line command is used in conjunction with the A and B line commands to control the new location of the moved entry.
|
R | Repeat entry
This function duplicates the specified entry and inserts it immediately following the specified entry.
|
A | Locate AFTER
This function locates a copied/moved entry immediately after the selected entry.
|
B | Locate BEFORE
This function locates a copied/moved entry immediately before the selected entry. |
Figure 21. JES2 User Exit Module Definition Panel
This field entry panel defines the characteristics of the user exit module to be executed.
YES - the defined JES2 exit module will be loaded at IPL time and executed when the exit is driven.
NO - the defined JES2 exit(s) will not be loaded or executed.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in the defined user exit module for this JES2 exit point.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
This field allows the execution of the user exit module to be restricted to specific jobnames and/or jobname masks. Multiple names or masks should be separated by spaces.
This is particularly useful for limiting the scope of an exit module while it is being tested by restricting its execution to specific test jobs. When the module is to be put into production, the execution of the exit can be made global by removing the jobname limits.
Note: If the exit point does not support limits this field will be locked and no entry will be allowed.
This specifies the address of the jobname field in the parameters being passed to this user exit. Refer to the TSO TEST command for a discussion of addressing conventions for this parameter. The value contained at the specified address will be compared to the jobname specified by the limits entry above and if a match is found the exit is allowed to execute.
This field provides an area to document the function of the user exit module.
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
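For example (the jobnames shown are hypothetical):

Example | Explanation
TEST? | Matches TEST followed by any single character: TEST1, TESTA
PAY%% | Matches PAY followed by any two numeric characters: PAY01
CICS& | Matches CICS followed by any single alphabetic character: CICSP
PROD- | Matches any jobname beginning with PROD: PRODA, PROD123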
Figure 22. Basic JES3 Exit Selection
Line commands are:
O | Define exit point options
This function controls the execution options for the exit point. "JES3 User Exit Options" documents the resulting panel(s) and actions for this selection.
|
U | Define user exit modules
This function defines the user exit modules to be executed for the exit point and their execution sequence. "JES3 User Exit Modules" documents the resulting panel(s) and actions for this selection. |
This panel displays all of the JES3 exits in alphabetical order by exit name and tells you which JES3 user exits are being used. You may page up and down with PFK7 and PFK8.
The following panel is displayed when the O line command is entered for a JES3 user exit point.
Figure 23. JES3 User Exit Options
This field entry panel allows you to set the general execution options for the selected JES3 user exit.
ACTIVE - the defined JES3 exit(s) will be loaded at IPL time and executed when the exit is driven.
INACTIVE - the defined JES3 exit(s) will be loaded at IPL time but will not be executed when the exit is driven.
DISABLE - the defined JES3 exit(s) will not be loaded or executed.
FIRST - specifies that the OS/EM Extended Functions will be applied before any JES3 user exit modules are invoked for this exit point.
LAST - specifies that the OS/EM Extended Functions will be applied after the JES3 user exits are invoked for this exit point.
Note: If the exit being displayed does not have associated OS/EM Extended Functions, the OS/EM fields will be locked.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in any of the OS/EM extended functions for this JES3 exit point.
BAKR - The exit is called using the BAKR (Branch and Stack) instruction.
BALR - The exit is called using the BALR (Branch and Link Register) instruction.
ARET - The exit is called using the ACALL macro and control is returned with the ARETURN macro without the RC= parameter.
ARETRC - The exit is called using the ACALL macro and control is returned with the ARETURN macro with the RC= parameter.
Note: For more information about these program linkage options, refer to the IBM JES3 Customization manual.
The return code (register 15) passed by the OS/EM exit interface if no User exit modules are present, if the exit module controller module is not loaded, or some other internal error has occurred. There is a default return code provided by the exit interface module for each JES3 exit point that is managed. Use this option with extreme caution.
OS/EM checks for valid return codes (register 15) being issued by user exit modules as defined by the IBM JES3 exit programming documentation for each exit point. The valid return codes for each IBM JES3 exit point are built into OS/EM. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for good return codes (register 15) being issued by user exit modules. A good return code allows subsequent user exit modules to be called. OS/EM provides a default list. For example, if a user exit for IEFUTL set the return code to zero (indicating the job processing is to be cancelled), then no other user exit modules would be called, including the optional features if they were to be called last. Check the IBM JES3 exit programming documentation to determine which return codes are valid for good return codes. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for a return code (register 15) being issued by a user exit module then disables that user exit module from being executed again. This option is primarily provided for JES3 support, but could be used for one time loading of tables, etc.
Note: When specifying return codes, enter any combination of the following values separated by blanks or commas:
The following panel is displayed when the U line command is entered for a JES3 user exit point.
Figure 24. JES3 User Exit Modules
This panel displays the modules defined for the selected JES3 exit point. The user can add and delete module entries, update entries, and change the execution sequence of the user exit modules.
Line commands are:
S | Select a user exit module entry
This function displays the user exit module field entry panel and allows the user to modify the definition parameters for that module. "JES3 User Exit Module Definition" documents the resulting panel(s) and actions for this selection.
|
I | Insert entry
This function adds an empty module entry immediately following the specified entry. This blank entry can then be defined by using the S line command to edit the module details.
|
D | Delete entry
This function deletes the specified module entry.
|
C | Copy entry
This function makes a copy of the specified entry. This line command is used in conjunction with the A and B line commands to control the location of the copied entry.
|
M | Move entry
This function relocates the specified entry within the module selection list. This line command is used in conjunction with the A and B line commands to control the new location of the moved entry.
|
R | Repeat entry
This function duplicates the specified entry and inserts it immediately following the specified entry.
|
A | Locate AFTER
This function locates a copied/moved entry immediately after the selected entry.
|
B | Locate BEFORE
This function locates a copied/moved entry immediately before the selected entry. |
Figure 25. JES3 User Exit Module Definition Panel
This field entry panel defines the characteristics of the user exit module to be executed.
YES - the defined JES3 exit module will be loaded at IPL time and executed when the exit is driven.
NO - the defined JES3 exit(s) will not be loaded or executed.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in the defined user exit module for this JES3 exit point.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
This field allows the execution of the user exit module to be restricted to specific jobnames and/or jobname masks. Multiple names or masks should be separated by spaces.
This is particularly useful for limiting the scope of an exit module while it is being tested by restricting its execution to specific test jobs. When the module is to be put into production, the execution of the exit can be made global by removing the jobname limits.
Note: If the exit point does not support limits this field will be locked and no entry will be allowed.
This specifies the address of the jobname field in the parameters being passed to this user exit. Refer to the TSO TEST command for a discussion of addressing conventions for this parameter. The value contained at the specified address will be compared to the jobname specified by the limits entry above and if a match is found the exit is allowed to execute.
This field provides an area to document the function of the user exit module.
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
This panel displays all of the MVS exits in alphabetical order by exit name and tells you which MVS user exits are being used. You may page up and down with PFK7 and PFK8.
Figure 26. Basic MVS Exit Selection
Line commands are:
O | Define exit point options
This function controls the execution options for the exit point. "MVS User Exit Options" documents the resulting panel(s) and actions for this selection.
|
U | Define user exit modules
This function defines the user exit modules to be executed for the exit point and their execution sequence. "MVS User Exit Modules" documents the resulting panel(s) and actions for this selection. |
The following panel is displayed when the O line command is entered for an MVS exit point.
Figure 27. MVS User Exit Options
This field entry panel allows you to set the general execution options for the selected MVS user exit.
ACTIVE - the defined MVS exit(s) will be loaded at IPL time and executed when the exit is driven.
INACTIVE - the defined MVS exit(s) will be loaded at IPL time but will not be executed when the exit is driven.
DISABLE - the defined MVS exit(s) will not be loaded or executed.
FIRST - specifies that the OS/EM Extended Functions will be applied before any MVS user exit modules are invoked for this exit point.
LAST - specifies that the OS/EM Extended Functions will be applied after the MVS user exits are invoked for this exit point.
Note: If the exit being displayed does not have associated OS/EM Extended Functions, the OS/EM fields will be locked.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in any of the OS/EM extended functions for this MVS exit point.
The return code (register 15) passed by the OS/EM exit interface if no User exit modules are present, if the exit module controller module is not loaded, or some other internal error has occurred. There is a default return code provided by the exit interface module for each MVS exit point that is managed. Use this option with extreme caution.
OS/EM checks for valid return codes (register 15) being issued by user exit modules as defined by the IBM MVS exit programming documentation for each exit point. The valid return codes for each IBM MVS exit point are built into OS/EM. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for good return codes (register 15) being issued by user exit modules. A good return code allows subsequent user exit modules to be called. OS/EM provides a default list. For example, if a user exit for IEFUTL set the return code to zero (indicating the job processing is to be cancelled), then no other user exit modules would be called, including the optional features if they were to be called last. Check the IBM MVS exit programming documentation to determine which return codes are valid for good return codes. If anything is specified it completely replaces the IBM list. Use this option with extreme caution.
OS/EM checks for a return code (register 15) being issued by a user exit module then disables that user exit module from being executed again. This option is primarily provided for JES3 support, but could be used for one time loading of tables, etc.
Note: When specifying return codes, enter any combination of the following values separated by blanks or commas:
The following panel is displayed when the U line command is entered for an MVS user exit point.
Figure 28. MVS User Exit Modules
This panel displays the modules defined for the selected MVS exit point. The user can add and delete module entries, update entries, and change the execution sequence of the user exit modules.
Line commands are:
S | Select a user exit module entry
This function displays the user exit module field entry panel and allows the user to modify the definition parameters for that module. "MVS User Exit Module Definition" documents the resulting panel(s) and actions for this selection.
|
I | Insert entry
This function adds an empty module entry immediately following the specified entry. This blank entry can then be defined by using the S line command to edit the module details.
|
D | Delete entry
This function deletes the specified module entry.
|
C | Copy entry
This function makes a copy of the specified entry. This line command is used in conjunction with the A and B line commands to control the location of the copied entry.
|
M | Move entry
This function relocates the specified entry within the module selection list. This line command is used in conjunction with the A and B line commands to control the new location of the moved entry.
|
R | Repeat entry
This function duplicates the specified entry and inserts it immediately following the specified entry.
|
A | Locate AFTER
This function locates a copied/moved entry immediately after the selected entry.
|
B | Locate BEFORE
This function locates a copied/moved entry immediately before the selected entry. |
Figure 29. MVS User Exit Module Definition Panel
This field entry panel defines the characteristics of the user exit module to be executed.
YES - the defined MVS exit module will be loaded at IPL time and executed when the exit is driven.
NO - the defined MVS exit(s) will not be loaded or executed.
Enter up to three TSO User IDs or Notify Group Names to be notified if an ABEND occurs in the defined user exit module for this MVS exit point.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
Note: If a library is specified and the load module is not found in that library, OS/EM will not continue to search for the module and it will not be loaded.
This field allows the execution of the user exit module to be restricted to specific jobnames and/or jobname masks. Multiple names or masks should be separated by spaces.
This is particularly useful for limiting the scope of an exit module while it is being tested by restricting its execution to specific test jobs. When the module is to be put into production, the execution of the exit can be made global by removing the jobname limits.
Note: If the exit point does not support limits this field will be locked and no entry will be allowed.
This field provides an area to document the function of the user exit module.
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
Figure 30. Extended OS/EM Support
Dataset Name Groups are used to establish a list of dataset name mask(s) and/or dataset name(s). The group names are then used in various OS/EM functions instead of specifying the same dataset name or masks(s) in every function.
Build groups as needed. A dataset name or mask(s) may appear in more than one group since each OS/EM function will use Dataset Name Groups in a different way.
This dialog displays the list of Dataset Name Groups and provides the functions to create new groups as well as maintain and delete existing groups.
Figure 31. Dataset Name Group List
This panel displays the Dataset Groups that are currently defined to the OS/EM system. The PF7 & PF8 keys can be used to scroll up & down the list of groups.
Creating Group Names
Group Names are a maximum of eight characters in length, and may not start with the letters NO.
Each Group Name represents a group of one or more dataset names and/or mask(s). Dataset group names are used wherever OS/EM Extended Functions (such as the HSM Optimizer Direct to ML2 function) can use dataset name groups for its INCLUDE option.
There is no practical limit to the number of dataset name groups that may be created, especially since the groups may consist of dataset name(s)/mask(s) that represent a subset of your installation's total number of datasets.
It is suggested that you develop a naming scheme which will give some indication as to the dataset name group's use.
Note: Groups are stored internally in alphabetical order. Keep this in mind when creating group names. The OS/EM initialization member will also be built in alphabetic order. This determines OS/EM's search order when going through the dataset name(s) and mask(s) in each group to find a match. Dataset name(s) and mask(s) are searched in the order entered within the Dataset Name Group list. The first match that OS/EM finds will be the one used.
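For example, a hypothetical group named PRODDSN might contain the entries PROD.+ and PAYROLL.+.MASTER (mask qualifiers are described later in this section). OS/EM would test each dataset name against those entries in the order entered and use the first match it finds.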
Panel Input Fields
YES | The Dataset Groups function is enabled and the defined groups are available to the OS/EM extended functions. |
NO | The Dataset Groups function is disabled. |
A - Add a Dataset Group (see "Add a DSN Group")
C - Change the Dataset Group (see "Change a DSN Group")
D - Deletes the Dataset Group (see "Delete a DSN Group")
T - Toggles the Dataset Group to/from being temporarily disabled (see "Temporarily Disable a DSN Group")
The following screen is displayed when the line command A (Add a group) is entered:
Figure 32. DSN Name Groups - Add Group Name
This is the name of the group to be defined. The group name can be up to 8 characters in length and must not start with NO. See Creating Group Names earlier in this section for more details about selecting a Dataset Group name.
This optional field provides an area for the user to provide comments relating to the group.
When the ENTER key is pressed, the Change Group panel is displayed (see Figure 33). This allows the user to define the dataset name(s) and mask(s) which will constitute the group.
The following panel is displayed in response to the Change and Add (after the Dataset Group is defined) line commands. This panel allows you to change the group description and to change, add and delete dataset name(s) and mask(s) for the group.
Figure 33. DSN Name Groups - Change
The change panel contains a scrollable area where the dataset name(s) or mask(s) are maintained. Each row consists of a single dataset name or mask.
The following line commands can be used:
D - Delete the entry
I - Insert a new dataset name / mask immediately following the selected line.
S - Select the entry for update. The dataset name / mask can be modified prior to pressing ENTER.
You may NOT change the group name. If you wish to change the name, you must create a new group with the desired name and enter all the dataset name(s) and mask(s) that will constitute the group. Delete the old group once the new group is active.
A dataset group is deleted by entering a D line command next to the desired group.
The group, and all the dataset name(s) and mask(s) comprising it, will be deleted. OS/EM will check that the group is not referenced in any function and will not delete the group if it is; however, no checks are made to determine whether the group is still referenced within an OS/EM initialization member. Initialization will produce undesired results if undefined Dataset Name Groups are referenced.
A series of panels will display after a group has been selected for deletion. These panels are as follows:
Figure 34. DSN Name Groups - Wait
Figure 35. DSN Name Groups - Delete
Figure 36. DSN Name Groups - References
Figure 37. DSN Name Groups - No References
A dataset group may be temporarily disabled by entering a T in the line command field before the desired group entry. The group definition will be retained but none of the dataset name(s) within the group will be available for OS/EM processing.
Disabled groups are indicated by a T immediately to the left of the group name. These groups may be enabled by entering a T in the line command field. The T line command acts as a disable/enable toggle.
While the definition is retained, the same considerations apply as if the group were being deleted (see "Delete a DSN Group").
As with all changes in the interface, you must remember to execute this change online for it to take effect.
Dataset name masks are created by using qualifiers within a dataset name. Valid qualifiers are:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The minus sign is used to unconditionally match a single node of the dataset name. Multiples are allowed. |
+ | The plus sign is used to unconditionally match all characters/nodes of the dataset name beyond where it is entered in the specification. A single plus sign may be specified. |
Example | Explanation |
AA | Specifies single-level dataset AA |
AA?AA | Specifies a single-level dataset name of five characters. The first and last two characters are AA. The third character can be anything: AA5AA,AABAA, etc. |
AA+ | Specifies any dataset name beginning with the two characters AA: AA55.TEST |
AA- | Specifies a single-level dataset name beginning with the characters AA: AA5PROD |
AA.+ | Specifies a two or more level dataset name. The first node is AA: AA.PROD.COMP |
AA.- | Specifies a two level dataset name. The first node is AA: AA.CICS |
-.AA | Specifies a two level dataset name. The last node is AA: PROD.AA |
SYS1.-.HRP1000 | Specifies a three-level dataset name. The first node is SYS1 and the last node is HRP1000 |
-.-.- | Specifies any three-level dataset name. This type of specification will match every three-level dataset name within your installation. |
GSAX.-.PRM | Specifies a three-level dataset name. The first node is GSAX and the last node is PRM. |
SYS?.- | Specifies a two-level dataset name. The first node starts with SYS and any other character. The second node can be anything: SYS1.LINKLIB |
SYS&.- | Specifies a two-level dataset name. The first node starts with SYS and any other alphabetic character. The second node can be anything: SYSX.LINKLIB |
SYS%.- | Specifies a two-level dataset name. The first node starts with SYS and any other numeric character. The second node can be anything: SYS5.LINKLIB |
SYSX.-.EZT??? | Specifies a three-level dataset name. The first node is SYSX. The second node can be anything. The third node begins with EZT and any three characters: SYSX.CICS.EZT030 |
??SYSUT?.+ | Specifies a two or more level dataset name. The first node begins with any two characters, followed by SYSUT and any other single character. |
AA.+.BB | Specifies a three or more level dataset name. The first node is AA and the last node is BB. |
AA+AA | Specifies a single-level dataset name. The first two characters are AA and the last two characters are AA. The middle characters (up to four) can be anything, but there must be at least one middle character - AAAA will not match. |
SYSX.PROCLIB | A fully qualified dataset name. |
Volume name groups are used to establish a list of DASD volumes. These group names are then used in various OS/EM Extended Functions instead of specifying the same volume serial numbers in every function.
Build groups as needed. A volume serial number may appear in more than one group since each OS/EM Extended Function will use volume serial numbers in a different way.
This dialog displays the list of Volume Groups and provides the functions to create new groups as well as maintain and delete existing groups.
This panel displays the Volume Groups that are currently defined to the OS/EM system. The PF7 & PF8 keys can be used to scroll up & down the list of groups.
Creating Group Names
Group Names are a maximum of eight characters in length, and may not start with the letters NO.
Each Group Name represents a group of one or more DASD volumes. Masking characters may be used to define a generic range of volumes (see "Volume Masks" for more information on volume name masking). Volume group names are used wherever OS/EM Extended Functions (such as the HSM Optimizer defragmentation function) may require volume names on which to operate.
There is no practical limit to the number of volume groups that may be created, especially since groups may consist of volume serial mask(s) that represent a subset of your installation's total number of volumes.
It is suggested that you develop a naming scheme which will give some indication as to the volume group's use.
Note: Groups are stored internally in alphabetical order. Keep this in mind when creating group names. The OS/EM initialization member will also be built in alphabetic order. This determines OS/EM's search order when going through the volume name(s) and mask(s) in each group to find a match. Volume name(s) and mask(s) are searched in the order entered within the Volume Group list. The first match that OS/EM finds will be the one used.
Panel Input Fields
YES | The Volume Groups function is enabled and the defined groups are available to the OS/EM extended functions. |
NO | The Volume Groups function is disabled. |
A - Add a Volume Group (see "Add a Volume Group")
C - Change the Volume Group (see "Change a Volume Group")
D - Delete the Volume Group (see "Delete a Volume Group")
T - Toggle the Volume Group to/from being temporarily disabled (see "Temporarily Disable a Volume Group")
The following screen is displayed when the line command A (Add a group) is entered:
Figure 39. Volume Name Groups - Add Group Name
This is the name of the group to be defined. The group name can be up to 8 characters in length and must not start with NO. See Creating Group Names earlier in this section for more details about selecting a Volume Group name.
This optional field provides an area for the user to provide comments relating to the group.
When the ENTER key is pressed, the Change Group panel is displayed (see Figure 40). This allows the user to define the volume name(s) and/or mask(s) which will constitute the group.
The following panel is displayed in response to the Change and Add (after the Volume group is defined) line commands. This panel allows you to change the group description and to change, add, and delete volume name(s) and mask(s) for the group.
Figure 40. Volume Name Groups - Change
The change panel contains a scrollable area where the volume name(s) or mask(s) are maintained. Each row consists of a single volume name or mask.
The following line commands can be used:
D - Delete the entry
I - Insert a new volume name / mask immediately following the selected line.
S - Select the entry for update. The volume name / mask can be modified prior to pressing ENTER.
You may NOT change the group name. If you wish to change the name, you must create a new group with the desired name and enter all the volume name(s) and mask(s) that will constitute the group. Delete the old group once the new group is active.
A Volume Group is deleted by entering a D line command next to the desired group.
The group, and all the volume name(s) and mask(s) comprising the group, will be deleted. OS/EM will check that the group is not referenced in any function and will not delete the group if it is. However, no checks are made to determine whether the group is still referenced within an OS/EM initialization member. Initialization will produce undesired results if undefined Volume Name Groups are referenced.
A series of confirmation panels, similar to those shown for Dataset Name Groups, will display after a group has been selected for deletion.
A volume group may be temporarily disabled by entering a T in the line command field before the desired group entry. The group definition will be retained but none of the volume name(s) within the group will be available for OS/EM processing.
Disabled groups are indicated by a T immediately to the left of the group name. These groups may be enabled by entering a T in the line command field. The T line command acts as a disable/enable toggle.
While the definition is retained, the same considerations apply as if the group were being deleted (see "Delete a Volume Group").
As with all changes via the interface, you must remember to execute the change online to have it take effect.
Volume masks are created by using qualifiers within a volume serial number. Valid qualifiers are:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
Example | Explanation |
VOL0%% | Matches any serial number that begins with VOL0 and any two numeric characters: VOL010 |
&%%%%% | Matches any serial number that begins with any alpha character and five numbers |
The HSM Optimizer allows you to control DFHSM migration and backup more precisely. DFHSM as supplied by IBM, in both SMS and non-SMS environments, provides a limited set of specifications for determining which datasets will, or will not, be migrated or backed up: complete volumes may be excluded, datasets may be excluded from migration, and a residency factor (the number of days since last reference) may be specified. The HSM Optimizer, in contrast, allows multiple residency specifications, uses the dataset size as a factor at migration time (rather than at allocation time, as in DFSMS), and can relate a dataset's size to its specification in a dataset name list.
DFHSM, as currently supplied by IBM, offers only coarse control over which datasets get migrated and backed up. Only one aging factor may be supplied, and this factor applies to all datasets, except those explicitly excluded from processing. If your installation has a very aggressive aging factor, say only one or two days, and a high percentage factor for migration 'kick-in', datasets will be continually migrated and recalled, with no regard to their usage or size. Further, the same sort of factors apply to DFHSM's Level-1 storage. If your Level-1 storage is on the small side and a large dataset migrates, you might find that many of your recalls will be coming from Level-2 storage. If this level is tape, there will be delays while the tape is found and mounted.
The HSM Optimizer has been designed to give you much finer control over the migration and backup process. It also gives you the option to reblock datasets when they are recalled, and to automatically defragment DASD volumes based on supplied criteria. The following details the HSM Optimizer's functions (it is assumed that you have some familiarity with DFHSM processing and the setting of options in the ARCCMDxx parm member):
We recommend that you run the report system before attempting to specify any of the HSM Optimizer's controls. Once you have determined a strategy, implement it in phases. Also, to get the most effect from the HSM Optimizer, set DFHSM's aging factors to one day and specify a low THRESHOLD for the volumes (see the Programmer's Guide). The HSM Optimizer only "sees" those datasets that DFHSM considers eligible for processing. By setting aggressive factors in your DFHSM ARCCMDxx parm member, you let the HSM Optimizer determine whether a dataset should be processed. Consider, too, removing all specifications for datasets that you currently set as NOMIG or COMMANDMIGRATION via the SETMIG statement. Handle such datasets via the HSM Optimizer. This will place all DFHSM control in one place.
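For illustration only, an ARCCMDxx fragment implementing this strategy might look like the following. The volume serial WORK01, the unit type, and the threshold values are placeholders, not recommendations, and the sketch assumes non-SMS primary volumes defined to DFHSM with ADDVOL:
/* Illustrative ARCCMDxx fragment - values are placeholders only   */
/* Age datasets after one day of non-reference                     */
SETSYS DAYS(1)
/* Aggressive occupancy thresholds so DFHSM offers datasets to the */
/* Optimizer, which then decides what is actually processed        */
ADDVOL WORK01 UNIT(3390) PRIMARY(AUTOMIGRATION) THRESHOLD(80 40)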
Directed Recall requires that the following DFHSM SETSYS option be present in DFHSM ARCCMDxx member.
SETSYS RECALL(PRIVATE(UNLIKE))
Note: If the dataset is a DFSMS dataset this option will not be invoked. If QUICKPOOL allocation is in effect, you must specify DIRECTRECALL. DFHSM RECALL/RECOVERY will most likely fail if this option is not enabled.
Each of the Optimizer's functions is detailed in the following sections. Many of the panels are repetitious. For example, any function that is driven by a dataset group list has a panel on which you specify the dataset name groups. The interface presents this panel, with an appropriate title, for each function; but the panel will be detailed only once.
Each of these paths is presented in the following sections:
Backup control allows you to specify datasets which should not be backed up. DFHSM processing will back up every dataset that has changed or is new. Many installations have datasets that are created, used, and overwritten the next time they are used. There really is no need to back up such datasets.
The first panel presented (Figure 42) allows you to ENABLE or DISABLE backup control. You also specify whether any EXCLUDE dataset name group list(s) will be a part of backup processing.
The EXCLUDE list consists of previously defined dataset name groups. Any dataset that can be resolved to a group on the list will NOT be backed up. Be careful: make sure the datasets you exclude really do not need to be backed up. Note that the displayed description is the description you entered, if any, when you created the dataset name group. It is displayed here for documentation purposes.
Figure 42. HSM Optimizer Backup Control
Field entry is as follows:
Entering YES in this field will enable OS/EM's extended processing.
Entering NO will disable this function.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. They are created by using the Define Dataset Name Groups function (see "Define Dataset Name Groups"). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
To add a dataset name group, enter an A in the CMD field and overtype the name of the dataset name group in the group field (overtyping will not alter the old entry).
If you enter a group name which does not exist, a popup window (see Figure 43) will be displayed which lists all available groups. Use the 'S' line command to select as many groups as needed.
Note: The initial display of the Group Selection List will be positioned to the closest match for the name you tried to enter. Use the Up and Down scroll PF keys to reposition the display.
To delete an existing entry, enter a D in the CMD field.
Figure 43. Select DSN Groups for Backup Exclusion Pop-up
OS/EM's Defragmentation control automates the "compaction" of DASD volumes.
DASD volumes become fragmented over time. This can eventually lead to allocation failure because, while the total free space on the volume may be adequate, too many secondary allocations would be required to satisfy an allocation request.
DFHSM returns a "fragmentation index" every time it does space management on a volume. OS/EM uses this index, based on your specification, to determine when to issue a DFDSS defrag procedure.
Invoking the Optimizer's Defragmentation Control presents the following panel:
Figure 44. Defragmentation Control/Procedure
Entering YES in this field will enable OS/EM's extended processing.
Entering NO in this field will disable OS/EM's extended processing.
You may also specify the procedure name which will be used to start the DFDSS procedure which will actually defragment the DASD volume. The default name is OS$DFRAG. Any other procedure name can be specified. The procedure must exist in your installation's procedure library. Processing will not be affected if the procedure does not exist; however, the defragmentation process will not be done.
The procedure should be a regular DFDSS defragmentation procedure. It must be created with a single symbolic parameter - V - which becomes the serial number of the volume to be defragmented (this may be passed in the PARM of the DFDSS EXECUTE statement). Each execution of the procedure will defragment a single volume. It is suggested that the executed procedure not actually be the DFDSS procedure, but a procedure which submits a batch job, with the appropriate parameter, that is the DFDSS defragment job. This will prevent your system from being flooded with many started tasks when a large number of volumes are being defragmented.
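For illustration, the following sketch shows a minimal batch DFDSS defragmentation job of the kind such a submitted procedure might run. The job name, accounting information, and volume serial WORK01 are placeholders, not values supplied by OS/EM:
//DFRAGJOB JOB (ACCT),'DEFRAG WORK01',CLASS=A,MSGCLASS=X
//* Illustrative sketch only - defragments the single DASD volume
//* identified by the DASD DD statement (serial WORK01 here)
//DEFRAG   EXEC PGM=ADRDSSU
//SYSPRINT DD  SYSOUT=*
//DASD     DD  UNIT=3390,VOL=SER=WORK01,DISP=OLD
//SYSIN    DD  *
  DEFRAG DDNAME(DASD)
/*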
Setting Global Time values
Default: Time Start 0900, End 1400
Specifies the time of day during which OS/EM will issue the start for the defrag procedure. This time will apply to every defrag level which does not have its own time specification.
The time is based on a 24 hour clock. The first time parameter specifies the earliest start time that the defrag procedure will be submitted; the second time parameter, the latest time that the defrag procedure will be submitted.
For example, if you enter 0900 as the first time and 1400 as the second time, the defrag procedure will be submitted anywhere from 9AM to 2PM. The day of submission will depend on the DAYS that you specify.
Setting Global Days values
Default: Mon Y, Thu Y, Sat Y (all other days blank)
Specifies the day(s) of the week on which OS/EM will issue the start for the defrag procedure. The day(s) will apply to any defrag level which does not specifically set its own days.
Enter Y beneath the appropriate column for the desired day. Each day must be specifically entered. If you do not wish a particular day to be active, do not enter it.
If you enter MON, THU, SAT, for example, the defrag procedure will be submitted on these days.
Specify the defrag level you wish to customize by placing an S in the CMD column. Eight levels are available, each with its own fragmentation index, times, days, and list of volume groups. There is no inherent meaning to the level numbers.
The defrag level parameter may be used in a couple of different ways. You may code all 8 levels with the same fragmentation index, but supply a different volume list for each level. This creates a series of volume pools that may be defrag'd on different days of the week by specifying the proper TIME and DAY parameters. Using this technique enables you to defrag your volumes on a weekly basis, spreading the process throughout the week.
The second method would be to code different fragmentation indexes for each level, specifying appropriate volumes and TIME and DAY parameters. This method allows you to defrag volumes based on their content and usage (i.e., some volumes may have large files allocated to them; therefore, you might want to defrag such volumes more frequently to minimize secondary allocation).
(CMD: S = Select for update, G = Group update)
CMD | Lvl | Enable | Fragmentation Index | Time Start : End | Mon Tue Wed Thu Fri Sat Sun | VOL Groups |
| 1 | Y | 350_ | 2300 : 0300 | _ Y _ Y _ Y _ | Y |
| 2 | _ | ____ | ____ : ____ | _ _ _ _ _ _ _ | |
Enter a Y in the Enable column to turn on that level.
Specify the fragmentation index that is to be used to determine whether a list of volumes should be defragmented. The number ranges from 0 (the default) to 999. A 0 implies that no defragmentation is to be done. 999 indicates that the volume is very fragmented: equivalent to half a volume's worth of one-track datasets placed on every other track.
An appropriate initial value would be between 350 and 500. However, the value most appropriate to your installation must be determined by the type of datasets--large or small--on the volume, and the frequency of dataset allocation on the volume.
Specifies the time of day during which OS/EM will issue the start for the defrag procedure. This time applies to the specific defrag level being configured.
The time is based on a 24 hour clock. The first time parameter specifies the earliest start time that the defrag procedure will be submitted; the second time parameter, the latest time that the defrag procedure will be submitted.
For example, if you enter 0400 as the first time and 1400 as the second time, the defrag procedure will be submitted anywhere from 4AM to 2PM. The day of submission will depend on the DAYS that you specify.
Specifies the day(s) of the week on which OS/EM will issue the start for the defrag procedure. The day(s) will apply to the specific defrag level which is being configured.
Enter Y beneath the appropriate column for the desired day. Each day must be specifically entered. If you do not wish a particular day to be active, do not enter it.
If you enter MON, THU, SAT, for example, the defrag procedure will be submitted on these days.
If you wish to INCLUDE a list of volume groups to limit the effect of the defrag level, enter G in the CMD field. The Y in the VOL Groups column will either appear or disappear based on whether you have an INCLUDE group.
If no volume list is present for a defrag level, it is assumed that the level applies to ALL volumes.
When you press the enter key after using the G line command, a pop-up window (see Figure 45) will appear where you may specify the volume groups to be used for this defrag level.
The following is an example of how to use levels.
LVL | Index | Time | Days | Volume Groups |
---|---|---|---|---|
1 | 400 | | SUN | VOLSUN |
2 | 400 | | MON | VOLMON |
3 | 400 | | TUE | VOLTUE |
4 | 400 | | WED | VOLWED |
5 | 400 | | THU | VOLTHU |
6 | 400 | | FRI | VOLFRI |
7 | 400 | | SAT | VOLSAT |
8 | 700 | S=0100 E=0500 | SUN MON TUE WED THU FRI SAT | VOLSUN VOLMON VOLTUE VOLWED VOLTHU VOLFRI VOLSAT |
Figure 45. Defragmentation VOL Group Controls with "POPUP" screen
Use the "POPUP" panel to add or delete the VOL Groups.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. Create them by using the Define Volume Groups function (see "Define Volume Groups"). Each group represents a set of volume serial masks or fully qualified volume serial numbers.
The volume group names that you enter must have been properly defined by using the VOL function before they will be accepted on this panel.
DFHSM Delete-by-Age processing is rather draconian in its approach: a single aging factor may be specified after which ALL datasets exceeding this age are deleted.
OS/EM provides finer control by allowing the size of the dataset to be a determining factor, along with various aging factors. OS/EM also allows specified datasets to be exempted from deletion, even if they exceed the aging factor.
In order to use this extended function, you must activate DFHSM's Delete-By-Age processing in the appropriate ARCCMDxx parm member. The value must be set low since HSM Optimizer processing actually determines whether a dataset should be deleted. For example, if you wish to start aging datasets after 1 day, the Delete-By-Age parameter in ARCCMDxx must specify 1 day.
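Purely as a sketch, for a non-SMS primary volume defined with ADDVOL, that activation might take the following form (WORK01 is a placeholder volume serial):
/* Illustrative only - delete unreferenced datasets after 1 day */
ADDVOL WORK01 UNIT(3390) PRIMARY(AUTOMIGRATION DELETEBYAGE(1))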
Figure 46. Delete-By-Age Hold Options
Field entry is as follows:
Enter YES to enable extended processing; NO to disable extended processing.
There are two line commands available, S and G. You must enter an S on any line you change for the changes to be saved.
Enter an S in the CMD field of the line that contains the number of days specified datasets are to be held on Level-0 storage before being eligible for deletion.
OS/EM allows the specification of 22 different aging factors.
Enter Y in the ENABLE area for the specified number of days.
Enter N in the area to deactivate processing for this number of days.
To update the DSN groups, use the G line command. This will cause a POPUP window (see Figure 47) to appear where you may update the groups.
Use these values, along with MAXSIZE, OR/AND connective and INCLUDEd datasets, to determine how long unreferenced datasets will be held before they are deleted.
Enter maximum dataset size to hold in K bytes, blank to suppress MAXSIZE criteria.
Only datasets that are larger than this value will be deleted once they exceed the aging factor.
This value is in K bytes. Thus, a value of 1000 means that datasets less than or equal to 1,024,000 bytes will not be deleted even if they are older than the specified number of days.
If you do not wish to hold datasets based on their size, leave this value blank, or blank it if you have already entered a value.
Enter logical connection between MAXSIZE and INCLUDEd datasets: OR = MAXSIZE datasets or specified datasets; AND = MAXSIZE datasets and specified datasets; blank = suppress OR/AND criteria.
A logical connection between the MAXSIZE specification and INCLUDEd datasets is established by entering OR or AND.
If you enter OR, datasets will be held if they do not exceed the MAXSIZE value, OR if the dataset is in any of the dataset name groups that are in the INCLUDE list.
If you enter AND, datasets will be held if they do not exceed the MAXSIZE value, AND the dataset is in any of the dataset name groups that are in the INCLUDE list.
Figure 47. Delete-By-Age Hold Options with DSN Groups "POP-UP" screen
Use the "POP-UP" panel to add or delete the DSN Groups.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. Create them by using the Define Dataset Name Groups function (see "Define Dataset Name Groups"). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
DFHSM can be directed to delete datasets that it has backed up, but as with Delete-By-Age processing, it is an all or nothing approach.
OS/EM provides finer control by allowing the size of the dataset to be a determining factor, along with various aging factors. OS/EM also allows specified datasets to be exempted from deletion, even if they exceed the aging factor.
In order to use this extended function, you must activate DFHSM's Delete-If-Backed-Up processing in the appropriate ARCCMDxx parm member.
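Again purely as a sketch for a non-SMS primary volume defined with ADDVOL (WORK01 is a placeholder volume serial):
/* Illustrative only - delete backed-up datasets after 1 day of non-reference */
ADDVOL WORK01 UNIT(3390) PRIMARY(AUTOMIGRATION DELETEIFBACKEDUP(1))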
Figure 48. Delete-If-Backed-UP Hold Options
Field entry is as follows:
Enter YES to enable extended processing; NO to disable extended processing.
Enter an S to select the days specified datasets are to be held on Level-0 storage before being eligible for deletion.
Enter a Y in the ENABLE area to activate hold processing for the specified number of days.
OS/EM allows the specification of 22 different aging factors.
Use these values, along with MAXSIZE, OR/AND connective and INCLUDEd datasets, to determine how long unreferenced datasets will be held before they are deleted.
Enter maximum dataset size to hold in K bytes, blank to suppress MAXSIZE criteria.
Only datasets that are larger than this value will be deleted once they exceed the aging factor.
This value is in K bytes. Thus, a value of 1000 means that datasets less than or equal to 1,024,000 bytes will not be deleted even if they are older than the specified number of days.
If you do not wish to hold datasets based on their size, leave this value blank, or blank it if you have already entered a value.
Enter logical connection between MAXSIZE and INCLUDEd datasets: OR = MAXSIZE datasets or specified datasets; AND = MAXSIZE datasets and specified datasets; blank = suppress OR/AND criteria.
A logical connection between the MAXSIZE specification and INCLUDEd datasets is established by entering OR or AND.
If you enter OR, datasets will be held if they do not exceed the MAXSIZE value, OR if the dataset is in any of the dataset name groups that are in the INCLUDE list.
If you enter AND, datasets will be held if they do not exceed the MAXSIZE value, AND if the dataset is in any of the dataset name groups that are in the INCLUDE list.
If you wish to INCLUDE a list of datasets which will not be deleted, enter G in the CMD field. When the enter key is pressed, a pop-up window will be displayed where you can specify the group names.
Figure 49. Delete-If-Backed-Up Hold Options with DSN Group "POP-UP" screen
Use the "POP-UP" panel to add or delete the DSN Groups.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. They are created by using the Define Dataset Name Groups function (see "Define Dataset Name Groups" ). Each group represents a set of dataset name masks or fully qualified dataset names.
To add a dataset name group, enter an A in the CMD field and the name of the dataset name group in the group field. Type a description of the group in the description field. Overtype any existing entry - the old entry will not be altered.
To delete an existing entry, enter a D in the CMD field.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
Datasets that are not used on a regular basis, say daily or weekly, should probably be migrated. However, you do not want to exhaust ML1 storage with such datasets. ML2 storage (tape) is provided by DFHSM for this purpose. But DFHSM only migrates from ML0-ML1-ML2, unless you manually issue a DFHSM command to migrate a particular dataset directly to ML2 storage. The Optimizer can automate this process.
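For reference, the manual request that the Optimizer automates is a DFHSM MIGRATE command with the MIGRATIONLEVEL2 keyword, issued from the console or through the TSO HSEND command; the dataset name below is a placeholder:
MIGRATE DATASETNAME(PROD.LARGE.HISTORY) MIGRATIONLEVEL2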
Selecting DIRML2 presents you with the MIGRATION CONTROL:DIR TO ML2 panel (Figure 50). This panel is similar to those already shown except that there are no HOLD days to specify. You ENABLE or DISABLE DIRML2 processing on this panel; specify a MINIMUM dataset size (datasets equal to or greater than this size will be eligible for DIRML2 processing); specify the logical connective between size and the dataset name group list, and whether the dataset name group list is an INCLUDE list or an EXCLUDE list (only one type of list may be created).
Direct migration to Level-2 storage may be controlled by use of this option. A minimum size for such datasets may be established and, via an OR/AND connective, datasets may be either included in or excluded from such processing.
The intent of this option is to move large, infrequently used datasets directly to Level-2 storage (usually tape) freeing Level-1 storage for smaller frequently used datasets.
Figure 50. Migration Control: Direct to ML2
Field entry is as follows:
Enter YES to enable Migration control.
Enter minimum dataset size (in K bytes) to migrate directly to ML2; blank to suppress MINSIZE criteria.
Only datasets that are equal to or larger than this specified size will be eligible for direct migration to Level-2 storage.
If you do not wish to direct datasets to Level-2 storage based on their size, leave this field blank or blank it out if you have already entered a value.
Enter logical connection between MINSIZE and INCLUDED/EXCLUDED datasets.
A logical connection between the MINSIZE specification and the INCLUDEd or EXCLUDEd datasets is established by entering OR/AND.
If you enter OR, datasets will be selected if they are at least MINSIZE OR are in the INCLUDE or EXCLUDE list.
If you enter AND, datasets will be selected only if they are at least MINSIZE AND are also in the INCLUDE or EXCLUDE list.
Enter INCLUDE to include Dataset Name Groups, or EXCLUDE to exclude Dataset Name Groups.
Note: You may have either an INCLUDE list or an EXCLUDE list, but NOT both.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. Create them by using the Define Dataset Name Groups function (see "Define Dataset Name Groups"). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
OS/EM's early batch recall function will cause DFSMSHSM to recall needed datasets while the job waits in the input queue, thus keeping initiators available for jobs which can begin execution immediately.
Note: This control is unique to each JES2 subsystem you have defined. Use Primary Option 6 - Set JES name to control which JES2 subsystem you are updating (see "Set JES2 Name".)
Figure 51. Early Batch Recall Control
The options and their meanings follow:
Disallow Local Recalls? | OS/EM will not issue the recall request on the local system. |
Ignore Failed Recalls? | OS/EM will allow the job to be selected for execution even if an early recall has failed. |
Only Recall First Dataset? | OS/EM will only issue a recall request for the first dataset of the job. |
Send Message to Job Owner? | OS/EM will send a TSO message to the owner of a job that is being held because of pending recalls. |
Send Message to Joblog? | OS/EM will write a message to the joblog showing the datasets recalled. |
Send Message to Console? | OS/EM will write message OS$2HM251 to the console. OS$2HM251 OS/EM INITIATING HRECALL OF dataset. |
Wait for: Only Tape Datasets? | OS/EM will only block execution of the job if the job had datasets migrated to tape. |
Wait for: Only First Recall? | OS/EM will only block execution of the job until the first migrated dataset has been recalled. |
Wait for: Condition Only? | OS/EM will block execution of the job until all migrated datasets in COND=ONLY steps have been recalled. |
Recheck Interval for Generation Data Groups | Specify the time in seconds that OS/EM will wait before rechecking the status of an HRECALL request. Default time is 30 seconds. |
Recheck Interval for Normal Datasets | Specify the time in seconds that OS/EM will wait before rechecking the status of an HRECALL request. Default time is 300 seconds. |
This control is applied to datasets which have been allocated but never opened, which leaves an unknown DSORG causing DFSMSHSM to bypass the dataset during migration and backup processing.
Forcing the DSORG to PS allows DFSMSHSM to process the dataset.
Figure 52. Entry panel for Force DSORG to PS
Enter YES to force the DSORG, or enter NO to leave unopened new datasets alone.
OS/EM's migration control extends standard DFHSM migration control by considering the size of datasets and the number of days a dataset has been resident on Level-0 storage.
In order to use this extended function, you must activate DFHSM's migration processing in the appropriate ARCCMDxx parm member.
Migration from primary storage (referred to as Level-0 or ML0 storage) to secondary storage (referred to as Level-1 or ML1 storage) moves datasets from primary to secondary storage when they exceed the aging criterion. Such datasets are commonly compressed to maximize secondary storage utilization. DFHSM also supports Small-Dataset-Packing (SDSP), which allows small datasets to become records within a single VSAM file which you establish.
Field entry is as follows:
Enter YES to enable extended processing; NO to disable extended processing.
OS/EM allows the specification of 22 different aging factors.
Use these values, along with MAXSIZE, OR/AND connective and INCLUDEd datasets, to determine how long unreferenced datasets will be held before they are migrated.
Enter Y in the ENABLE area to activate hold processing for the specified number of days.
Note: You must use either the S or G line command for any updates to take place.
Enter maximum dataset size to hold in K bytes, blank to suppress MAXSIZE criteria.
Only datasets that are larger than this value will be migrated once they exceed the aging factor.
This value is in K bytes. Thus, a value of 1000 means that datasets less than or equal to 1,024,000 bytes will not be migrated even if they are older than the specified number of days.
If you do not wish to hold datasets based on their size, leave this value blank, or blank it if you have already entered a value.
Enter logical connection between MAXSIZE and INCLUDEd DSN Groups; OR = MAXSIZE datasets or specified DSN Groups; AND = MAXSIZE datasets and specified DSN Groups; blank = suppress OR/AND criteria.
A logical connection between the MAXSIZE specification and INCLUDEd DSN Groups is established by entering OR or AND.
If you enter OR, datasets will be held if they do not exceed the MAXSIZE value, OR if the dataset is in any of the dataset name groups that are in the INCLUDE list. The default is OR.
If you enter AND, datasets will be held if they do not exceed the MAXSIZE value, AND if the dataset is in any of the dataset name groups that are in the INCLUDE list.
Enter INC to include Dataset Name Groups, EXC to exclude Dataset Name Groups.
Note: You may have either an INCLUDE list or an EXCLUDE list, but NOT both.
If you wish to INCLUDE a list of datasets which will not be migrated, enter Y in this field.
Enter N to not have an INCLUDE list.
Figure 54. Migration Hold Options with "POPUP" screen
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. Create them by using the Define Dataset Name Groups function (see "Define Dataset Name Groups"). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
Migration from secondary storage to the final level (referred to as Level-2 or ML2 storage, and most commonly tape) moves datasets when ML1 storage exceeds its defined threshold and room must be made to hold the datasets migrated to ML1 storage from ML0 storage (Primary).
Each of these functions presents a first panel such as the one illustrated by the Delete-by-Age Control panel (Figure 46). The function (DBA, DBU, MIG, ML2) is enabled by specific HOLD day. Each specific HOLD day may be ENABLED by entering a Y in the ENABLE area of the HOLD day you desire (use Y also to change a HOLD day specification), or DISABLED by entering an N in the ENABLE area of the HOLD day selection. You may change the description connected with a particular HOLD day at any time. You might consider using the description to briefly annotate the criteria being used for the HOLD day. The illustration shows HOLD day 3 with such a description.
Note: Hold day 9999 represents a special case. A dataset placed in this HOLD day will never age. The effect is identical to specifying NOMIG for the dataset in your ARCCMDxx member.
Figure 47 shows the panel presented when you enable a particular HOLD day. The title area of the panel will indicate the HOLD day you are currently specifying. If you find that you have chosen the wrong day, CANCEL the panel and you will be returned to the HOLD day selection list.
The action of a particular HOLD day may be qualified by a maximum dataset size to hold, and a list of datasets which should be included with the HOLD day. If you specify none of these options, ALL datasets, not otherwise qualified, will be aged for possible processing. This still extends DFHSM processing by giving you multiple aging factors. However, the best use of a HOLD day is to specify a maximum dataset size and specify a dataset name list.
The maximum dataset size you specify indicates that if a particular dataset is less than or equal to the maximum size, it will not be processed (the size is expressed in K).
Note: To be effective, this number should decrease as the number of days a dataset is held (not deleted or migrated) increases. That is, large datasets should be held for a few days; small datasets can be effectively held for a longer period of time. While there is no requirement that this policy be implemented, holding large datasets beyond when DFHSM processing would normally remove eligible datasets from a volume defeats the purpose of removing such datasets--maximizing available DASD space.
If a particular dataset resolves to the include list, it will not be processed. You can connect the two criteria with the AND/OR connective.
If you specify that a dataset name group INCLUDE list should be part of the HOLD day's processing, you will be presented with the standard DATASET NAME GROUPS panel already presented (Figure 31).
The relationship between these various elements is best demonstrated with an example. Assume the following has been established for the indicated HOLD days:
DAY | Maxsize | Connective | Include |
---|---|---|---|
9999 | 50 | AND | SYSXGRP P3RDINS |
2 | 130 | OR | TESTGRP |
10 | | | DEVTEMP |
15 | 30 | AND | TESTT |
Assume, also, that the following dataset name groups have been previously defined:
GROUP | Datasets, dataset name masks |
---|---|
SYSXGRP | SYSX.+ All datasets that begin with SYSX |
P3RDINS | P3RD.INSTALL.+ All datasets that begin with P3RD.INSTALL |
TESTGRP | TEST.+ All datasets that begin with TEST |
DEVTMP | $DEV%%%.TEMP.+ Datasets that begin with $DEV |
TESTT | TEST.T?????.- Datasets which begin with TEST.T plus any five characters and one other node name. |
PRODW | PROD.WORKS%% sys%%%%.- Datasets which begin with PROD as the first node |
Although the above examples show only one dataset name mask in each group, dataset name groups may contain multiple fully qualified dataset names and/or multiple dataset name masks.
Determining whether a particular dataset should be processed works as follows: a dataset that is 50K or smaller AND resolves to SYSXGRP or P3RDINS is never aged (HOLD day 9999); a dataset that is 130K or smaller OR resolves to TESTGRP is held for 2 days before it becomes eligible; a dataset that resolves to DEVTEMP is held for 10 days regardless of size; and a dataset that is 30K or smaller AND resolves to TESTT is held for 15 days. A dataset that satisfies none of these criteria is processed as soon as DFHSM considers it eligible.
While limited, this example shows the definitions you need to effectively manage DFHSM. Remember, the goal is to ensure that active datasets are always available in a timely manner and to maximize your primary storage utilization.
ML2 usage note: Remember that datasets will not be held if Level-2 storage is not tape. The purpose for keeping datasets from migrating to tape Level-2 storage is to hasten their recall. If Level-2 storage is another DASD device, such a consideration does not apply. Issuing the FREEVOL AGE(0) command will also bypass ML2 hold processing. It is assumed that if you issue this command, the Level-1 volume is to be cleared.
Figure 55. MIG Level-2 Hold Options with "POPUP" screen
Field entry is as follows:
Enter YES to enable extended processing; NO to disable extended processing.
This field must be completed before you will be allowed to leave the panel.
Enter an S in the CMD field to select the hold days for datasets to be held on Level-1 storage before being eligible for migration.
Enter a Y in the ENABLE area to activate hold processing for the specified number of days.
OS/EM allows the specification of 22 different aging factors.
Use these values, along with MAXSIZE, OR/AND connective and INCLUDEd datasets, to determine how long unreferenced datasets will be held before they are migrated.
Enter maximum dataset size to hold in K bytes; blank to suppress MAXSIZE criteria.
Only datasets that are larger than this value will be migrated once they exceed the aging factor.
This value is in K bytes. Thus, a value of 1000 means that datasets less than or equal to 1,024,000 bytes will not be migrated even if they are older than the specified number of days.
Leave this value blank if you do not wish to hold datasets based on their size, or blank it out if you have already entered a value.
Enter logical connection between MAXSIZE and INCLUDEd DSN Groups; OR = MAXSIZE datasets or specified DSN Groups; AND = MAXSIZE datasets and specified DSN Groups; blank = suppress OR/AND criteria.
A logical connection between the MAXSIZE specification and INCLUDEd DSN Groups is established by entering OR or AND.
If you enter OR, datasets will be held if they do not exceed the MAXSIZE value, OR if the dataset is in any of the dataset name groups that are in the INCLUDE list. The default is OR.
If you enter AND, datasets will be held if they do not exceed the MAXSIZE value, AND if the dataset is in any of the dataset name groups that are in the INCLUDE list.
Enter INC to include Dataset Name Groups, EXC to exclude Dataset Name Groups.
Note: You may have either an INCLUDE list or an EXCLUDE list, but NOT both.
If you wish to INCLUDE a list of datasets which will not be migrated, enter G in the CMD field; a pop-up window will open to allow entry of the DSN Groups.
Use the "POPUP" panel to add or delete the DSN Groups.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. They are created by using the Define Dataset Name Groups function (see "Define Dataset Name Groups" ). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
This function allows you to prioritize DFHSM recall and recover requests based on where the request was generated (batch or online) and where the data resides (DASD or tape). You may limit the requests that are prioritized by time of day/day of week, job name/mask, dataset name/mask, or user ID/mask. A request which does not meet any of these selection criteria will receive the specified default priority.
In order to use this extended function, you must activate DFHSM's ARCRPEXT exit processing in the appropriate ARCCMDxx parmlib member.
Figure 56. Prioritize Recall/Recover Requests Menu
The primary menu for this function contains three options. Option 1: 'Prioritize System Level Controls' must always be filled out, as this is where the function is turned on or off. Options 2 and 3 are filled in as needed.
Each of these paths is presented in the following sections:
Prioritize System Level Controls:
Figure 57. Priority System Level Controls
To turn priority controls on or off enter YES or NO in the 'Priority Controls Active' field.
To specify a priority setting for operator or HSM internally generated requests, enter the value in the second field.
Figure 58. Recall Selection Lists
There are 22 selection lists or groups available. For each list you may specify time of day by day of week, job names/masks, dataset names/masks or user ids/masks. Each list has its own priority setting.
Field entry is as follows:
Enter YES to enable RECALL priority processing; NO to disable RECALL processing.
Enter a value from 1 to 100 to specify the priority that requests which do not match one of the 22 selection lists will receive.
For the four selection types, enter the weight which is to be given to each type.
Each active selection list is checked for a matching entry. The weight parameters are added to each matching entry and the list with the highest value will be used to determine the priority given to the request.
The scrollable portion of the panel contains the 22 selection lists. Four line commands are used to access the selection types.
Field entry is as follows:
Two line commands are available: S to update the selection entries or D to delete the selection entries.
No line command is needed to update any other field, simply overtype the field and press enter for the selection list to be updated.
Enter a Y to activate this selection list, or an N to deactivate it. Deactivating the list allows the selection type entries to remain even though they are not used. This allows you to turn the selection list back on at a later time without having to respecify the different type entries.
Enter a value here from 1 to 99.
The batch, online, DASD and tape parameters control the actual priority value assigned to a request. OS/EM determines if the request is from a batch job or an online user, and whether the dataset to be recalled/recovered is currently stored on tape or DASD. It then calculates the priority to be assigned by adding the stated values together.
As an example, if the following values have been specified to OS/EM:
BATCH: 30 TAPE: 40 ONLINE: 40 DASD: 45
If a request is received from a batch job, and HSM has the dataset stored on DASD, the priority assigned to the request would be:
30 + 45 = 75%
While a request from an online user for a dataset which is stored on tape would be:
40 + 40 = 80%
Enter a value from 1 to 99.
Enter a value from 1 to 99.
Enter a value from 1 to 99.
The final four fields on this panel will display either INC or EXC depending on whether you have entered any selection type items.
The selection group panel is a scrollable list of all the selectors needed to determine the jobs/users which should have the appropriate priority.
Figure 59. Selection Group Entry Panel
The selector types for this function are:
MONDAY - SUNDAY | Days of the week are used to allow entry of time values. In the above example the selection group is only applicable on Tuesday between 8AM and 4PM or Friday between 5AM and 9AM. Only one time range per day is permitted. Be sure to enter the time in 24 hour format separating the beginning and ending times with a colon (:). |
JOBNAME | Enter complete jobnames or jobname masks. You may enter as many names or masks as will fit on the line (separated by spaces). If more names/masks are needed, insert another line and use the same selector type (JOBNAME). |
DSNAME | Enter full dataset names or dataset name masks. You may enter as many names or masks as will fit on the line (separated by spaces). If more names/masks are needed, insert another line and use the same selector type (DSNAME). |
USERID | Enter user IDs or user ID masks separated by spaces. You may enter as many IDs/masks as will fit on the line. If more IDs/masks are needed simply insert another blank line and use the same selector type (USERID). |
Figure 60. Recover Selection Lists
There are 22 selection lists or groups available. For each list you may specify time of day by day of week, job names/masks, dataset names/masks or user ids/masks. Each list has its own priority setting.
Field entry is as follows:
Enter YES to enable RECOVER priority processing; NO to disable RECOVER processing.
Enter a value from 1 to 100 to specify the priority that requests which do not match one of the 22 selection lists will receive.
For the four selection types, enter the weight which is to be given to each type.
Each active selection list is checked for a matching entry. The weight parameters are added to each matching entry and the list with the highest value will be used to determine the priority given to the request.
The scrollable portion of the panel contains the 22 selection lists. Four line commands are used to access the selection types.
Field entry is as follows:
Two line commands are available: S to update the selection entries or D to delete the selection entries.
No line command is needed to update any other field, simply overtype the field and press enter for the selection list to be updated.
Enter a Y to activate this selection list, or an N to deactivate it. Deactivating the list allows the selection type entries to remain even though they are not used. This allows you to turn the selection list back on at a later time without having to respecify the different type entries.
Enter a value here from 1 to 99.
The batch, online, DASD and tape parameters control the actual priority value assigned to a request. OS/EM determines if the request is from a batch job or an online user, and whether the dataset to be recalled/recovered is currently stored on tape or DASD. It then calculates the priority to be assigned by adding the stated values together.
As an example, if the following values have been specified to OS/EM:
BATCH: 30 TAPE: 40 ONLINE: 40 DASD: 45
If a request is received from a batch job, and HSM has the dataset stored on DASD, the priority assigned to the request would be:
30 + 45 = 75%
While a request from an online user for a dataset which is stored on tape would be:
40 + 40 = 80%
Enter a value from 1 to 99.
Enter a value from 1 to 99.
Enter a value from 1 to 99.
The final four fields on this panel will display either INC or EXC depending on whether you have entered any selection type items.
The selection group panel is a scrollable list of all the selectors needed to determine the jobs/users which should have the appropriate priority.
Figure 61. Selection Group Entry Panel
The selector types for this function are:
MONDAY - SUNDAY | Days of the week are used to allow entry of time values. In the above example the selection group is only applicable on Tuesday between 8AM and 4PM or Friday between 5AM and 9AM. Only one time range per day is permitted. Be sure to enter the time in 24 hour format separating the beginning and ending times with a colon (:). |
JOBNAME | Enter complete jobnames or jobname masks. You may enter as many names or masks as will fit on the line (separated by spaces). If more names/masks are needed, insert another line and use the same selector type (JOBNAME). |
DSNAME | Enter full dataset names or dataset name masks. You may enter as many names or masks as will fit on the line (separated by spaces). If more names/masks are needed, insert another line and use the same selector type (DSNAME). |
USERID | Enter user IDs or user ID masks separated by spaces. You may enter as many IDs/masks as will fit on the line. If more IDs/masks are needed simply insert another blank line and use the same selector type (USERID). |
The Quick Delete function allows OS/EM to delete migrated files by issuing an HDEL command instead of first recalling them.
The requirements for this option are that the program name is IEFBR14 and the retention status of any files coded is DELETE.
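A minimal sketch of a job step that meets these requirements follows; the dataset name is a placeholder:
//CLEANUP  EXEC PGM=IEFBR14
//* Illustrative sketch - with Quick Delete enabled, OS/EM deletes the
//* migrated dataset directly instead of recalling it first
//DELDD    DD  DSN=PROD.WORK.TEMPFILE,DISP=(OLD,DELETE)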
Figure 62. Entry panel for Quick Delete Control
Enter YES to enable the Quick Delete function or enter NO to disable it.
While everyone acknowledges that files should be blocked for optimum DASD efficiency, the task is rarely done. Sequential files can be automatically reblocked whenever they are recalled or recovered to DASD by DFHSM. The HSM Optimizer supports FULL through EIGHTH track reblocking, plus SYSTEM reblocking if your installation has DFP level 3.0 or higher installed.
As usual, the first field on the reblock panel (Figure 63) allows you to ENABLE or DISABLE this function. Select the reblocking factor of choice by entering an S in the CMD field and typing YES in the ENABLED field.
Reblocking is advantageous when migrating to new, higher capacity DASD devices; and to ensure that DASD utilization is optimal.
Note: If either your programs or JCL contain explicit block sizes, this function will cause job failures because the internal description of the file and your external description of it will no longer match. Specify block sizes in the JCL only when first creating the file.
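The following sketch contrasts the two cases; the dataset names and attributes are placeholders. A block size is coded only on the DD statement that creates the file, while DD statements for existing files omit BLKSIZE so that the current (possibly reblocked) block size of the dataset is used:
//* Illustrative sketch - dataset names and attributes are placeholders
//* New file: block size specified only at creation time
//NEWFILE  DD  DSN=PROD.NEW.MASTER,DISP=(NEW,CATLG),
//             UNIT=3390,SPACE=(CYL,(10,5)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
//* Existing file: BLKSIZE omitted so the reblocked size is honored
//OLDFILE  DD  DSN=PROD.OLD.MASTER,DISP=SHR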
Reblock Control requires that the following DFHSM SETSYS option be present in DFHSM ARCCMDxx member.
SETSYS CONVERSION(REBLOCKTOANY)
Note: Refer to the DFHSM Systems Programmer's Guide for further information and discussion.
Figure 63. Reblock Control Menu
Field entry is as follows:
Each reblocking factor may be Enabled or Disabled. Each reblocking factor may specify a minimum dataset size which a dataset must equal or exceed before it is eligible for reblocking. Finally, you may create an EXCLUDE list of dataset name groups. Datasets resolved to this list will not be reblocked.
Entering YES in this field will enable the OS/EM's extended processing.
Entering NO will disable this function.
SYSTEM
System reblocking is available only on systems that have DFP 3.0 or higher.
FULL
Full track blocking depends on the target device. For those devices with a track size larger than 32760, such as the 3380, the resulting blocksize used will actually be HALF-track blocking. For devices with a track size smaller than 32760, the actual track size will be used.
The actual block size used is determined by the device track size, whether the file contains fixed or variable length records, and DFHSM. The block size will not be adjusted by DFHSM if the file contains variable length records. The block size will become the maximum blocksize for the file.
If the file contains fixed length records, DFHSM will adjust the block size downward until an even number of logical records will fit.
Entering YES in this field will enable OS/EM's extended processing.
Entering NO will disable this function.
If you wish to prevent small datasets from being reblocked, enter a minimum dataset size (in K bytes) to be reblocked. Blank this field to allow datasets of any size to be reblocked.
If you wish to exclude particular datasets from being reblocked by DFHSM, enter G in the CMD field. You will be presented with a pop-up window where you may enter the Dataset Name Groups which will be exempt from reblocking.
Figure 64. Reblock Control - Add or Delete
|
Use the "POPUP" panel to add or delete the DSN Groups.
The area in which the group names are entered is a scrollable area. Normal ISPF commands for scrolling are in effect.
Group names are up to eight characters in length. They are created by using the Define Dataset Name Groups function (see "Define Dataset Name Groups"). Each group represents a set of dataset name masks or fully qualified dataset names.
Note: The dataset group names that you enter must have been properly defined by using the Define Dataset Name Groups option before they can be accepted on this panel.
This option determines whether DFHSM RECALL/RECOVER will proceed according to the DASD allocation rules established with OS/EM's QuickPool option.
If the QuickPool option is in effect, you must enable this option. DFHSM RECALL/RECOVERY might fail if this option is not in effect, especially if you change the allocation rules after DFHSM has migrated or backed up the dataset.
Directed Recall requires that the following DFHSM SETSYS option be present in the DFHSM ARCCMDxx member.
SETSYS RECALL(PRIVATE(UNLIKE))
Note: If the dataset is a DFSMS dataset, this option will not be invoked. If QUICKPOOL allocation is in effect, you must specify DIRECTRECALL; DFHSM RECALL/RECOVERY will most likely fail if this option is not enabled.
Figure 65. Recall/Recover Selection Control
|
Field entry is as follows:
Entering YES in this field will enable OS/EM's extended processing.
Entering NO will disable this function.
The HSM Report System provides reports detailing the performance of the DFHSM component in both non-SMS and DFSMS environments, using the DFHSM SMF Function Statistics Records (FSR), Volume Statistics Records (VSR), and Daily Statistics Records (DSR). A database of the DFHSM SMF records is maintained to provide both daily and historical reporting.
Report Number | Report Name |
REPORT-01 | MIGRATION DETAIL (PRIMARY - ML1) |
REPORT-02 | MIGRATION DELAY SUMMARY (PRIMARY - ML1) |
REPORT-03 | MIGRATION AGE SUMMARY (PRIMARY - ML1) |
REPORT-04 | MIGRATION DETAIL (ML1 - ML2) |
REPORT-05 | MIGRATION DELAY SUMMARY (ML1 - ML2) |
REPORT-06 | MIGRATION AGE SUMMARY (ML1 - ML2) |
REPORT-07 | MIGRATION DETAIL (PRIMARY - ML2) |
REPORT-08 | MIGRATION DELAY SUMMARY (PRIMARY - ML2) |
REPORT-09 | MIGRATION AGE SUMMARY (PRIMARY - ML2) |
REPORT-10 | RECALL DETAIL (ML1 - PRIMARY) |
REPORT-11 | RECALL DELAY SUMMARY (ML1 - PRIMARY) |
REPORT-12 | RECALL AGE SUMMARY (ML1 - PRIMARY) |
REPORT-13 | RECALL DETAIL (ML2 - PRIMARY) |
REPORT-14 | RECALL DELAY SUMMARY (ML2 - PRIMARY) |
REPORT-15 | RECALL AGE SUMMARY (ML2 - PRIMARY) |
REPORT-16 | DFHSM DASD VOLUME SUMMARY |
REPORT-17 | PRIMARY DATASET ACTIVITY REPORT |
REPORT-18 | DFHSM ERROR DETAIL REPORT |
REPORT-19 | DFHSM ERROR SUMMARY REPORT |
REPORT-20 | ACTIVITY SUMMARY |
REPORT-21 | MIGRATED DATASET SUMMARY |
REPORT-22 | DATASET BACKUP SUMMARY |
REPORT-23 | PRIMARY VOLUMES |
REPORT-24 | PRIMARY VOLUME DETAIL |
REPORT-25 | PRIMARY VOLUME DATE REFERENCE DETAIL |
REPORT-26 | MIGRATED DATASET DETAIL (MCDS Sorted by DSN) |
REPORT-27 | BACKED UP DATASET DETAIL (BCDS Sorted by DSN With XREF) |
REPORT-28 | MIGRATED DATASET DETAIL (MCDS Sorted by Date) |
REPORT-29 | BACKED UP DATASET DETAIL (BCDS Sorted by Date with XREF) |
REPORT-30 | BACKED UP DATASET DETAIL (BCDS Sorted by DSN No XREF) |
REPORT-31 | BACKED UP DATASET DETAIL (BCDS Sorted by Date No XREF) |
This report presents a list of all datasets migrated from primary storage to ML1 storage for the requested reporting period.
If you find very large datasets going to ML1, especially if they have low compression ratios, you might want to consider moving these datasets directly to ML2 storage (assuming this is tape in your installation). Such datasets impact ML1 utilization and might result in an ML1 to ML2 migration, which will impact recall times for all migrated datasets.
Figure 66. REPORT-01 MIGRATION DETAIL (Primary - ML1)
HSM OPTIMIZER 5.6 MIGRATION DETAIL (PRIMARY - ML1) PAGE 3 REPORT: 01 FORMAT: 01 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The report contains the following data:
DFSMS Management Class
This report presents a summary, by dataset size, of delays in migrating datasets. Delays usually occur when all defined DFHSM migration tasks are currently busy. Unless the average delay seems overly long, do not be too concerned with the values reported. If the average delays do seem overly long, you might want to consider allowing more concurrent DFHSM migration tasks.
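If the average delays are consistently long, the number of concurrent migration tasks can be raised with a DFHSM SETSYS statement in the ARCCMDxx member. The value shown below is only an illustration (a corresponding MAXRECALLTASKS parameter exists for recall tasks); choose a number appropriate to your DASD and tape configuration and verify the parameter against your DFHSM release documentation:
SETSYS MAXMIGRATIONTASKS(5)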
Figure 67. REPORT-02 MIGRATION DELAY SUMMARY (Primary - ML1)
HSM OPTIMIZER 5.6 MIGRATION DELAY SUMMARY (PRIMARY - ML1) PAGE 2 REPORT: 02 FORMAT: 02 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The migration delay summary report contains the following data:
This report presents a summary, by dataset age, of all datasets migrated during the reporting period.
This report will help you pinpoint problems with your current aging strategy, if any. For example, a report may show 45 requests for datasets that have not been referenced within 50 days. It may also show 2,487 requests for a dataset age of 2. If the total bytes read for the age 50 datasets is fairly large, when compared to the total bytes read for the age 2 datasets, you may be holding such datasets on primary storage for too long a period.
Figure 68. REPORT-03 MIGRATION AGE SUMMARY (Primary - ML1)
HSM OPTIMIZER 5.6 MIGRATION AGE SUMMARY (PRIMARY - ML1) PAGE 3 REPORT: 03 FORMAT: 03 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Migration Age Summary Report contains the following data:
This report presents a listing of all datasets migrated from DFHSM ML1 storage to ML2 storage (this is usually tape in most installations).
Figure 69. REPORT-04 MIGRATION DETAIL (ML1 - ML2)
HSM OPTIMIZER 5.6 MIGRATION DETAIL (ML1 - ML2) PAGE 4 REPORT: 04 FORMAT: 01 01/16/02 - 02/13/02 REPORT TIME: 13:00 DATE: 2/14/02
|
The report contains the following data:
This report presents a summary, by dataset size, of all datasets migrated from DFHSM ML1 storage to ML2 storage and the wait time associated with the migration requests.
If most of your migration activity seems to be concentrated in datasets of small to medium size, you might want to investigate the Migration Detail report, paying particular attention to datasets of small size. Migrating small datasets provides little benefit, especially if they must be recalled within a short period of time.
Figure 70. REPORT-05 MIGRATION DELAY SUMMARY (ML1 - ML2)
HSM OPTIMIZER 5.6 MIGRATION DELAY SUMMARY (ML1 - ML2) PAGE 4 REPORT: 05 FORMAT: 02 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Migration Delay Summary Report contains the following data:
This report presents a summary, by dataset age, of all datasets migrated from DFHSM ML1 storage to ML2 storage.
This report will help you pinpoint problems with your current aging strategy, if any. For example, a report may show 45 requests for datasets that have not been referenced within 50 days. It may also show 2,487 requests for a dataset age of 2. If the total bytes read for the age 50 datasets is fairly large, when compared to the total bytes read for the age 2 datasets, you may be holding such datasets on primary storage for too long a period.
If the bulk of your requests are concentrated at low aging factors, either your current DFHSM aging is very aggressive or your DASD is limited and you must continually migrate. You should also take a close look at your ARCCMDxx member or your ACS parameters and determine whether an excessive number of datasets are being excluded from the migration process. Such datasets limit the amount of primary storage under DFHSM management, and probably cause excessive migration for the remaining datasets.
Figure 71. REPORT-06 MIGRATION AGE SUMMARY (ML1 - ML2)
HSM OPTIMIZER 5.6 MIGRATION AGE SUMMARY (ML1 - ML2) PAGE 5 REPORT: 06 FORMAT: 03 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Migration Age Summary Report contains the following data:
This report presents a listing of all datasets migrated directly from primary storage to ML2 storage (this is usually tape in most installations). It presents the same information shown for migration from primary to ML1 storage. Such migration is usually explicitly requested, since the normal migration path is from primary to ML1 to ML2. Datasets on this report would be likely candidates for the Optimizer's Direct-to-ML2 support.
Figure 72. REPORT-07 MIGRATION DETAIL (PRIMARY - ML2)
HSM OPTIMIZER 5.6 MIGRATION DETAIL (PRIMARY - ML2) PAGE 30 REPORT: 07 FORMAT: 01 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The report contains the following data:
This report presents a summary, by dataset size, of all datasets migrated from DFHSM primary storage to ML2 storage.
Figure 73. REPORT-08 MIGRATION DELAY SUMMARY (PRIMARY - ML2)
HSM OPTIMIZER 5.6 MIGRATION DELAY SUMMARY (PRIMARY - ML2) PAGE 6 REPORT: 08 FORMAT: 02 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The migration delay summary report contains the following data:
This report presents a summary, by dataset age, of all datasets migrated from DFHSM primary storage to ML2 storage.
This report will help you pinpoint problems with your current aging strategy, if any. For example, a report may show 45 requests for datasets that have not been referenced within 50 days. It may also show 2,487 requests for a dataset age of 2. If the total bytes read for the age 50 datasets is fairly large, when compared to the total bytes read for the age 2 datasets, you may be holding such datasets on primary storage for too long a period.
If the bulk of your requests are concentrated at low aging factors, either your current DFHSM aging is very aggressive or your DASD is limited and you must continually migrate. You should also take a close look at your ARCCMDxx member or your ACS parameters and determine whether an excessive number of datasets are being excluded from the migration process. Such datasets limit the amount of primary storage under DFHSM management, and probably cause excessive migration for the remaining datasets.
Figure 74. REPORT-09 MIGRATION AGE SUMMARY (PRIMARY - ML2)
HSM OPTIMIZER 5.6 MIGRATION AGE SUMMARY (PRIMARY - ML2) PAGE 7 REPORT: 09 FORMAT: 03 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Migration Age Summary Report contains the following data:
This report is a listing of all datasets recalled from DFHSM ML1 storage to primary storage for the requested period. It presents the same information shown for migration from primary to ML1 storage.
Again, look for an excessive number of recalls for datasets that were migrated for a short period of time. Datasets that remained migrated for a relatively long period of time might be likely candidates for migration directly to ML2 storage. This will free space on ML1 storage, allowing more frequently referenced datasets to remain on ML1 storage and give better service times for dataset recall.
Figure 75. REPORT-10 RECALL DETAIL (ML1 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL DETAIL (ML1 - PRIMARY) PAGE 33 REPORT: 10 FORMAT: 01 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The Recall Detail Report contains the following data:
This report presents a summary, by dataset size, of delays in recalling datasets. Delays usually occur when all defined DFHSM recall tasks are currently busy. Unless the average delay seems overly long, do not be too concerned with the values reported. If the average delays do seem overly long, you might want to consider allowing more concurrent DFHSM recall tasks.
Figure 76. REPORT-11 RECALL DELAY SUMMARY (ML1 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL DELAY SUMMARY (ML1 - PRIMARY) PAGE 8 REPORT: 11 FORMAT: 04 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Recall Delay Summary Report contains the following data:
This report presents a summary, by dataset age, of all datasets recalled from DFHSM ML1 storage to primary storage.
Figure 77. REPORT-12 RECALL AGE SUMMARY (ML1 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL AGE SUMMARY (ML1 - PRIMARY) PAGE 9 REPORT: 12 FORMAT: 03 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The following data is presented:
This report is a listing of all datasets recalled from DFHSM ML2 storage to primary storage for the requested period.
Again, look for an excessive number of recalls for datasets that were migrated for a short period of time. Datasets that remained migrated for a relatively long period of time might be likely candidates for migration directly to ML2 storage. This will free space on ML1 storage, allowing more frequently referenced datasets to remain on ML1 storage and give better service times for dataset recall.
Figure 78. REPORT-13 RECALL DETAIL (ML2 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL DETAIL (ML2 - PRIMARY) PAGE 37 REPORT: 13 FORMAT: 01 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The Recall Detail Report contains the following data:
This report presents a summary, by dataset size, of delays in recalling datasets. Delays usually occur while waiting for a tape mount.
Figure 79. REPORT-14 RECALL DELAY SUMMARY (ML2 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL DELAY SUMMARY (ML2 - PRIMARY) PAGE 10 REPORT: 14 FORMAT: 04 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The Recall Delay Summary Report contains the following data:
This report presents a summary, by dataset age, of all datasets recalled from DFHSM ML2 storage to primary storage.
Figure 80. REPORT-15 RECALL AGE SUMMARY (ML2 - PRIMARY)
HSM OPTIMIZER 5.6 RECALL AGE SUMMARY (ML2 - PRIMARY) PAGE 11 REPORT: 15 FORMAT: 03 02/01/02 - 03/31/02 REPORT TIME: 8:36 DATE: 3/18/02
|
The following data is presented:
The DFHSM Volume Report shows the activity of all the volumes under DFHSM control.
Figure 81. REPORT-16 DFHSM DASD VOLUME SUMMARY
HSM OPTIMIZER 5.6 DFHSM DASD VOLUME SUMMARY PAGE 2 REPORT: 16 FORMAT: 08 02/01/02 - 03/31/02 REPORT TIME: 16:08 DATE: 3/18/02
|
The report contains the following data:
The Primary Dataset Activity Report lists the datasets that are thrashing. If you specify a Data Set movement index, only datasets that exceed that value will be included on the report. The index is calculated as:
DMIndex = (total migrations + total recalls) / 30
Note: Total Migrations and Total Recalls are calculated from thirty days preceding the Report Ending Date.
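For example, a dataset that was migrated 12 times and recalled 18 times in the thirty days preceding the Report Ending Date would have a dataset movement index of (12 + 18) / 30 = 1.0; specifying a Data Set movement index below 1.0 would cause that dataset to appear on the report.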
The minimum retention date for the HSM Optimizer Report Database should be two months, but the recommended retention date is three months, so that the dataset movement index will include enough data to be of value. The report is sorted in descending order by dataset movement index.
Figure 82. REPORT-17 PRIMARY DATASET ACTIVITY REPORT
HSM OPTIMIZER 5.6 PRIMARY DATASET ACTIVITY REPORT PAGE 43 REPORT: 17 FORMAT: 06 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The following data is presented:
The DFHSM Error Detail Report lists all the datasets during the reporting period that failed DFHSM processing for one reason or another. The datasets are listed by DFHSM error and reason code.
The types of errors that require investigation include unsupported dataset errors (the dataset may have been created and never opened, leaving an unknown DSORG), catalog locate errors (probably a dataset that has been uncataloged but not deleted), and any other error that you do not believe should occur. It is probably worth your time to investigate any such error the first time you produce these reports.
Figure 83. REPORT-18 DFHSM ERROR DETAIL REPORT
HSM OPTIMIZER 5.6 DFHSM ERROR DETAIL REPORT PAGE 58 REPORT: 18 FORMAT: 05 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The report contains the following information:
The DFHSM Error Summary Report lists the total number of errors and the number of datasets that DFHSM could not process for one reason or another. The report is summarized by DFHSM error and reason code.
The types of errors that require investigation include unsupported dataset errors (the dataset may have been created and never opened, leaving an unknown DSORG), catalog locate errors (probably a dataset that has been uncataloged but not deleted), and any other error that you do not believe should occur. It is probably worth your time to investigate any such error the first time you produce these reports.
Figure 84. REPORT-19 DFHSM ERROR SUMMARY REPORT
HSM OPTIMIZER 5.6 DFHSM ERROR SUMMARY REPORT PAGE 73 REPORT: 19 FORMAT: 07 02/10/02 - 02/12/02 REPORT TIME: 12:14 DATE: 2/13/02
|
The report contains the following information:
The Activity Summary report shows the summary activity for DFHSM processing for the last 24 hours and for the report period selected by the Beginning Date for Reports and the Ending Date for Reports.
Figure 85. REPORT-20 ACTIVITY SUMMARY
HSM OPTIMIZER 5.6 ACTIVITY SUMMARY PAGE 2 REPORT: 20 FORMAT: 09 02/01/02 - 03/31/02 REPORT TIME: 16:10 DATE: 3/18/02
|
The report contains the following data:
This report lists all migration activity by days aged. It shows summary information for all datasets migrated to ML1 storage, and datasets migrated to ML2 storage.
Figure 86. REPORT-21 MIGRATED DATASET SUMMARY
HSM OPTIMIZER 5.6 MIGRATED DATASET SUMMARY PAGE 2 REPORT: 21 FORMAT: 10 02/11/02 - 02/13/02 REPORT TIME: 9:02 DATE: 2/14/02
|
The report contains:
This report presents a summary, by dataset age, of DFHSM backup activity for the reporting period. Since you may specify that multiple versions of a backed up dataset be retained, summaries are presented for versions 1 through 4, and for retained versions of 5 or greater.
If you have many datasets that have been retained for more than a year or two, you might want to investigate how many of these datasets are still valid. Deleting a dataset does not automatically delete a DFHSM backup copy of the dataset.
Figure 87. REPORT-22 DATASET BACKUP SUMMARY
HSM OPTIMIZER 5.6 DATASET BACKUP SUMMARY PAGE 3 REPORT: 22 FORMAT: 11 02/11/02 - 02/13/02 REPORT TIME: 9:02 DATE: 2/14/02
|
The report contains the following data:
This report presents a list of all DFHSM primary volumes. The list is presented in serial number sequence.
Figure 88. REPORT-23 PRIMARY VOLUMES
HSM OPTIMIZER 5.6 PRIMARY VOLUMES PAGE 19 REPORT: 23 FORMAT: 12 02/11/02 - 02/13/02 REPORT TIME: 9:02 DATE: 2/14/02
|
The data presented is:
If a volume contains DSORGs indicated as ???, you should investigate the volume for datasets that have been allocated but never opened.
You might also consider using the DFHSM compress option for PDS datasets, if such datasets indicate a high percentage of free space. Many PDS datasets are allocated with a large primary allocation so that they will not run out of room. If you use the DFHSM compress option, you can use a primary allocation that will hold the normal contents of the dataset, and specify a secondary allocation to handle expansion. During DFHSM migration volume processing, these datasets will be migrated, then recalled with a new allocation that contains the contents within the primary allocation. This will free any unused space within the dataset. The secondary allocation will still handle expansion (to keep from fragmenting the file, consider using a secondary allocation at least as large as the primary allocation).
If you use the compress option, be sure that you set the number of extents appropriately. MVS systems consider 5 extents to be a primary allocation. Therefore, set the number of extents to at least 6.
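As an illustration only (the dataset name and sizes are hypothetical), such a PDS might be allocated with a modest primary allocation and an equal secondary allocation:
//NEWPDS   DD  DSN=PROD.SOURCE.LIB,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(10,10,50))
Here the primary allocation is 10 cylinders, the secondary allocation is 10 cylinders, and 50 directory blocks are reserved.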
This report presents a list of all DFHSM primary volumes. The list is presented in volume serial number sequence.
Figure 89. REPORT-24 PRIMARY VOLUME DETAIL
HSM OPTIMIZER 5.6 PRIMARY VOLUME DETAIL PAGE 313 REPORT: 24 FORMAT: 13 02/11/02 - 02/13/02 REPORT TIME: 9:02 DATE: 2/14/02
|
The data presented is:
Possible values:
RF | RACF discrete profile |
PR | OS password to read |
PW | OS password to write |
This report presents a list of all datasets on primary volumes in reference date order, from the oldest reference date to the most current. It presents the same data as the Primary Volume Detail report.
If this report indicates a fair number of large datasets on primary volumes with fairly old reference dates, there might be a problem with your ARCCMDxx specifications. Datasets not frequently referenced should not be occupying primary volume space. Such space could probably be put to better use by your installation.
The report format will be the same as the Primary Volume Detail Report.
Figure 90. REPORT-25 PRIMARY VOLUME DATE REFERENCE DETAIL
HSM OPTIMIZER 5.6 PRIMARY VOLUME DATE REFERENCE DETAIL PAGE 273 REPORT: 25 FORMAT: 13 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Possible values:
RF | RACF discrete profile |
PR | OS password to read |
PW | OS password to write |
This report presents a list of all datasets migrated in dataset name order. The information presented is contained in the DFHSM Migration Control Data Set.
Figure 91. REPORT-26 MIGRATED DATASET DETAIL (MCDS Sorted by DSN)
HSM OPTIMIZER 5.6 MIGRATED DATASET DETAIL - DSN SEQUENCE PAGE 466 REPORT: 26 FORMAT: 14 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Disk volume where the dataset resided before migration.
This report presents a list of datasets which have been backed up in dataset name order with a cross-reference showing the current location of the dataset.
Figure 92. REPORT-27 BACKED UP DATASET DETAIL (BCDS Sorted by DSN With XREF)
HSM OPTIMIZER 5.6 DATASET BACKUP DETAIL - DSN SEQUENCE PAGE 186 REPORT: 27 FORMAT: 15 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Disk volume where the dataset resided before it was either backed up or migrated.
This is the cross-reference information. This information is collected by reading the catalog for each dataset. The codes have the following meanings:
D | Dataset resides on disk. |
M | Dataset has been migrated. |
X | Dataset has been deleted. |
This report presents a list of all datasets which have been migrated. The information presented is from the Migration Control Data Set sorted by date migrated.
Figure 93. REPORT-28 MIGRATED DATASET DETAIL (MCDS Sorted by Date)
HSM OPTIMIZER 5.6 MIGRATED DATASET DETAIL - DATE SEQUENCE PAGE 383 REPORT: 28 FORMAT: 14 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Volume where dataset resided before migration.
This report presents a list of all datasets backed up in backup date order. The information presented is from the Backup Control Data Set.
Figure 94. REPORT-29 BACKED UP DATASET DETAIL (BCDS Sorted by Date With XREF)
HSM OPTIMIZER 5.6 DATASET BACKUP DETAIL - DATE SEQUENCE PAGE 98 REPORT: 29 FORMAT: 15 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Disk volume where the dataset resided before it was either backed up or migrated.
This is the cross-reference information. This information is collected by reading the catalog for each dataset. The codes have the following meanings:
D | Dataset resides on disk. |
M | Dataset has been migrated. |
X | Dataset has been deleted. |
This report presents a list of all datasets backed up in dataset name order. The information presented is from the Backup Control Data Set.
Figure 95. REPORT-30 BACKED UP DATASET DETAIL (BCDS Sorted by DSN No XREF)
HSM OPTIMIZER 5.6 BCDS DETAIL - DSN SEQUENCE PAGE 813 REPORT: 30 FORMAT: 16 02/11/02 - 02/13/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Note: This is the DFHSM generation number, not a generation data group level number.
Disk volume where the dataset resided before it was either backed up or migrated.
Figure 96. REPORT-31 Backed Up Dataset Detail By Date
HSM OPTIMIZER 5.6 BCDS DETAIL - DATE SEQUENCE PAGE 528 REPORT: 31 FORMAT: 16 02/11/02 - 02/14/02 REPORT TIME: 9:39 DATE: 2/14/02
|
The data presented is:
Note: This is the DFHSM generation number, not a generation data group level number.
Disk volume where the dataset resided before it was either backed up or migrated.
DFHSM generates many statistics which are kept in SMF records and are not generally reported. Since tuning DFHSM can be critical to its successful operation, we have provided a series of reports which use these statistics and will significantly reduce the tuning effort required for DFSMS, DFHSM, and the HSM Optimizer.
These reports will assist in setting and maintaining your ACS parameters in a DFSMS environment as well.
Reports are generated by selecting 4, HSM Reports, from the OS/EM Extended Functions menu. When you do so, a panel is displayed which presents you with three (3) options.
Figure 97. HSM Optimizer Reports Menu
|
Each of these paths is presented in the following sections:
Produce HSM Reports (see "Produce HSM Reports") | |
Collect SMF Data (see "Collect SMF Data for HSM database") This option is used to load HSM data from a file containing the HSM SMF records into the HSM Report database. The database must already have been created by Option 3. | |
Define/Allocate HSM Optimizer Reports Files (see "Define/Allocate HSM Optimizer Files") This option is used to define a new HSM Optimizer Report database (either VSAM or QSAM) and to define the names of the HSM Migration Control Data Set (MCDS), the Backup Control Data Set (BCDS), and the SMF record type produced by DFHSM that is required for building the HSM database. |
Each of these options results in a batch job being submitted which does the actual work. A panel allowing you to supply an appropriate JOB statement is presented just before the batch job is submitted.
Note: The JOBCARD information is stored in the user's profile.
Figure 98. HSM Optimizer Reports - JCL
|
Field entry is as follows:
Enter or update the information for the JOB statement to be used for the batch job which will be submitted.
Enter or update the SYSPRINT specification that will be used in the batch job which will be submitted.
Report 00 (which contains control card images), report processing messages, sort messages and other utility messages are directed to SYSPRINT. Reports 1 to 32 are written to dynamically allocated DD statements where the ddname is the name of the report.
Note: The print class used for reports 1 to 32 is taken from the SYSPRINT specification.
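The following is only an illustration of the kind of JOB statement and SYSPRINT specification you might supply; the job name, accounting information, and classes are hypothetical and must follow your installation's standards:
//HSMRPTS  JOB (ACCT123),'HSM OPTIMIZER RPTS',CLASS=A,MSGCLASS=X
//SYSPRINT DD  SYSOUT=*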
Once the Define/Allocate (see "Define/Allocate HSM Optimizer Files") and the Collect SMF Data (see "Collect SMF Data for HSM database") steps have been completed, you may request reports. Thirty-one (31) reports are produced in sixteen (16) basic formats.
Reports are requested by selecting 4, HSM Reports, from the OS/EM Extended Functions panel. The HSM Optimizer Reports panel (see Figure 99) lets you set the beginning and ending reporting dates, the beginning and ending dates for the MCDS and BCDS reports as well as the data movement index needed for report-17.
The entries you make determine the reports that are generated.
Figure 99. HSM Optimizer Reports - Select Reports
|
Field entry is as follows:
The Report Beginning/Ending dates can be specified as MM/DD/YY. Alternatively, by using the HSM Optimizer Reports' Dynamic Date specification, you can have the HSM Optimizer Reports select the records included on the report based on the current run date and a 'DELTA' from that date.
If the Beginning Date for Reports is specified as '*M' and nothing is entered in the Ending Date for Reports, only records that match the current processing month will be included in the reports. Specifying '*M-1' in the Beginning Date for Reports includes all records from the month prior to the current processing date. Valid values are '*M' through '*M-999' for both Beginning and Ending dates.
If the Beginning Date for Reports is specified as '*D-1', the reports will include all records from the day prior to the current processing date. Valid values are '*D' through '*D-999' for both Beginning and Ending Report Dates.
If the Report Beginning/Ending dates are specified as MM/DD/YY, the months, days, and years are coded as two-digit numbers.
If neither a beginning nor an ending date is specified, the report will contain only the last seven (7) days.
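For example, entering the following values (shown here only as an illustration of the Dynamic Date specification) would be expected to include all records from the prior month through the current processing month:
Beginning Date for Reports : *M-1
Ending Date for Reports    : *M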
By specifying a Dataset Movement Index, only datasets that exceed that value will be included in the report. See "Report-17 PRIMARY DATASET ACTIVITY REPORT".
The bottom portion of the panel is scrollable. It lists the 31 available reports in numerical sequence. This information is saved in the OS/EM ISPTLIB library and so is available to anyone selecting reports.
There are two available fields in this section of the panel, SEL and Selected. The SEL field acts as a toggle: if the report has not been selected, placing an S in the SEL column changes the Selected field to YES; if the report has already been selected, placing an S in the SEL column changes the Selected field to blanks.
Note: The RPTRPT member provided in the OS/EM SAMPLIB provides sample JCL for the create HSM Optimizer Reports function and is to be used as an example to create any batch jobs that you may require.
The second step in report generation is to collect the appropriate SMF data. Three fields are required on the panel (see Figure 100): the HSM Database Retention Date, the HSM Report Database name and the SMF Input File Name.
Figure 100. HSM Optimizer Reports Collect SMF Data
|
Field entry is as follows:
The Retention Date can be specified as MM/DD/YY. Alternatively, by using the HSM Optimizer Reports' Dynamic Date specification, you can have the records retained in the HSM Optimizer Reports database determined from the current run date and a 'DELTA' from that date.
If the Retention Date is specified as '*M', only records that match the current processing month will be retained in the HSM Optimizer Reports database. Specifying '*M-1' retains the current and prior month's data. Valid values are '*M' through '*M-999' for the Retention Date.
If the Retention Date is specified as '*D-1', the HSM Optimizer Reports database will retain records from the day prior to the current processing date. Valid values are '*D' through '*D-999' for the Retention Date.
If the Retention Date is specified as MM/DD/YY, the months, days, and years are coded as two-digit numbers.
Note: OS/EM recommends that the Retention Date be specified as at least '*M-2'. This retains the current month plus the two previous months in the HSM Optimizer Reports database, which is required for certain reports (for example, the Primary Dataset Activity Report) to function properly.
If a Retention Date is not specified, the database will contain all records.
Enter the name of the database created by "Define/Allocate HSM Optimizer Files".
The other required information is the name of the SMF file where your installation collects SMF data. This file must have a format produced by the IBM IFASMFDP program.
Note: The IFASMFDP member provided in the OS/EM SAMPLIB provides sample JCL for the dump SMF files function and is to be used as an example to create any batch jobs that you may require.
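The SAMPLIB member remains the authoritative example; the following is only a generic sketch of an IFASMFDP dump step with hypothetical dataset names, whose output dataset would then be named as the SMF Input File on this panel:
//SMFDUMP  EXEC PGM=IFASMFDP
//* DUMPIN is the SMF source dataset; DUMPOUT receives the dumped records
//DUMPIN   DD  DISP=SHR,DSN=SYS1.MAN1
//DUMPOUT  DD  DSN=SMF.DAILY.DUMP,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(50,10)),
//             DCB=(RECFM=VBS,LRECL=32760,BLKSIZE=4096)
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(000:255))
/*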
The end result of using this panel will be a batch job which will read the specified SMF file and populate the HSM Optimizer Reports database with the requested records. Each time you use the panel, more data is collected, up to the specified retention. For a QSAM database, the effective retention is the number of generations you specified when defining the HSM Optimizer Reports database. For a VSAM database, records are added and updated, as necessary, to keep the database within the desired retention period. This will mean that the database should be reorganized when a catalog listing shows an excessive number of CI and CA splits.
Note: The RPTCOLL member provided in the OS/EM SAMPLIB provides sample JCL for the collect SMF data function and is to be used as an example to create any batch jobs that you may require.
The HSM Optimizer files must be defined and allocated before any reports can be produced. The panel (see Figure 101) lets you specify these datasets.
Figure 101. HSM Optimizer Reports - Define/Allocate Files
|
Field entry is as follows:
Enter Y to submit a JOB that creates a VSAM cluster or defines the generation data group (GDG) base for a QSAM file.
Enter N to not submit a JOB.
Enter a V or Q to define a VSAM cluster or QSAM HSM Report Database.
The HSM Optimizer database will be a QSAM or VSAM dataset. If you choose QSAM, you must specify:
If you choose VSAM as the database organization, you must specify:
The default number of records for primary and secondary allocation should be adequate for most installations.
Note: You must also specify the names of the DFHSM MCDS and BCDS datasets used in your installation. Information used in some of these reports is gathered from these datasets.
Enter the DFHSM SMF record number.
Note: Refer to DFHSM ARCCMDxx member or your DFHSM Storage Administration Reference to determine the SMF record number selected for your installation.
Enter the volser of the volume where the VSAM database is to be allocated.
Enter the unit type where the HSM Report Database will be allocated.
Enter the number of generations required for a QSAM database.
Enter the primary allocation specified in number of records.
Enter the secondary allocation specified in number of records.
Enter a dataset name for the HSM Optimizer Report Database.
Enter the DFHSM Migration Control Data Set (MCDS) dataset name.
Enter the DFHSM Backup Control Data Set (BCDS) dataset name.
Note: If either your MCDS or BCDS datasets are split, enter YES in the UPDATE SPLIT DSNs field. This will present you with another panel to enter the dataset names.
Enter the name of a Model DSCB if you are allocating a QSAM HSM Optimizer Report database.
Once all the fields on the panel are complete to your satisfaction, ensure the SUBMIT Job for Initialization field is set to Y. The submitted job will either define the GDG Base Definition, or define a VSAM file.
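As an illustration only (the database name and limit are hypothetical), a GDG base definition for a QSAM database has the following general form; a VSAM database would be defined with an IDCAMS DEFINE CLUSTER instead:
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE GENERATIONDATAGROUP ( -
         NAME(OSEM.HSM.RPTDB)  -
         LIMIT(30)             -
         SCRATCH )
/*
In this sketch the LIMIT value corresponds to the number of generations entered on the panel.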
Note: The RPTALLOC member provided in the OS/EM SAMPLIB provides sample JCL for the define HSM Optimizer Reports files function and is to be used as an example to create any batch jobs that you may require.
The ISPF File Prefix Controls allow you to specify a prefix for ISPF log, list, and temporary dataset allocations.
Figure 102. ISPF File Prefix Controls
|
Field entry is as follows:
Enter the 1 to 8 character prefix to use in the LOG file allocation.
Enter the 1 to 8 character prefix to use in LIST file allocations.
Enter the 1 to 8 character prefix to use in TEMPORARY file allocations.
To disable the prefix for any of the above, simply blank out the field for the file type to be disabled.
System Symbolics may also be entered. If entered, they will be resolved prior to file allocation.
The dataset name constructed will be in the form:
userid.prefix.ISPF-specific-suffix
Where:
The TSO ID of the user
The prefix entered via OS/EM
This is controlled by ISPF based on the type of file being allocated.
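For example, if user USER01 entered the hypothetical prefix ISPFWORK for temporary files, a temporary dataset might be allocated with a name such as USER01.ISPFWORK.SPFTEMP1.CNTL, where the final qualifiers are supplied by ISPF.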
If this option is enabled by executing the change online, it takes effect immediately for all users currently using ISPF and may cause ISPF system errors. For this reason, it is suggested that the option be enabled only during an IPL, when OS/EM processes the INIT members in OSV6.
OS/EM provides a version of ISPEXITS to allow it to dynamically load and delete any of the ISPF Installation Wide Exits. IBM provides a default module of this name, and if you already run ISPF exits you will have your own version. You must ensure that ISPEXITS is not in a STEPLIB in any ISPF logon procedure; otherwise ISPEXITS will be loaded from the STEPLIB and not from the OS/EM load library. Any exit (and associated data area) coded in your ISPEXITS must be defined to OS/EM via the Basic Exit Functions.
In addition, if you have never activated the ISPF Installation Wide Exits, you must enable them either by coding the option on the ISPMTAIL macro or by using the ISPCCONF command. See the ISPF Planning and Customizing manual, section Tailoring ISPF Defaults, for more information.
JCL Controls allow installations to control various JCL parameters using an OS/EM table, or utilize an External Security Manager for checking whether Users have access to a particular resource.
Figure 103. JCL Controls for JES2
|
Each of these paths is presented in the following sections:
Figure 104. Account Number Controls
|
Account Number Controls allows you to control whether a job will be allowed to execute based upon the values entered in the Job accounting field of the job card or the step execute card.
OS/EM supports up to six sub-parameter Job accounting fields (ACCT1 to ACCT6). To set accounting field control attributes, enter a non-blank character next to the desired account field. The account control edit panel will be displayed (see Figure 105).
If you want the account number controls to validate TSO users at logon time, enter YES in the 'Apply Account Number Controls to TSO Users?' field.
Figure 105. JCL Controls: ACCT1
|
Field entry is as follows:
Enter YES to activate accounting field checking.
Enter NO to deactivate accounting field checking.
If you have entered values in the scrollable portion of this panel, you may allow their use, disallow their use, or security check the values.
If all values to be checked have been defined to your external security manager, you may check the security manager to see if their use is allowed.
This field is the controlling keyword for this function. If you enter allow or disallow, you must enter items in the scrollable section of this panel, as those will be the items which are to be allowed or disallowed.
If you have specified allow, then the accounting field from the job being checked will be compared to the listed items. If the value is not found, then the other values field will determine if the value is allowed. See the other values field below for determining the action to be taken if the value is not found.
If you have specified disallow, then the accounting field from the job being checked will be compared to the listed items. If the value is listed, then the account number is disallowed, and the job will receive a JCL error. If the account number is not listed, then the other values field will determine if the value is allowed. See the other values field below for determining the action to be taken.
If you have specified check and you have listed items in the scrollable portion of the panel, then the accounting field will be compared to those items. If a match is found, then the external security manager is checked to determine if the user submitting the job has access to the value listed.
If this value is not defined to your security manager, then the undefined to security field will determine the action to be taken.
If this value is not defined in the list, then the other values field determines if the value is allowed.
If you enter check and do not enter any items in the scrollable portion of the panel, the accounting field is only checked against the external security manager and the undefined to security field will determine the action to be taken.
If you have specified check as the controlling keyword, any values that have not been defined to the external security manager will either be allowed or disallowed based on your entry here.
This entry is ignored if the controlling keyword is allow or disallow.
This field specifies the action to be taken whenever a parameter is not found in the list of values displayed in the scrollable portion of the panel.
Specifying allow means that any parameter which was not found in the list of values will be allowed.
Disallow specifies that the parameter which was not found in the list will be disallowed and the job will fail with a JCL error.
Specifying check will cause the parameter not found in the list to be checked via the external security manager.
Enter either CHAR or NUMERIC to control how the account number is validated. If NUMERIC is specified, then leading zeros are dropped, whereas CHAR specifies that each character must match exactly.
To list specific data, use the scrollable portion of the panel. To insert a blank line, enter I in the CMD field.
To delete a line, enter a D in the CMD field for the line to be deleted.
To update a line, enter an S in the CMD field and overtype the information to be changed.
Enter the accounting information being checked.
The command authorization is done by using classname FACILITY for RACF and CA-ACF2, and classname IBMFAC or DATASET for CA-TOPSECRET. The resource name is JCL.cmd.data, where 'cmd' is ACCT1, ..., ACCT6, and 'data' is the accounting information being checked.
Note: See Step 6 of the Installation instructions in the OS/EM Reference Manual for more information on defining external security.
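As an illustration only for RACF (the account value 12345 and group PAYGRP are hypothetical), defining and permitting an ACCT1 value for use with the check option might look like the following; consult your security administrator for the equivalent CA-ACF2 or CA-TOPSECRET definitions:
RDEFINE FACILITY JCL.ACCT1.12345 UACC(NONE)
PERMIT JCL.ACCT1.12345 CLASS(FACILITY) ID(PAYGRP) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH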
Enter any descriptive text (optional).
This control will cause OS/EM to convert any EZ-Proclib(R) statements to normal IBM JCLLIB statements.
Figure 106. Convert EZ-Proclib(R) to JCLLIB
|
To enable conversion, enter YES. To leave these statements alone, enter NO.
Any //PROCLIB DD statements, including concatenations, are commented out and replaced by //PROCLIB JCLLIB ORDER= statements.
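For example (the library names are hypothetical), a procedure library statement such as:
//PROCLIB  DD  DSN=PROD.PROCLIB,DISP=SHR
//         DD  DSN=TEST.PROCLIB,DISP=SHR
would be commented out and replaced with a JCLLIB statement of this general form:
//         JCLLIB ORDER=(PROD.PROCLIB,TEST.PROCLIB)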
Wherever possible, OS/EM optional functions are applied on a job class basis. This means that your installation should have a well-defined set of rules for job class usage. Any rules you develop should account for the following:
Figure 107. Jobclass/Jobname Controls
|
This function uses your external security package (RACF, CA-ACF2, or CA-TOPSECRET) to verify a user's ability to use a particular jobclass at submission time, execution time, or both.
|
Field entry is as follows:
Entering YES specifies that a check be done to ensure that the user is authorized to submit a job in the jobclass used.
Entering NO disables this option.
Entering NORMAL means that RACF will do normal logging for this check. Entering NONE means that no RACF logging will be requested.
Entering YES specifies that a check is done when the job is selected for execution to be sure that the user is authorized to execute a job in the jobclass used.
Entering NO disables this option.
Same as 2 above.
The command authorization is done by using classname FACILITY for RACF and CA-ACF2, and classname IBMFAC or DATASET for CA-TOPSECRET. The resource name is JOBCLASS.x where 'x' is the desired jobclass. Each jobclass must be properly defined to your security system.
Note: Jobclass resources that are not defined to the External Security system are always allowed.
This function allows you to control which jobs can execute within a given jobclass. Using job name masks, you can specify that only jobs which match a mask can execute and/or exclude jobs which match a mask.
When you select Job Name Checking Controls, you are presented a scrollable list of job classes.
Figure 109. Job Name Checking Controls
|
Field entry is as follows:
To activate a class, enter an S in the CMD field for that class, and enter YES in the Controls Active column.
Entering YES in this field will enable OS/EM's extended processing.
Entering NO in this field will disable OS/EM's extended processing.
This field must be completed before you will be allowed to leave this panel.
To create a list of INCLUDE job masks, enter an I in the CMD field and press enter. You will be presented with a POPUP panel showing all of the masks previously entered.
Jobs matching an include mask are permitted to execute in the class specified.
Connective between include masks and exclude masks. If you specify AND then the job being submitted must match an include mask and must not match an exclude mask. If you specify OR then the job being submitted may either match an include mask or not match an exclude mask.
To create a list of EXCLUDE job masks, enter an E in the CMD field and press enter. You will be presented with a POPUP panel showing all of the masks previously entered.
Jobs matching an exclude mask are not permitted to execute in the specified class.
Figure 110. Job Name Checking Controls with "POPUP" screen
|
Field entry is as follows:
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
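For example (the job names are hypothetical), the mask PAY%%&- would match job names such as PAY12BKP and PAY01AX, because the two percent signs match the numeric characters, the ampersand matches a single alpha character, and the dash matches any remaining characters; it would not match PAYROLL, because the fourth and fifth characters are not numeric.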
Figure 111. Other JCL Controls
|
When a selection is made, the next panel will reflect your choice and the verbiage on the panel will also reflect your choice. In Figure 112, the choice was number 3 and the verbiage reflects DDNAMES. The same panel will appear for the other choices, but the verbiage will be different.
The following parameters are available:
Parm | Description |
Type of address space required When using the CHECK option the resource name to be checked is 'JCL.ADDRSPC.data' where data is the ADDRSPC to be checked. Each ADDRSPC that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
DFSMS data class When using the CHECK option the resource name to be checked is 'JCL.DATACLASS.class' where class is the DATACLASS to be checked. Each DATACLASS that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Data definition names When using the CHECK option the resource name to be checked is 'JCL.DDNAMES.name' where name is the DDNAMES to be checked. Each DDNAMES that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Dispatching priority When using the CHECK option the resource name to be checked is 'JCL.DPRTY.data' where data is the DPRTY to be checked. Each DPRTY that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Expiration Date When using the CHECK option the resource name to be checked is 'JCL.EXPDT.date' where date is the EXPDT to be checked. Each EXPDT that you want to check must be defined to your security system with appropriate access defined. Read access is required. Note: The EXPDT is normalized to the date format YYYY/DDD before checking. | |
DFSMS management class When using the CHECK option the resource name to be checked is 'JCL.MGMTCLASS.class' where class is the MGMTCLASS to be checked. Each MGMTCLASS that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Performance group When using the CHECK option the resource name to be checked is 'JCL.PERFORM.number' where number is the PERFORM to be checked. Each PERFORM that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Request RACF protection of a dataset When using the CHECK option the resource name to be checked is 'JCL.PROTECT.yes' where yes is the PROTECT to be checked. Each PROTECT that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Job selection priority When using the CHECK option the resource name to be checked is 'JCL.PRTY.number' where number is the PRTY to be checked. Each PRTY that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Retention Period When using the CHECK option the resource name to be checked is 'JCL.RETPD.number' where number is the RETPD to be checked. Each RETPD that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
DFSMS storage class When using the CHECK option the resource name to be checked is 'JCL.STORCLASS.class' where class is the STORCLASS to be checked. Each STORCLASS that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Subsystem which will process a dataset When using the CHECK option the resource name to be checked is 'JCL.SUBSYS.name' where name is the SUBSYS to be checked. Each SUBSYS that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
Specify maximum step execution time When using the CHECK option the resource name to be checked is 'JCL.TIME.time parameter' where time parameter is the TIME to be checked. Each TIME that you want to check must be defined to your security system with appropriate access defined. Read access is required. Note: The only available time parameters are: MAXIMUM, 1440, NOLIMIT, and HIGH. The value HIGH is compared to the time parameter coded in JES PARMS for the executing job class. If the value coded in the JCL for a step is greater than that specified by JES2 the user needs READ access for CHECK. | |
Storage device type When using the CHECK option the resource name to be checked is 'JCL.UNIT.esoteric name' where esoteric name is the UNIT to be checked. Each UNIT that you want to check must be defined to your security system with appropriate access defined. Read access is required. | |
OUTPUT statement for AFP libraries. When using the CHECK option the resource name to be checked is 'JCL.USERLIB.library name' where library name is the USERLIB to be checked. Each USERLIB that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
The following figure is an example of the screen that will appear:
Figure 112. JCL Controls: DDNAME
|
In the discussion which follows, the word parameter is used instead of the actual parameter name.
Field entry is as follows:
Enter YES to activate DDNAME checking.
Enter NO to deactivate DDNAME field checking.
If you have entered values in the scrollable portion of this panel, you may allow their use, disallow their use, or security check the values.
If the values have instead been defined to your security system, you may check the external security manager to see if their use is allowed.
This field is the controlling keyword for this function. If you enter allow or disallow, you must enter items in the scrollable section of this panel, as those will be the items which are to be allowed or disallowed.
If you have specified allow, then the JCL parameter from the job being checked will be compared to the listed items. If the value is not found, then the other values field will determine if the value is allowed. See the OTHER values field below for determining the action to be taken if the value is not found.
If you have specified disallow, then the JCL parameter from the job being checked will be compared to the listed items. If the value is listed, then the parameter is disallowed, and the job will receive a JCL error. If the parameter is not listed, then the other values field will determine if the value is allowed. See the OTHER values field below for determining the action to be taken if the parameter is not found.
If you have specified check, and you have listed items in the scrollable portion of the panel, then the JCL parameter will be compared to those items. If a match is found, then the external security manager is checked to determine if the User submitting the job has access to the value listed.
If this value is not defined to your security system, then the undefined to security field will determine the action to be taken.
If this value is not defined in the list, then the other values field determines if the value is allowed.
If you enter check and do not enter any items in the scrollable portion of the panel, the JCL parameter is only checked against the external security manager and the UNDEFINED to security field will determine the action to be taken.
If you have specified check as the controlling keyword, any values which have not been defined to the external security manager will either be allowed or disallowed based on your entry here.
This entry is ignored if the controlling keyword is ALLOW or DISALLOW.
This field specifies the action to be taken whenever a parameter is not found in the list of values displayed in the scrollable portion of the panel.
Specifying allow means that any parameter which was not found in the list of values will be allowed.
Disallow specifies that the parameter which was not found in the list will be disallowed and the job will fail with a JCL error.
Specifying check will cause the parameter not found in the list to be checked via the external security manager.
To list specific data, use the scrollable portion of the panel. To insert a blank line, enter I in the CMD field.
To delete a line, enter a D in the CMD field for the line to be deleted.
To update a line, enter an S in the CMD field and overtype the information to be changed.
Enter the DDNAME to be checked.
The DDNAME checking above will call the External Security manager to determine if the user submitting the job is allowed to use the STEPCAT or JOBCAT DDNAME.
The class name to be checked is FACILITY for RACF and CA-ACF2, and the class name IBMFAC or DATASET for CA-TOPSECRET. The class name to be checked is specified in the OS/EM IEFSSN member of SYS1.PARMLIB during OS/EM system installation.
When using the CHECK option the resource name to be checked is 'JCL.DDNAME.data' where data is the DDNAME to be checked. Each DDNAME that you want to check must be defined to your security system with appropriate access defined. Read access is required.
Description: The STEPLIB option allows you to modify or replace existing STEPLIB DD statements or add a new STEPLIB DD based on job class, job name, user ID, step name or program name. You may optionally fail the job if any of the specified libraries for the STEPLIB are unavailable, or you may allow the job to continue without changing the existing STEPLIBs.
Figure 113. STEPLIB Controls Menu
|
This menu contains entries to update system level control information and the selection lists. Both entries must be selected to initially set up STEPLIB Controls.
Figure 114. System Level Controls Panel
|
Use this panel to enter non-specific information about STEPLIB Controls. The information on this panel is required before any controls dealing with specific STEPLIBs becomes effective.
Field entry is as follows:
Enter YES to turn on STEPLIBs, or NO to disable STEPLIBs on the current system.
The following Wait Options refer to the datasets specified as STEPLIB entities.
Enter YES to allow the system to put the job into a wait state until the dataset is available. Enter NO to disable the wait. If NO is entered, the STEPLIB will not be processed and the FAIL JOB option specified on the selection lists panel will take effect.
Enter YES to allow the system to put the job into a wait state until the volume is available. Enter NO to disable the wait. If NO is entered, the STEPLIB will not be processed and the FAIL JOB option specified on the selection lists panel will take effect.
Enter YES to allow the system to put the job into a wait state until the unit is available. Enter NO to disable the wait. If NO is entered, the STEPLIB will not be processed and the FAIL JOB option specified on the selection lists panel will take effect.
Enter YES to allow the DFHSM recall to be processed. Enter NO to disable DFHSM recalls. If you disable recalls, the STEPLIB will not be processed and the FAIL JOB option specified on the selection lists panel will take effect.
Selection Lists: Up to 32 different sets of libraries and the selection criteria needed for each to be selected may be specified on this panel. You also state, by selection group, whether the job will fail (via a JCL error) if a specified library is unavailable, or whether the job will continue without the STEPLIB being modified, replaced, or added.
Figure 115. Selection Lists Panel
|
Field entry is as follows:
To activate a selection group, use the tab key to place the cursor in the Active column and enter a Y.
Tab to the Placement field and enter where you want the dynamic STEPLIB libraries placed. Valid entries are:
Completely replace any existing STEPLIB DDs already in the JCL.
Add the dynamic libraries before any libraries specified in the JCL.
Add the dynamic libraries after any libraries specified in the JCL.
Tab to the Fail Job field and enter either YES or NO. This value determines whether the job will fail with a JCL error if any of the dynamic STEPLIB libraries cannot be allocated. If it is acceptable for the job to execute without the dynamic libraries, enter NO here.
You may also enter an optional description line to describe what these dynamic libraries are for.
There are 2 line commands available. Use S to modify the selection lists or D to delete the selection lists.
The Select line command allows you to specify the dynamic library names and enter the selection criteria based on job class, job name, user ID, step name or program name. Masks are acceptable for all but the job class and the STEPLIB dataset name.
Figure 116. Selector Entry Panel
|
This panel allows you to specify the dataset names to be used for the STEPLIB as well as the selector types used to match this selector group to the job being checked.
Each selector type may be either an INCLUDE or EXCLUDE list. Types marked as include lists must match the attributes of the job being checked. Types marked as exclude lists must not match the job's attributes.
If you fail to specify either include or exclude, the list will be forced to an include type list.
In the scrollable portion of the panel, you enter the selector types and the names or masks for that type. With the exception of the STEPLIB type, you may enter as many names/masks as will fit on the line. If you have more entries than will fit on a line, simply insert a blank line, assign it the same selector type, and continue entering names/masks. Selector types are JOBCLASS, JOBNAME, PGMNAME, STEPNAME and USERID.
The JOBCLASS entries may be listed individually or as a range. A range is entered as two classes separated by a colon (:), i.e. F:K.
The STEPLIB selector type is actually the dataset name of the library or libraries which will be used for the STEPLIB. A volume may be specified for datasets which are not cataloged (non-SMS) or to force use of a dataset on a different volume than that which is contained in the catalog. Only one dataset name may be specified per entry. If multiple dataset names are entered they will be concatenated in the order entered.
Any library entered here must currently exist. Standard TSO naming conventions are used, i.e. if prefix is turned on, any name entered without apostrophes will have your TSO ID added as the prefix.
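For illustration only, a selection group might be built along these lines; the class range, program mask and library names below are hypothetical, and the actual panel layout may differ:

INCLUDE  JOBCLASS  A:D
INCLUDE  PGMNAME   PAY-
         STEPLIB   PROD.PAYROLL.LOADLIB
         STEPLIB   PROD.COMMON.LOADLIB

With this group active and a Placement of AFTER, a job in class A through D that executes a program whose name begins with PAY would have the two libraries concatenated, in the order entered, behind any STEPLIB already present in the JCL.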
Figure 117. SYSOUT Parameter Controls
|
When a selection is made, the next panel will reflect your choice and the verbiage on the panel will also reflect your choice. In Figure 118, the choice was number 14 and the verbiage reflects SYSOUT. The same panel will appear for the other choices, but the verbiage will be different.
The following parameters are available:
Parameter | Description |
CHARS | Character arrangement tables. When using the CHECK option the resource name to be checked is 'JCL.CHARS.character set' where character set is the CHARS to be checked. Each CHARS that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
BURST | Use 3800 burster-trimmer-stacker. When using the CHECK option the resource name to be checked is 'JCL.BURST.YES/NO' where YES/NO is the BURST to be checked. Each BURST that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
COPIES | Number of copies of hardcopy output. When using the CHECK option the resource name to be checked is 'JCL.COPIES.number' where number is the COPIES to be checked. Each COPIES that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
DEST | Destination for a SYSOUT dataset. When using the CHECK option the resource name to be checked is 'JCL.DEST.destination' where destination is the DEST to be checked. Each DEST that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
FCB | Forms control buffer. When using the CHECK option the resource name to be checked is 'JCL.FCB.forms buffer' where forms buffer is the FCB to be checked. Each FCB that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
FLASH | Forms overlay. When using the CHECK option the resource name to be checked is 'JCL.FLASH.flash image' where flash image is the FLASH to be checked. Each FLASH that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
FORM | Form name. When using the CHECK option the resource name to be checked is 'JCL.FORM.form name' where form name is the FORM to be checked. Each FORM that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
FORMDEF | AFP form definition name. When using the CHECK option the resource name to be checked is 'JCL.FORMDEF.formdef name' where formdef name is the FORMDEF to be checked. Each FORMDEF that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
MODIFY | Copy modification module. When using the CHECK option the resource name to be checked is 'JCL.MODIFY.module name' where module name is the MODIFY to be checked. Each MODIFY that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
MSGCLASS | SYSOUT class for the job log, allocation messages and JCL image. When using the CHECK option the resource name to be checked is 'JCL.MSGCLASS.SYSOUT class' where SYSOUT class is the MSGCLASS to be checked. Each MSGCLASS that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
OUTPRTY | OUTPUT print priority. When using the CHECK option the resource name to be checked is 'JCL.OPRTY.number' where number is the OUTPRTY to be checked. Each OUTPRTY that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
PAGEDEF | AFP page definition. When using the CHECK option the resource name to be checked is 'JCL.PAGEDEF.name' where name is the PAGEDEF to be checked. Each PAGEDEF that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
PRMODE | PSF process mode. When using the CHECK option the resource name to be checked is 'JCL.PRMODE.name' where name is the PRMODE to be checked. Each PRMODE that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
SYSOUT | Output print class for generated output. When using the CHECK option the resource name to be checked is 'JCL.SYSOUT.class' where class is the SYSOUT to be checked. Each SYSOUT that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
UCS | Universal character set. When using the CHECK option the resource name to be checked is 'JCL.UCS.character set' where character set is the UCS to be checked. Each UCS that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
WRITER | External writer name. When using the CHECK option the resource name to be checked is 'JCL.WRITER.writer name' where writer name is the WRITER to be checked. Each WRITER that you want to check must be defined to your security system with appropriate access defined. Read access is required. |
The following figure is an example of the screen that will appear:
Figure 118. JCL Controls: SYSOUT
|
In the discussion which follows, the word parameter is used instead of the actual parameter name.
Field entry is as follows:
Enter YES to activate SYSOUT checking.
Enter NO to deactivate SYSOUT field checking.
If you have entered values in the scrollable portion of this panel you may allow their use, disallow use, or security check the value.
If the values have instead been defined to your security system, you may check the external security manager to see if their use is allowed.
This field is the controlling keyword for this function. If you enter allow or disallow, you must enter items in the scrollable section of this panel, as those will be the items which are to be allowed or disallowed.
If you have specified allow, then the JCL parameter from the job being checked will be compared to the listed items. If the value is not found, then the other values field will determine if the value is allowed. See the OTHER values field below for determining the action to be taken if the value is not found.
If you have specified disallow, then the JCL parameter from the job being checked will be compared to the listed items. If the value is listed, then the parameter is disallowed, and the job will receive a JCL error. If the parameter is not listed, then the other values field will determine if the value is allowed. See the OTHER values field below for determining the action to be taken if the parameter is not found.
If you have specified check, and you have listed items in the scrollable portion of the panel, then the JCL parameter will be compared to those items. If a match is found, then the external security manager is checked to determine if the User submitting the job has access to the value listed.
If this value is not defined to your security system, then the undefined to security field will determine the action to be taken.
If this value is not defined in the list, then the other values field determines if the value is allowed.
If you enter check and do not enter any items in the scrollable portion of the panel, the JCL parameter is only checked against the external security manager and the UNDEFINED to security field will determine the action to be taken.
If you have specified check as the controlling keyword, any values which have not been defined to the external security manager will either be allowed or disallowed based on your entry here.
This entry is ignored if the controlling keyword is allow or disallow.
This field specifies the action to be taken whenever a parameter is not found in the list of values displayed in the scrollable portion of the panel.
Specifying allow means that any parameter which was not found in the list of values will be allowed.
Disallow specifies that the parameter which was not found in the list will be disallowed and the job will fail with a JCL error.
Specifying check will cause the parameter not found in the list to be checked via the external security manager.
To list specific data, use the scrollable portion of the panel. To insert a blank line, enter I in the CMD field.
To delete a line, enter a D in the CMD field for the line to be deleted.
To update a line, enter an S in the CMD field and overtype the information to be changed.
Enter the SYSOUT to be checked.
The class name to be checked is FACILITY for RACF and CA-ACF2, and the class name IBMFAC or DATASET for CA-TOPSECRET. The class name to be checked is specified in the OS/EM IEFSSN member of SYS1.PARMLIB during OS/EM system installation.
When using the CHECK option the resource name to be checked is 'JCL.SYSOUT.class' where class is the SYSOUT to be checked. Each SYSOUT that you want to check must be defined to your security system with appropriate access defined. Read access is required.
Tape allocation checking occurs as each job step executes. That is, as each tape device is allocated, a count is accumulated and checked against the total limit specified for the job class. Once the total limit is exceeded, the job is cancelled unless operating in WARN mode.
Figure 119. Tape Usage Controls
|
Field entry is as follows:
This field specifies that tape allocation control will be applied according to criteria established with the various CLASS parameters.
This field provides you with a way of observing your tape allocation rules without actually enforcing those rules. WTO messages are produced which will indicate the action OS/EM would have taken if WARN mode were not in effect.
To enable Tape Usage controls for a job class, select the class by placing an S in the CMD field and then overtyping the fields on that line.
Enter YES in the Active column to enable a CLASS.
Enter NO to disable a CLASS.
Each jobclass - A through 9 - can be individually enabled. Enter the total tapes allowed for each type of tape device as well as the total allowed for the step. Each number can be between 0 and 255. The number of 3420, 3480, 3490, 3490-VTS and 3590 devices combined may exceed the total number allowed; however, no single entry may be greater than the value specified for total tapes.
Figure 120. Virtual Storage Controls
|
Field entry is as follows:
Enter YES to enable Region Controls.
Enter NO to disable Region Controls.
Enter YES to enable Region Override Controls.
Enter NO to disable Region Override Controls.
This parameter will allow OS/EM to increase the region size of a job if the job asks for a smaller amount than it is allowed.
Enter YES to enable Region Defaults.
Enter NO to disable Region Defaults.
This allows you to disable the default entries without disturbing the information already entered.
Specify the default values used by OS/EM for storage utilization control. The values are used if none are specified for regions 1 to 32.
With the exception of Hiperspace/Dataspace values, and above the 2 Gigabyte bar, the numeric values are entered in K. Total spaces is entered as the total number of Hiperspaces/Dataspaces allowed within the region. Default size for Hiperspace is in 4K blocks. Total size of Hiperspace is in Megabytes. Refer to MVS Systems Modifications, SMF exit IEFUSI for further information regarding these parameters.
The first value is the amount of storage, below the 16M line, a program will be given to execute in. A negative value may be entered indicating that the amount of storage is to be calculated by subtracting this value from the size of the private area currently available below the 16M line.
The third value is the amount of storage, above the 16M line, a program will be given to execute in.
The second value is the maximum amount of storage, below the 16M line, a program will be allowed to GETMAIN. A negative value may be entered indicating that the amount of storage is to be calculated by subtracting this value from the size of the private area currently available below the 16M line.
The fourth value is the maximum amount of storage, above the 16M line, a program will be allowed to GETMAIN.
The fifth value is the default size of a Hiperspace or Dataspace when it is created in units of 4K blocks.
The sixth value is the total size of storage that may be used for all user key Hiperspaces and Dataspaces in an address space, in units of 1 Megabyte increments.
The seventh value is the total number of Hiperspaces or Dataspaces allowed within an address space.
The eighth value is the amount of storage a user may obtain above the 2 gigabyte bar up to a maximum of 16 exabytes. The value must be entered with the storage type specified as the last character of the amount, i.e. 16G would indicate 16 gigabytes, and 2P would indicate 2 petabytes. Use M for megabytes, G for gigabytes, T for terabytes and P for petabytes.
For space above the 2G bar, you may enter 0M to indicate that NO space above the bar may be used. Entering 0 without a type modifier indicates that the system default will be used.
A 0 can be entered for any one of the values. This will nullify the previous value. Your installation's MVS default for the specific value being nullified will then apply.
The values specified for the region1 through 32 parameters can be confined to specified job classes, job names, or individual programs.
If there are no program/job/class matches, then the default values entered (if any) will be used.
Enter a number between 1 and 9 for each of the selection list types.
If a job matches more than one region control definition, these weight values will be used to determine which definition to use. In other words, if a job matches region 2 because of the job class and matches region 3 because of the program name, and the weight value for job class is 8 while the weight for program name is 2, region 2 will be selected because its weight is higher.
The optional region parameters (1-32) specify storage values that will be applied to specified program names, job names, or job classes.
To enable an individual region, enter a Y in the Active column, and any numeric values which should be different than the default values.
To enter selection criteria enter an S in the command column and press the ENTER key. Another panel will be displayed where you can specify the selectors for this region definition.
Figure 121. Selector Entry Panel
|
This panel allows you to enter the selection criteria for a region group. You may enter the following types of items:
JOBCLASS | The class of the job being evaluated. |
JOBNAME | The jobname of the job being evaluated. |
PGMNAME | A program that will execute in the job being evaluated. |
Each Selector Type may be either an INCLUDE group or an EXCLUDE group. This means that selectors marked as an include group must match the attributes of the job being evaluated. Selectors marked as an exclude group must not match the attributes of the job being evaluated.
Separate the names or masks with a space. You may enter as many names/masks as will fit on the line. To enter more items, simply insert another line and enter the same selector type keyword.
Multiple names/masks within a selector type are considered to be OR conditions. That is, if any of them are matched the condition is satisfied. Specifying multiple selector types, however, is considered to be an AND condition. All selector types must be satisfied in order for the region control to be assigned to the job being evaluated.
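For example (using hypothetical values), a region group defined with the two selector lists below would be assigned only to jobs that run in class A or B (either class satisfies the JOBCLASS list) and that execute a program whose name begins with SORT (both selector types must be satisfied):

INCLUDE  JOBCLASS  A B
INCLUDE  PGMNAME   SORT-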
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
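A few hypothetical masks illustrate the table above:

PAY?????  matches any eight-character name beginning with PAY, such as PAYROLL1.
TSU&&&    matches TSU followed by any three alphabetic characters.
JOB%%     matches JOB followed by any two numeric characters, such as JOB01.
PROD-     matches any name beginning with PROD, regardless of what follows.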
JOB Controls provides the option for Step end statistics, JOB end statistics, TSO logoff statistics, Surrogate password processing, use of certain functions during TSO submit processing, correction of NOT CATALOGED 2 conditions, control number of concurrently running jobs by user or program, and allow SYSOUT extensions.
The Job Controls Menu presents sixteen selections. Each option presents another selection menu, taking you down the path you have chosen.
Several options are activated for a specific JES2 subsystem. These options show the currently selected JES2 name to the right of the menu item. To specify the JES2 subsystem that the control will affect, set the name using the Set JES2 Name and User Fields menu item ("Set JES2 Name").
|
ADD NOTIFY specifies that a NOTIFY parameter is to be inserted on the job statement if it is missing. The insertion is limited to the classes specified.
Figure 123. Add Notify Statement
|
Field entry is as follows:
Enter YES to have OS/EM add a NOTIFY parameter to a job if it is missing, or enter NO to disable this option.
To make the check universal, enter the class as A:9.
To enter a class, type an A in the CMD field and overtype the class.
To delete a previously entered class, type a D in the CMD field of the class to be deleted.
Figure 124. Control JES2 Commands
|
Field entry is as follows:
The command authorization is done by using classname FACILITY for RACF and CA-ACF2, and classname IBMFAC or DATASET for CA-TOPSECRET. The resource name is JES2.$cmd where 'cmd' is the desired JES2 command.
Each defined command must be a single letter, with four exceptions: $VS, $ADD, $TRACE and $DEL.
If the user is not permitted to any JES2.$cmd resources, the user will not be allowed to include JES2 commands in any submitted jobs. READ authority is required for access to the command.
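As a sketch only, assuming RACF with the FACILITY class and a hypothetical operations group OPERGRP, permitting that group to include the $P command in submitted jobs might look like this:

RDEFINE FACILITY JES2.$P UACC(NONE)
PERMIT JES2.$P CLASS(FACILITY) ID(OPERGRP) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH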
Figure 125. Control Operating System Commands
|
Entering YES specifies that a check is done to ensure that the user is authorized to submit jobs that contain operating system commands. The user can be limited to only specified commands, all commands, or can be precluded from submitting jobs with any operating system commands.
Specifying NO for this field disables this option.
Enter NORMAL or NONE to control RACF logging.
The command authorization is done by using classname FACILITY for RACF and CA-ACF2, and classname IBMFAC or DATASET for CA-TOPSECRET. The resource name is COMMAND.cmd where 'cmd' is the desired MVS command.
Each defined command must be in its LONG form; i.e., VARY, not V.
If the user is not permitted to any COMMAND.cmd resources, the user will not be permitted to include operating system commands in any submitted jobs. READ authority is required.
The Dataset Name Conflict Resolution function prevents jobs from being selected for execution until all needed datasets are available. This prevents a job from taking an initiator when it is actually unable to run because datasets are already in use by another job or user. TSO send messages may optionally be issued to operators, the job owner and/or the owner of the dataset.
Figure 126. Dataset Name Conflict Resolution
|
Field entry is as follows:
Enter yes to activate the function or no to turn the function off.
Enter yes to have OS/EM send a message to the console listing the datasets which are unavailable.
Enter yes to have OS/EM send a message to the job's owner notifying them that a dataset needed by their job is unavailable.
Enter yes to have OS/EM send a message to the person who has control of the needed dataset.
The Job Step Notify panel allows you to request that OS/EM send a message at the end of each job step for non-zero return codes.
Field entry is as follows:
Enter YES to have a message sent to the user ID specified on the jobcard.
If you leave the minimum return code field blank or enter zero, a message is sent for any non-zero return code. Specifying a number above zero results in a message being sent for any step with that value or higher.
You may control the number of jobs running either by User ID or Program Name.
Figure 127. Job/Program Limits Menu
|
Select the type of job limiting you want to perform.
The JOB LIMITING function enables the customer to control the number of jobs each user may have executing at a single time. This control may be for a single system, all systems in a multiple access spool environment, or specific system IDs. Additionally, the control can be limited to specific job classes, selected job names, certain user IDs and limited by time of day and day of week. Please note that all jobclasses are checked to determine the number of currently executing jobs unless limited by the SCOPE option.
|
The Job Limiting Controls main entry panel has a fixed area in the top half of the panel where you enter global information about the function.
The bottom half of the panel is scrollable, and stores the information regarding the 32 limit selection groups.
Field entry is as follows:
Limiting Controls Active: ___ (Yes/No)
This field is the first field in the fixed portion of the panel. It controls whether any Job Limiting Controls are active or not. Entering NO here will turn off all Job Limiting Controls without disturbing any of the detail information previously entered.
RACF Resource Name: ______________
To allow a user or group of users to bypass the Job Limiting controls enter the name of the RACF profile you have defined in the General Resource FACILITY class (for RACF and ACF-2) or the IBMFAC class (CA-Topsecret). Users to be allowed to bypass this control must have read authority to this profile.
RACF logging: ______ (Normal/None)
If you want RACF logging, enter NORMAL in this field. To turn off RACF logging, enter NONE.
Limiting Scheme is: (Yes/No/1-9)
  Liberal: ___   Conservative: ___
  or Weight:  Days _   Job Class _   Job Name _   User ID _
When a job being submitted matches more than one selection group, the Scheme controls which group will be used.
If a scheme of LIBERAL is selected, the selection group which allows the most jobs to execute is used. Conversely, a CONSERVATIVE scheme will use the group which allows the least number of jobs to execute.
You may instead elect to give the different include/exclude lists additional weight by specifying a number from 1 to 9 for the list type. Weights are used when a job matches multiple selection lists. The weight of each list is calculated by adding the weight value specified for each matching include/exclude type. This means that if list 1 has a weight of 9 for user ID and a weight of 2 for the job class the total weight for list 1 would be 12 (1 for simply matching, 2 for job class and 9 for user ID = 12). The list having the highest weight would be used.
   Limit             Max Jobs       Max jobs
S  Number   Active   w/other work   init idle   |---- Active Lists Types ----|
_  1        Y        1              2            INC EXC INC EXC  ID
In the above example which shows group 1, you can see that the group is active, only 1 job may execute at a time when other work is waiting to be processed; 2 jobs may execute at a time if an initiator attempts to select work, and nothing else is available to run. There are 4 active lists for the group: an include list for DAYS, an exclude list for JOB CLASSES, an include list for JOB NAMES and an exclude list for USER IDs. The scope of the group is by system IDs.
To update the number of jobs or whether a group is active, simply overtype the fields and press enter. To update a selection group, you must place an 'S' in the select column.
Note: If using Work Load Manager (WLM) initiators, only the Max Jobs Init Idle limit is used as WLM initiators are normally available.
The scrollable portion of the panel contains entries for the 32 available limit selection groups.
After entering an 'S' and updating the number of jobs (if necessary), pressing the enter key will present the SELECTOR ENTRY panel.
The SELECTOR ENTRY panel allows you to specify the type of information jobs will be evaluated against.
Figure 129. Selector Entry Panel
|
MONDAY - SUNDAY | Allows you to specify the time of day, by day of week that JOB LIMITS controls will be active. Enter the time as a range using a 24 hour clock. 8AM to 4PM would be entered as 0800:1600. |
USERID | This list may contain either user IDs which will be INCLUDED, or IDs which are to be EXCLUDED. If you specify an include type group, only those IDs, or ID masks you enter will be affected by job limits. If it is an exclude list, everyone except the entered IDs will be subject to job limits. This is the ID taken from the job card USER parameter which defaults to the ID of the submitter if not present on the job card. |
JOBNAME | This list contains job names or job name masks. Again this list type may be either an include or exclude list. |
JOBCLASS | This list contains any job classes which you may wish to have job limits limited to, or job classes which job limits will not affect. Note: The classes specified here are only used to select jobs which may have the limiting controls applied to them. They do not affect how jobs are counted. By default all job classes are checked for executing jobs. You may limit the classes whose jobs are counted with the SCOPE parameter explained below. |
SCOPECLS | This selector type controls which execution classes are to be considered when executing jobs are counted. This selector works in conjunction with the SCOPE Type setting (see below). |
SYSID | If SCOPE Type has been set to ID, you need to list the system IDs using this selector type. |
You also specify the scope of the job limiting controls. The scope types and their function are:
MAS | Multiple Access Spool. Specifying MAS will cause OS/EM to check each system in the MAS for executing jobs before allowing a job to execute. |
LOCAL | Specifying LOCAL will limit the job limiting controls to only the machine where OS/EM is executing. No other system in the sysplex will be checked. |
ID | Enter a selector type of SYSID to specify the 4 character system ID of the LPARs in the MAS you want OS/EM to check for executing jobs. |
Along with SCOPE, you may specify which classes will be used for the purpose of counting the number of active jobs a user may have running. If used, only the jobclasses specified will have their executing jobs counted. Use the selector type SCOPECLS to specify the classes to be checked.
Note: This parameter is the only way to control which jobclasses are used to count the number of executing jobs. If not specified every job executing regardless of jobclass will be counted to determine the number of jobs a user currently has executing.
For example, if job class 'A' is the only class listed here, only jobs executing in class 'A' will be counted, even if you are limiting jobs in another class. This means that if you are limiting jobs running in job class 'D' and the SCOPE is set for job class 'A', your users will be able to run as many jobs as they want in class 'D' as long as they keep the number of their jobs executing in class 'A' below the limit specified.
It is suggested that if you are limiting jobs that can execute in a particular job class, you specify the matching job class in SCOPECLS as well.
The Program Limits function allows you to control how many copies of a program may be run concurrently.
Note: This function must be specified for each JES2 subsystem you have executing to limit programs by LPAR. See "Set JES2 Name".
Figure 130. Program Limits Entry Panel
|
The top portion of this panel contains two fields:
Enter YES to enable this function or NO to disable it.
Enter YES to enable message OS$2LM264 which has the format:
OS/EM sysname pgmname PGM LIMITS(xx xx) SET BY SYSTEM sysname JOB jobname.
The bottom portion of the panel contains a scrollable area listing all the programs which are being controlled.
Two line commands are available for this portion of the panel.
I | Inserts a blank line to allow entry of program information. |
D | Deletes existing program information. |
To control a program, insert a blank line then type the information required:
Active | Enter YES to have this entry controlled. You may mark an existing entry NO to have OS/EM ignore the listed information, but not lose it so you may reactivate the entry at a later time. |
Program | The name of the program to be controlled. |
Local Limit | Enter the number of concurrently executing copies of the program allowed. Leaving this field blank effectively marks the program as having no local limits. |
MAS Limit | Enter the number of concurrently executing copies of the program allowed within the MAS. Leaving this field blank effectively marks the program as having no MAS limits. |
Description | An optional description line possibly used to explain why you are limiting this program (license restrictions; resource hog). |
All programs executing on the system are counted; however, only batch jobs will be blocked from execution if the number of running programs is above the limits specified above.
Note: The MAS value is propagated to each system within the MAS which has Program Limiting active when you execute your changes online.
The Job Start Message function of OS/EM sends a message to the TSO ID of the user specified in the NOTIFY parameter of the job card.
The message is sent when the job actually starts executing.
|
Field entry is as follows:
Enter a YES to activate this function.
Enter a NO to deactivate it.
Job/Step Statistics specifies that OS/EM will place job and/or step ending statistics in the job log for the job. The statistics will show the amount of storage the step used; I/O counts by step DDNAME; the elapsed time of the step; return code of the step; etc.
The statistics are controlled by job class (including TSO, TSU, and STC).
You may also specify a text string which will be used to replace the condition code field for any steps which have been flushed by the operating system.
Note: STEPENDSTATS and/or JOBENDSTATS must be enabled to allow printing of Estimated Costs if this option has been selected (See "Estimated Costs Controls".)
Figure 132. Job/Step Statistics
|
Field entry is as follows:
To enable printing of your company's name in the Job/Step Statistics box, enter YES.
Enter NO to disable printing.
Three name lines are allowed. Each name line may contain up to 40 characters. You may enter any characters you wish.
By default, OS/EM displays 4 dash marks as the condition code for any step which the system has flushed. To have OS/EM print a text string, enter the string here. It may be up to 8 characters long.
Entering YES here and selecting STEPENDWTO below will add the CPU time and I/O counts to the message generated at the end of each step.
Note: This field is ignored unless the STEPENDWTO option is also selected.
Enter the job classes for which the selected message will be active. The classes may be entered individually, or as a range (i.e. A:D would be the same as specifying A, B, C and D individually).
Specify this option if you wish job abend messages to remain on the console, requiring the operator to specifically delete the message from the console (this ensures that the operator sees the message before it rolls off the console screen).
Enter YES to activate this option.
Enter NO to deactivate this option.
Specify this option if you wish the operator to be forced to enter the reason a job was cancelled.
Enter YES to activate this option.
Enter NO to deactivate this option.
This option specifies that step end statistics be produced and placed in the allocation message log.
Enter YES to activate this option.
Enter NO to deactivate this option.
This option specifies that the return code from each completed job step or job end will be placed in the JES2 message log.
Enter YES to activate this option.
Enter NO to deactivate this option.
Note: If classes TSU or STC are selected the operating system may additionally issue an IEF170I message at execution time for these tasks. This message may be ignored and added to your MPF PARMLIB member or automated operations product for suppression.
This option specifies that job end statistics are to be produced and placed in the allocation message log.
Enter YES to activate this option.
Enter NO to deactivate this option.
This option specifies that the highest step return code for a job, Job CPU time, and elapsed Job time will be placed in the JES2 message log.
Enter YES to activate this option.
Enter NO to deactivate this option.
Note: If classes TSU or STC are selected the operating system may additionally issue an IEF170I message at execution time for these tasks. This message may be ignored and added to your MPF PARMLIB member or automated operations product for suppression.
With this option, OS/EM allows you to avoid receiving Not Cataloged 2 errors. OS/EM will attempt to correct a Not Cataloged 2 error by one of three means:
Figure 133. Not Cataloged 2 Controls
|
Field entry is as follows:
Enter YES to enable OS/EM control of Not Cataloged 2 errors.
Enter NO to disable controls.
WARN mode provides you with a way of observing your Not Cataloged 2 controls without changing the result of a job's dataset allocation. WTO messages are produced which will indicate the action OS/EM would have taken if WARN mode were not in effect.
To have OS/EM delete an existing file and redrive the catalog request, place a C in the CMD field, enter YES in the ACTIVE field, then press the Enter key. A popup window will open to allow you to enter the job classes that the Delete function will monitor.
Figure 134. Popup Window for Job Class Entry
|
To have OS/EM uncatalog an existing file and recatalog it, place a C in the CMD field, enter YES in the ACTIVE field, then press the Enter key. A popup window will open to allow you to enter the job classes that the Recatalog function will monitor (see Figure 134).
To have OS/EM fail a job, place a C in the CMD field, enter YES in the ACTIVE field, then press the Enter key. A popup window will open to allow you to enter the job classes that the Fail function will monitor (see Figure 134).
Note: You should always specify the FAIL option with both the Delete Files and the Recatalog Files options; this ensures that if a Not Cataloged 2 condition is encountered and OS/EM fails to correct it, the job will be failed with a JCL error.
Figure 135. Reformat Jobcard Account Field
|
Field entry is as follows:
Surrogate password control through OS/EM is intended to supply passwords to jobs submitted by started tasks, TSO users, or other batch jobs so that these jobs can properly access RACF protected datasets.
This panel also controls whether password information will be added to jobs sent over a JES network and whether a password of 1 to 8 question marks found on a jobcard will be replaced with the user's password.
Figure 136. Surrogate Password Control
|
Field entry is as follows:
Password controls activates the optional OS/EM password function.
Enter YES to enable this function.
Enter NO to disable this function.
The OS/EM password dataset consists of one statement per user ID, with the user ID in positions 1-8 and the password in positions 10-17. The user ID is defined to RACF and the password is the RACF password for that user ID. This dataset is user maintained via ISPF EDIT and should be RACF protected to limit access to authorized users.
Use the file attributes: RECFM=FB,LRECL=80,BLKSIZE=0, to allocate the password dataset.
The default password dataset name is SYS1.RACFPASS.
Note: The User ID that is set in the password dataset has to be defined to RACF, and should have its password set up as "PASSWORD NOINTERVAL" in order to ensure that the User ID password never expires.
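A minimal sketch of allocating the password dataset with an IEFBR14 step, and of a single record's layout, follows; the job name, unit, and the PRODBAT/PRODPSWD values are hypothetical.

//DEFPSWD  JOB (ACCT),'ALLOC PASSWORD DSN'
//ALLOC    EXEC PGM=IEFBR14
//PSWDDS   DD DSN=SYS1.RACFPASS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(TRK,(1,1)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)

A record places the user ID in positions 1-8 and the password in positions 10-17, for example:

PRODBAT  PRODPSWD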
Requests that the OS/EM password dataset be deleted, loaded or reloaded.
Jobs sent over a JES network require password information be present on the jobcard. If you would like OS/EM to check for passwords and insert them if missing, enter YES.
OS/EM can check for question marks on a jobcard and, if found, replace them with the user's password. This allows jobcards to be stored in an unprotected dataset, submitted for execution, and still prevent others from learning a user's password.
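For example (the job and user names are hypothetical), a jobcard stored as shown below could be submitted as-is; OS/EM would replace the question marks with the user's password at submission time:

//MYJOB    JOB (ACCT),'WEEKLY RUN',USER=TSOUSER1,PASSWORD=????????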
Entries for Jobnames, Started Tasks, and TSO Users specify that password insertion processing is to be active for jobs submitted by other jobs.
Enter YES to enable processing.
Enter NO to disable processing.
An optional list of names can be specified which will limit password checking to the named jobs, STCs, or users. If this list is omitted, all STCs, jobs, or users will have passwords added to the jobs they submit.
Enter JOB, STC or TSU on the command line to access the name lists.
Started task names, job names, and TSO user IDs may be specified with mask characters:
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
When a selection is made, the next panel will reflect your choice and the verbiage on the panel will also reflect your choice. In Figure 137, the choice was STC and the panel verbiage reflects STC. The same panel will appear for the other choices, JOB and TSU, but the verbiage will be different.
Figure 137. Job Name List - STC
|
The intended use of this function is to supply passwords to jobs submitted by started tasks, TSO users, or other batch jobs so that these jobs can properly access RACF protected datasets. Your installation might, for example, have a job scheduling system installed. If you run it as a started task, and name it via this command, jobs which this scheduling system submits would be eligible to have passwords added to the JOB statement.
This can avoid some of the audit and operational exposures that arise when every job submitted by the scheduling system runs with the highest level of access required by any one job or system function, such as system backups, which require the RACF OPERATIONS privilege.
The password will be added if the submitted job's JOB statement has a USER=userid parameter that matches a USER ID in the OS/EM password dataset.
Typically, you would define one or more user IDs that represent your production jobs. These user IDs would have RACF access to production datasets. Jobs which your scheduling system submits would have a JOB statement that included the USER=userid parameter. The OS/EM password dataset would include statements with these user IDs and their associated RACF passwords. When such jobs are submitted, the appropriate password would be added.
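As a hypothetical illustration, if the password dataset contained a record for user ID PRODBAT with password PRODPSWD, and the scheduling system submitted the job below, OS/EM would add the PRODBAT password to the JOB statement before the job runs:

//PAYROLL1 JOB (ACCT),'NIGHTLY PAY',CLASS=P,USER=PRODBAT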
You can create as many user IDs as are necessary within your installation.
The OS/EM password dataset consists of one statement per user ID, with the user ID in positions 1-8 and the password in positions 10-17. The user ID is defined to RACF and the password is the RACF password for that user ID. The dataset is user maintained and should be RACF protected to limit access to authorized users only.
It is your responsibility to keep this password dataset current and correct. OS/EM will use whatever password is indicated for the user ID. If the password is not correct for the user ID, the submitted job will fail with a password violation.
Note: To keep the password dataset maintenance to a minimum, the RACF password for each user ID you define should be specified as NEVER CHANGE.
The Sysout Extension Control function of OS/EM allows you to give extensions to jobs that go over the line limit defined on an OUTLIM JCL parameter, or the JES2 initialization parameters ESTLNCT, ESTPAGE or ESTBYTE. The control can be by jobname, program name, job class or SYSOUT class. It may also be controlled by RACF resource.
There may be up to 32 different extension groups, and you may weight the different classes/names to help the selection of the group which will be used.
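For reference, OUTLIM is coded on a SYSOUT DD statement, as in the hypothetical example below; without an extension from OS/EM (or from your security manager via the RACF resource), output for this DD would be subject to the 5000-record limit:

//REPORT   DD SYSOUT=A,OUTLIM=5000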
Figure 138. Sysout Extension Controls Menu
|
Select either Control by JES2 Parms or Control by OUTLIM parameter to get to the entry panel.
Figure 139. JES2 Sysout Extension Controls
|
Field entry is as follows:
Enter YES to enable JES2 Sysout Extensions, or NO to disable them.
Enter the resource name that has been defined to RACF for extension control. This name must be defined in the general resource FACILITY class for RACF and CA-ACF2 or the IBMFAC class for CA-Topsecret. Users who will be granted extensions must be given READ access to this profile.
Note: This resource is only checked if there are no matches with the selection groups defined below. This includes the DEFAULT group. Therefore, if you want to control Sysout Extensions via your Security Manager, the DEFAULT group must be left blank.
Enter NORMAL to have RACF logging, or NONE to turn off logging.
Note: If a job does not match any selection group and there is no DEFAULT, and you have not specified a RACF resource name, the job will use the values defined in your JESPARMS member.
                   ---- Extension ----    WTO When   WTOR After
                   Lines  Pages  Bytes    Granted    How Many
RACF:              5000_  11___  50000    YES        15
Defaults:          10000  20___  99999    YES        50
S  Ext  Active
_  1    Y          15000  30___  99999    YES        5_
_  32   N          _____  _____  _____    ___        __
Enter the number of lines that the job will be given each time an extension is granted.
Enter the number of pages that the job will be given each time an extension is granted.
Enter the number of bytes that the job will be given each time an extension is granted.
You may elect to have OS/EM send a message to the operator each time an extension is granted. Enter Yes or No in this field.
To ensure that a job doesn't get overlooked while extensions are being granted, you may elect to have OS/EM issue a WTOR to force the operator to allow the extension to be granted.
To update these fields for RACF or Defaults or one of the 32 selection groups, simply overtype the previously entered information.
Each of the 32 selection groups may have selection lists attached to them. The different list types are: Job name, Program name, Job class and Sysout class. You must enter the weight that is to be given to each type of list. This weight will be used when a job matches more than one selection group. Enter 1, 2, 3 or 4 for each list type.
For job name and program name, you either specify the exact name or use a name mask.
Job classes and SYSOUT classes may be entered as individual classes or as a range, i.e. 1:4 specifies classes 1, 2, 3 and 4.
To update any of these lists, simply enter an S in the select column and press enter. Another panel will be presented to allow update of the selectors.
Figure 140. JES2 Sysout Extension Selectors
|
This panel is displayed each time one of the selection groups is selected for update.
Each list type may be either an include list or an exclude list. For example if you have an include list of job classes which contains 6:9, only jobs running in class 6, 7, 8 or 9 will be selected by this extension group. If it were an exclude list, jobs in class A through 5 would be eligible.
The allowable selector types are:
JOBCLASS | Enter jobclass as individual classes or a class range. A range is two classes separated by a colon (:). |
JOBNAME | Jobnames may be entered as individual jobnames, or you may use jobname masks. |
PGMNAME | Program names may be entered as individual programs or you may use program name masks. |
SYSOUT | Enter sysout classes as individual classes or a class range. A range is two classes separated by a colon (:). |
For any of these lists, you may enter as many items as needed on a line separated with spaces. If you have more items than will fit on a single line, simply insert another line and use the same selector type.
Figure 141. Sysout Extension Controls
|
Field entry is as follows:
Enter YES to enable Sysout Extensions, or NO to disable them.
Enter the resource name which has been defined to RACF for extension control. This name must be defined in the general resource FACILITY class for RACF and CA-ACF2 or the IBMFAC class for CA-Topsecret. Users who will be granted extensions must be given READ access to this profile.
Enter NORMAL to have RACF logging, or NONE to turn off logging.
                   Extension   WTO When             WTOR After
                   Lines       Extension Granted    How Many
RACF:              1500_       YES                  15
Defaults:          1000_       NO_                  15
S  Ext  Active
_  1    Y          15000       YES                  5_
_  32   N          _____       ___                  __
Enter the number of lines that the job will be given each time an extension is granted.
You may elect to have OS/EM send a message to the operator each time an extension is granted. Enter Yes or No in this field.
To ensure that a job doesn't get overlooked while extensions are being granted, you may elect to have OS/EM issue a WTOR to force the operator to allow the extension to be granted.
To update these fields for RACF or Defaults, simply overtype the previously entered information.
Each of the 32 selection groups may have selection lists attached to them. The different list types are: Job name, Program name, Job class and Sysout class. You must enter the weight that is to be given to each type of list. This weight will be used when a job matches more than one selection group. Enter 1, 2, 3 or 4 for each list type.
For job name and program name, you either specify the exact name or use a name mask.
Job classes and SYSOUT classes may be entered as individual classes or as a range, i.e. 1:4 specifies classes 1, 2, 3 and 4.
To update any of these lists, simply enter an S in the select column and press enter. Another panel will be presented to allow update of the different lists.
Figure 142. Sysout Extension Lists
|
This panel is displayed each time one of the selection groups is selected for update.
Each list type may be either an include list or an exclude list. For example if you have an include list of job classes which contains 6:9, only jobs running in class 6, 7, 8 or 9 will be selected by this extension group. If it were an exclude list, jobs in class A through 5 would be eligible.
The allowable selector types are:
JOBCLASS | Enter jobclass as individual classes or a class range. A range is two classes separated by a colon (:). |
JOBNAME | Jobnames may be entered as individual jobnames, or you may use jobname masks. |
PGMNAME | Program names may be entered as individual programs or you may use program name masks. |
SYSOUT | Enter sysout classes as individual classes or a class range. A range is two classes separated by a colon (:). |
For any of these lists, you may enter as many items as needed on a line separated with spaces. If you have more items than will fit on a single line, simply insert another line and use the same selector type.
TSO Logoff Statistics specifies that OS/EM will display TSO session statistics at logoff time.
|
Field entry is as follows:
To enable TSO Logoff Statistics enter YES, to disable, enter NO.
Enter the number of seconds to display the TSO Logoff Statistics. Valid entries are 1 to 99. This field is required if you entered YES above.
Figure 144. Verify User Defined to RACF
|
Field entry is as follows:
USERID/JOBNAME check specifies that the first characters of the jobname must match the TSO USER ID. The check is limited to the classes specified.
Figure 145. Verify UserID with Jobname
|
Field entry is as follows:
Enter YES to cause OS/EM to verify User ID to Jobnames. Enter NO to turn this option off.
Enter the number of characters of the User ID that will be used to compare it to the jobname. If blank, the full length of the User ID will be used.
Enter the name of the RACF resource which has been defined to control this option. Leave blank to have control strictly by class.
This resource must be defined in the general resource FACILITY class for RACF and CA-ACF2 or IBMFAC for CA-Topsecret. Users must have READ access to this profile to allow job submission using a job name not matching their TSO ID. Only job classes specified below will be security checked.
Enter NORMAL to have RACF logging, or NONE if logging is not required.
To make the check universal enter the class as A:9.
To enter a class, type an A in the CMD field and overtype the class.
To delete a previously entered class, type a D in the CMD field of the class to be deleted.
If a security resource name is supplied, a RACHECK will be performed for any job submitted in the specified class(es).
The optional Job Routing function allows job routing between CPUs in a JES2 MAS based on defined resource names and their availability. Use the $QA and $QD commands to manage resource names on each system running OS/EM Job Routing (see Appendix F, "JES2 Commands for Job Routing"). The routing may be controlled by JCL statements placed within the JOBLIB member, or by specifying routing control information through the OS/EM Job Routing Controls function.
Note: There may be a maximum of 127 routes per job. This is a combination of JCL statements and OS/EM automated routing.
This function may also be used to change the Workload Manager service class that would normally be assigned to a job, override the jobclass or priority specified on the jobcard, and change the Workload Manager scheduling environment.
Note: If a valid JES2 NODE is found on a /*ROUTE XEQ nodename card the job is routed to the specified node before OS/EM has access to the job. Therefore no OS/EM changes will be processed on the system where the job was submitted. If OS/EM job routing is active on the node where the job was routed, OS/EM changes will be effective there.
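For illustration (the resource name TAPEPOOL is hypothetical), a job requesting an OS/EM routing resource through JECL might look like the fragment below. Because TAPEPOOL is not a valid JES2 node name, the statement is left for OS/EM to process as a resource name rather than routing the job to another node:

//NIGHTLY  JOB (ACCT),'BATCH',CLASS=A
/*ROUTE XEQ TAPEPOOL
//STEP1    EXEC PGM=IEBGENER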
Refer to the EXIT5 section of the JES2 command in the OS/EM Reference Manual for more information on this function.
The $HASP message numbers produced by the OS/EM implementation of the Mellon Modifications may also be changed. This feature is provided for customers who would like to see the original Mellon message numbers. Although Mellon had originally reused IBM message numbers, the OS/EM implementation tries to avoid this where possible. This feature allows you to specify the message number you want to appear for selected messages.
Note: The Job Routing Communications dataset must be on DASD shared by each JES within the MAS, and must be a unique dataset for each MAS or independent system running Job Routing. Failure to have unique datasets will result in unpredictable results/failures. Additionally, the Job Routing function must be enabled on each LPAR within a MAS concurrently. Failure to do so will result in jobs not being allowed to execute on LPARs where Job Routing is active if they have been through the interpreter on an LPAR without Job Routing. Conversely, LPARs within the MAS without Job Routing active may select jobs for execution without the specified resources.
Figure 146. Job Routing/Classing Controls Menu
|
Each of these paths is presented in the following sections:
This panel allows you to completely turn off the Job Routing Controls without deleting any of the control information previously entered. You also specify the name of the dataset containing your resource name information here.
Figure 147. System Level Controls
|
Data entry is as follows:
Enter the dataset name of the sequential file which will store the resource name information.
Each CPU in the sysplex must share this dataset. The dataset format must be: Physical Sequential, Record format of F and have a logical record length of 4504. The dataset requires three (3) tracks.
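A minimal allocation sketch follows; the dataset name and volume serial are hypothetical, and the volume must be DASD shared by every system in the sysplex that runs Job Routing:

//DEFCOMM  EXEC PGM=IEFBR14
//COMMDS   DD DSN=OSEM.JOBROUTE.COMMDS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,VOL=SER=SHR001,SPACE=(TRK,(3)),
//            DCB=(DSORG=PS,RECFM=F,LRECL=4504,BLKSIZE=4504)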
Specify either YES or NO here.
No OS/EM routing functions will be available unless this field is marked YES.
Specify either YES or NO here.
If you enter NO but enter YES for Job Routing, only routing via JECL will be active. This field controls automatic routing as well as the ability to change various routing information.
To have a resource name attached to any job which either does not have an OS/EM routing card (/*ROUTE XEQ resource) in the JCL or does not match an automatic routing group simply enter the resource name here.
To disable a default resource previously specified, simply blank out this field.
Specifying YES will cause OS/EM to scan for the keyword SCHENV= on the JOBCARD statement and remove it. It then inserts an OS/EM Job Routing JECL statement using the scheduling environment name just removed as the resource name.
Note: If you are using OS/EM Job Routing to assign a Scheduling Environment based on some selection criteria it will still be assigned, as that processing occurs after any original SCHENV keyword has been converted to a route statement. This means that your jobs could end up having a route statement with the original scheduling environment name as the resource and a SCHENV keyword generated based on your selection criteria.
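A hypothetical before-and-after view of SCHENVCONVERT (the name DB2PROD is made up; the generated statement uses the /*ROUTE XEQ form described under Job Routing):

Before:  //DAILY01  JOB (ACCT),'NIGHT BATCH',CLASS=A,SCHENV=DB2PROD
After:   //DAILY01  JOB (ACCT),'NIGHT BATCH',CLASS=A
         /*ROUTE XEQ DB2PROD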
Specifying YES will cause OS/EM to set a job's system affinity (SYSAFF) to ANY, if, and only if, the job has been assigned one or more OS/EM Job Route resources. The job route resources may be from either JECL control cards (/*ROUTE XEQ resource) or automatically generated.
Note: You may use both SCHENVCONVERT and SYSAFFANY. The system affinity change occurs after any schedule environments have been replaced with ROUTE statements.
Additional notes:
This option allows you to specify system wide defaults for the various JECL statements used for job routing and scheduling.
|
Field entry is as follows:
The CNTL and THREAD statements work identically. Each may have its own default action. To have exclusive control of a resource or dataset specified with this statement, use the EXC default setting. To allow shared access as the default, enter SHR.
Each of the remaining JECL statements is processed in the same way. The Impossible Job parameter is not available for the BEFORE or EXCLUDE statements.
This parameter allows you to specify the default action to be performed if the specified job is not in the execute queue. The available options are ignore, fail or wait.
This parameter allows you to specify the action to be taken if the specific job (job name with job number) is not in the execute queue. The available options are ignore or fail.
This parameter allows you to specify the default action to be performed if there are multiple matching job names in the system. The available options are ignore, fail or OK.
This parameter allows you to specify the default action to be taken if the specific job (job name and job number) specified has already left the execution queue, i.e., the request of AFTER, PRED or WITH is impossible to fulfill. The available options are ignore, cancel or hold.
The meanings of the different options are:
Indicates that the job will be cancelled, i.e., via a $PJOBxxxx command.
Indicates that the job is to be failed by passing a return code of 12 back to JES2.
Indicates that the job will be placed on hold. Operator intervention will be required to release or cancel the job.
Indicates that the card is to be treated as a comment.
Indicates that the statement will apply to all jobs with the specified jobname.
Indicates that the job should wait for the specified jobname to be read into the system.
This option is provided for backward compatibility with the original version of the Mellon Modifications. It is primarily intended for customers who feel that having the original Mellon Message Numbers appear will avoid any confusion with previously trained staff, and allow the continued use of any automated operations package that has already been set up to expect specific message numbers.
Figure 149. Mellon Message Substitution
|
When this function is selected, a scrollable list of the messages issued by the Mellon Modifications is displayed. The message text displayed is not available for update; it is simply provided to aid in the identification of the message numbers to change.
The input fields are as follows:
Enter YES or NO here. Entering NO allows you to turn message substitution off without having to modify each message previously defined.
Enter an S in the Sel column for any message number to be overridden.
Tab to the Replacement MSG number column and enter the three digit number to be used in place of the OS/EM message number.
To revert back to the OS/EM number, simply blank out any previously entered replacement number.
This function allows you to specify up to 999 different sets of routing rules. The 999 rule sets are shared between normal resource routing, changing jobclass/priority, changing scheduling environments, changing the service class or changing the execution node.
These rules are searched sequentially and attached to the job in the order processed.
Figure 150. Job Routing Resource Groups Entry Panel
|
There are 5 line commands available:
To add or change the selection criteria for a resource, you must enter an S in the Sel column, then press enter. Another window will open where you can specify the selector type (such as DDNAME, UNITNAME, RACFGROUP), and the selector names or masks. (See Figure 151 for an example of this panel.) Resource names may be up to 44 characters long.
To clear a resource group, enter a D in the Sel column, then press enter. When you use this command and there are selection criteria attached to the resource, another panel will be displayed showing the selection criteria and warning you that they will be deleted if you continue. Press PF3 (end command) to process the deletion, or enter CANCEL on the command line to cancel the delete process. (See Figure 152 for an example of this panel.)
Use the copy line command along with the (O)ver line command to copy an existing resource group over an empty group. All of the existing selection criteria are copied along with the resource group.
Use the move line command along with the (O)ver line command to move an existing resource group over an empty group. All of the existing selection criteria are moved along with the resource group.
Figure 151. Selector Entry Panel.
|
This panel allows you to enter the selection criteria for a resource group. You may enter the following types of items:
ACCOUNT | The account number or mask on the JOBCARD. |
DDNAME | The DDNAME or mask of any DD statement. |
DSNAME | A dataset name or mask. |
EXECPARM | A PARM field or mask found on an EXEC statement. |
JOBCLASS | The class of the job. |
JOBNAME | A job name or mask. |
JOBTIME | Time value from jobcard. The time parameters are entered as MMMMMM.SS, where MMMMMM may be from 0 to 357,912 minutes and SS may be from 0 to 59 seconds. JOBTIME may also be entered as a range by separating the beginning and ending values with a colon (:), i.e. 0.10:2.0 would specify a range beginning with zero minutes and 10 seconds to two minutes and zero seconds. Leading and trailing zeros may be dropped, i.e. the above could also be entered as .10:2 for ten seconds to two minutes. |
PGMNAME | A program name or mask found on an EXEC statement. |
RACFGROUP | A RACF group or mask. |
SCHENV | The workload manager scheduling environment. |
SERVCLS | The workload manager service class. |
SRCNAME | The user ID or job name that submitted the job. |
SRCPRGM | The program name that submitted the job. |
SRCTYPE | The source type, either JOB, TSU or STC. |
UNITNAME | The name of a unit on a DD statement. |
USERID | The user ID associated with the job. |
Note: If JOBCLASS or SERVCLS is used for job routing, unpredictable results will occur if the class is changed after the job has been submitted.
Separate the names or masks with a space. You may enter as many names/masks as will fit on the line. To enter more items, simply insert another line and enter the same type.
Note: The selector types ACCOUNT and EXECPARM only allow use of one item. In other words if you wanted jobs with execution parms of 'IMSP-' and 'IMSD-' to be routed to the same system, you would have to code two separate selection groups, one for each parm.
Multiple names/masks within a selector type are considered to be OR conditions. That is, if any of them are matched the condition is satisfied. Specifying multiple selector types, however, is considered to be an AND condition. All selector types must be a match in order for the resource name to be assigned to the job.
For example, if the following entries were used:
       Selector
  Sel  Type       Selector Name/Mask List
  _    DDNAME     SYSUT2 SPECIAL
  _    JOBNAME    TSYS- TDEV-
Then only those jobs which had jobnames beginning with TSYS or TDEV that also had a DDNAME of SPECIAL or a DDNAME of SYSUT2 in their JCL streams would be assigned that particular Resource Name.
On the other hand, if you only wanted TSYS jobs with a SYSUT2 DDNAME or only those TDEV jobs with a SPECIAL DDNAME to be assigned the Resource Name, you could code two Routing Groups and specify the same Resource Name for both.
       Routing              Resource
  Sel  Group No  Active     Name       Resource Description
  _    1         Y          RES_TEST   TSYS jobs with DDNAME SYSUT2
  _    2         Y          RES_TEST   TDEV jobs with DDNAME SPECIAL
  _    3         _
Routing group 1 would have selector entries like:
       Selector
  Sel  Type       Selector Name/Mask List
  _    DDNAME     SYSUT2
  _    JOBNAME    TSYS-
Routing group 2 would have selector entries like:
       Selector
  Sel  Type       Selector Name/Mask List
  _    DDNAME     SPECIAL
  _    JOBNAME    TDEV-
In this way, TDEV jobs with a SYSUT2 DDNAME and TSYS jobs with a SPECIAL DDNAME would not be assigned the resource as they would in the first example.
There are two line commands available for this panel.
Use this line command to delete an invalid or obsolete line.
Use this line command to insert a blank line into the display. Then enter the resource type and the names/masks for that type.
The display is sorted by selector type, so even if you insert a line at the end of the display, the panel will have been re-sorted the next time it is entered.
Figure 152. Delete Warning Panel.
|
This panel will be displayed showing the selection criteria for a resource group before the group is deleted. If you are sure you want the group deleted, simply press the PF3 key or enter the END command. To continue without deleting the group, enter the CANCEL command.
This function allows you to specify up to 999 different sets of rules to allow changing the Jobclass or priority parameter. The 999 rule sets are shared between normal resource routing, changing jobclass/priority, changing scheduling environments, changing the service class or changing the execution node.
The rules are processed sequentially and attached to the job in the order processed. Because of this the last matching rule will be the one that actually sets the jobclass and/or priority.
Note: You must enable Automatic Routing. See "System Level Controls".
Figure 153. Jobclass/Priority Change Groups Entry Panel
|
The fields and their meanings are listed below:
To add or change the selection criteria for a resource, you must enter an S in the Sel column. Another window will open where you can specify the resource type (such as DDNAME, UNITNAME, RACFGROUP) and the resource names or masks. (See Figure 151 for an example of this panel.)
If you are just changing the priority, leave the class field blank. Conversely, if you are just changing the class, leave the priority field blank.
To clear a resource group, enter a D in the Sel column. When you use this command and there are selection criteria attached to the resource, another panel will be displayed showing the selection criteria and warning you that they will be deleted if you continue. Press PF3 (end command) to process the deletion, or enter CANCEL on the command line to cancel the delete process. (See Figure 152 for an example of this panel.)
Use the copy line command along with the (O)ver line command to copy an existing resource group over an empty group. All of the existing selection criteria are copied along with the resource group.
Use the move line command along with the (O)ver line command to move an existing resource group over an empty group. All of the existing selection criteria are moved along with the resource group.
To change or add a description, simply type into the field.
This function allows you to specify up to 999 different sets of rules to allow changing the Workload Manager SCHENV parameter. The 999 rule sets are shared between normal resource routing, changing jobclass/priority, changing scheduling environments, changing the service class or changing the execution node.
The rules are processed sequentially and attached to the job in the order processed. Because of this the last matching rule will be the one that actually sets the scheduling environment.
Note: You must enable Automatic Routing. See "System Level Controls".
Figure 154. Scheduling Environment Change Groups Entry Panel
|
This panel is a scrollable list of the 999 groups available.
The fields and their meanings are listed below:
To add or change the selection criteria for a resource, you must enter an S in the Sel column. Another window will open where you can specify the resource type (such as DDNAME, UNITNAME, RACFGROUP) and the resource names or masks. (See Figure 151 for an example of this panel.)
To clear a resource group, enter a D in the Sel column. When you use this command and there are selection criteria attached to the resource, another panel will be displayed showing the selection criteria and warning you that they will be deleted if you continue. Press PF3 (end command) to process the deletion, or enter CANCEL on the command line to cancel the delete process. (See Figure 152 for an example of this panel.)
Use the copy line command along with the (O)ver line command to copy an existing resource group over an empty group. All of the existing selection criteria are copied along with the resource group.
Use the move line command along with the (O)ver line command to move an existing resource group over an empty group. All of the existing selection criteria are moved along with the resource group.
To change or add a description, simply type into the field.
This function allows you to specify up to 999 different sets of rules to allow changing the Workload Manager SRVCLASS parameter. The 999 rule sets are shared between normal resource routing, changing the jobclass/priority, changing scheduling environments, changing the service class or changing the execution node.
The rules are processed sequentially and attached to the job in the order processed. Because of this the last matching rule will be the one that actually sets the service class.
Note: You must enable Automatic Routing. See "System Level Controls".
Figure 155. SRVCLASS Change Groups Entry Panel
|
This panel is a scrollable list of the 999 groups available.
The fields and their meanings are listed below:
To add or change the selection criteria for a resource, you must enter an S in the Sel column. Another window will open where you can specify the resource type (such as DDNAME, UNITNAME, RACFGROUP) and the resource names or masks. (See Figure 151 for an example of this panel.)
To clear a resource group, enter a D in the Sel column. When you use this command and there are selection criteria attached to the resource, another panel will be displayed showing the selection criteria and warning you that they will be deleted if you continue. Press PF3 (end command) to process the deletion, or enter CANCEL on the command line to cancel the delete process. (See Figure 152 for an example of this panel.)
Use the copy line command along with the (O)ver line command to copy an existing resource group over an empty group. All of the existing selection criteria are copied along with the resource group.
Use the move line command along with the (O)ver line command to move an existing resource group over an empty group. All of the existing selection criteria are moved along with the resource group.
To change or add a description, simply type into the field.
This function allows you to specify up to 999 different sets of rules to allow changing the execution node parameter. The 999 rule sets are shared between normal resource routing, changing the jobclass/priority, changing scheduling environments, changing the service class or changing the execution node.
The rules are processed sequentially and attached to the job in the order processed. Because of this the last matching rule will be the one that actually sets the execution node.
Note: You must enable Automatic Routing. See "System Level Controls".
Figure 156. XEQ Node Change Groups Entry Panel
|
This panel is a scrollable list of the 999 groups available.
The fields and their meanings are listed below:
To add or change the selection criteria for a resource, you must enter an S in the Sel column. Another window will open where you can specify the resource type (such as DDNAME, UNITNAME, RACFGROUP) and the resource names or masks. (See Figure 151 for an example of this panel.)
To clear a resource group, enter a D in the Sel column. When you use this command and there are selection criteria attached to the resource, another panel will be displayed showing the selection criteria and warning you that they will be deleted if you continue. Press PF3 (end command) to process the deletion, or enter CANCEL on the command line to cancel the delete process. (See Figure 152 for an example of this panel.)
Use the copy line command along with the (O)ver line command to copy an existing resource group over an empty group. All of the existing selection criteria are copied along with the resource group.
Use the move line command along with the (O)ver line command to move an existing resource group over an empty group. All of the existing selection criteria are moved along with the resource group.
To change or add a description, simply type into the field.
The Miscellaneous Controls Menu provides access to the ACF2 Non-cancel Override, Catalog Account Control, Estimated Costs Controls, TSO Program Intercept and WTO functions.
Figure 157. Miscellaneous Controls Menu
|
Each of these functions is presented in the following sections:
The ACF2 Non-cancel Override allows OS/EM to enforce controls previously setup for ACF2 users who have the non-cancel attribute.
Figure 158. ACF2 Non-cancel Override Entry Panel
|
Field entry is as follows:
Enter YES to activate this function, or NO to deactivate it.
The Catalog Account Control function will place up to 32 characters of JOB or STEP accounting information into the catalog record for a new VSAM or SMS-managed non-VSAM data set. The Access Methods Services program, DCOLLECT, can then be used to produce charge-back reports for DASD utilization.
If a catalog account field is already present, e.g., it was specified on the IDCAMS DEFINE statement, it is not replaced. If both JOB and STEP accounting information is present, STEP accounting takes precedence.
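A minimal DCOLLECT job sketch is shown below for orientation only. The job card, output dataset name and volume serial are placeholders, and the output dataset attributes shown are assumptions that should be verified against the Access Method Services documentation before use.

//DCOLJOB  JOB (ACCT),'DASD CHARGEBACK',CLASS=A,MSGCLASS=X
//* Collect space and usage data for charge-back reporting
//COLLECT  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//OUTDS    DD DSN=YOUR.DCOLLECT.OUTPUT,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),DCB=(RECFM=VB,LRECL=644)
//SYSIN    DD *
  DCOLLECT OFILE(OUTDS) VOLUMES(VOL001)
/*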
You must enable SMF record type 61 in your SMFPRMxx parmlib member for this function to operate.
Refer to the IEFU83 section of the SMF command in the OS/EM Reference Manual.
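For reference, a fragment of an SMFPRMxx SYS statement that records type 61 might look like the line below. This is a sketch only; record type 61 should simply be merged into whatever TYPE list (or removed from whatever NOTYPE list) your member already carries.

SYS(TYPE(30,61))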
The Catalog Account entry panel provides the means for tailoring the function to an installation's specific needs. The information to be selected from either a JOB's accounting fields or a STEP's accounting fields and placed in the 32-byte catalog account field can be controlled by up to eight subfield entries.
These subfield entries specify an account field, a starting position within the account field and the number of characters to select. This allows for selecting all or parts of up to eight different accounting fields or eight parts of a single accounting field or any combination in between. The subfields are processed from left-to-right and the information obtained from processing each is placed in the catalog account field in left-to-right order as well.
If a subfield requests an account number that is not present, or there is not enough data in the account field to satisfy the requested subfield length, provision is made to use an error character to fill in the gap. If the subfield length is zero, the subfield is skipped. If the combined subfield lengths are greater than 32 characters, only the first 32 are used.
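As a purely hypothetical example, suppose a job card carries two accounting fields:

//PAYJOB   JOB (ACCT01,DEPT7700),'EXAMPLE'

and two subfields are defined: field 1, start 1, length 6 and field 2, start 5, length 4. The catalog account field built would be ACCT017700. If the second subfield instead requested a length of 8, only the four characters 7700 would be available and the remaining four positions would be filled with the error character.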
There is a DEFAULT group of subfield entries and provision for up to 16 additional groups of subfield entries that are used in conjunction with selection criteria such as JOBNAME, JOBCLASS, USERID and RACF GROUPNAME. Weights can be assigned to the selection criteria so that if a match is made on more than one criterion, the one with the highest weight assigned is the subfield selection group used. If weights are not assigned or are equal, the first group (from 1 to 16) that satisfies the criteria is used. If no criteria are met, the default selection group is used.
The primary entry panel for the Catalog Account Controls function allows you to turn the function on or off; specify an error fill code to be used for a missing or short account field; specify the name of the started task(s) which handle recalling or recovering datasets that have been migrated or backed up; specify the default selection values used to build the accounting code if a job does not match any of the 16 selection groups; specify an Owner ID to be used if missing from the ACEE; and specify weights to be applied to the different selection criteria.
Figure 159. Catalog Account Control Entry Panel
|
Field entry is as follows:
Enter YES to activate this function, or NO to deactivate it.
Specify a one byte character to be used to fill any missing or short account field.
Specify the name of the started task which handles recalling or recovering datasets which have been migrated or backed up. Two names may be entered. If you are running IBM's DFSMSHSM, this name might be HSM. This is a required field so that OS/EM does not try to add accounting information to datasets being recalled from migration or recovered from a backup.
The Defaults fields build the account number from the accounting codes of jobs not specifically selected by one of the 16 selection groups.
You may specify up to 8 accounting code fields to build the account number. Specify the accounting code field, the starting position within the field and the length to use. Also specify an Owner ID to be used in the event that the ACEE is unavailable.
Note: If an accounting code field is specified as 0, the first non-zero length accounting code field will be used.
Enter the weight to be given to each type of selection criteria. Enter a value between 1 and 4. The weight is used when a JOB matches more than one selection group. The group with the highest weight value is used. If more than one group is determined to have the same weight, the first group selected will take effect.
This means that if the JOB being tested matches a selection group based on the JOBNAME, but also matches a selection group based on the RACF Group Name, the group whose criterion has the higher weight value assigned will be used.
The bottom portion of the panel is a scrollable area containing the 16 possible selection groups. Enter the account code field number, the starting position within the field and the length of the field to be moved. You may specify up to eight areas. They may be the same or different field numbers.
To delete a field entry no longer needed, you must blank out the field number, start, and length fields.
To the right of the panel are four indicators showing the type of selection list: Job Name, User ID, Job Class or RACF Group. An I means that it is an INCLUDE group, while an E means that it is an EXCLUDE group.
To make the list active, enter a Y in the ACTIVE column. Enter an N to turn the list off without having to blank out the entries.
To update the selection list types, place an S in the S column and press enter.
Figure 160. Catalog Account Lists
|
This selection panel allows you to specify the selector types active for this catalog account selection group, as well as specifying the list type, either INCLUDE or EXCLUDE.
To enter selector entries, use the I command to insert a blank line, then enter the Selector Type and the names or masks that will be checked. Separate the names with a space. If you have more names/masks than will fit on a line, simply insert another blank line and use the same selector type.
When entering jobclasses, you may enter a range of classes by separating the first and last class with a colon (:), i.e. A:D will cause OS/EM to check class A, B, C and D.
The four selector types available are:
Enter either complete job names, or job name masks. If the list type is INCLUDE, only these jobs will be processed by this control definition. If the list type is EXCLUDE, all jobs but these will be processed by this definition.
Enter either individual job classes, or a range of classes by specifying the range with a colon (:) in between the beginning and ending classes. If the list type is INCLUDE, only these classes will be eligible for processing by this definition. For an EXCLUDE list, all classes except for these will be processed.
The jobclass list specifies the execution classes to which OS/EM will apply catalog account controls.
Enter individual classes, or enter a class:class pair. Entering A:9 as the jobclass covers all possible classes.
Jobclass lists are built, independently, for each selection group.
Enter either complete RACF group names, or name masks. If the list type is INCLUDE, only these groups will be processed by this control definition. If the list type is EXCLUDE, all groups except these will be processed.
The RACF GROUP NAMES list specifies the RACF groups to which OS/EM will apply catalog account controls.
RACF group lists are built, independently, for each selection group.
Enter either complete User IDs or ID masks. If the list type is INCLUDE, only these IDs will be processed by this control definition. If the list type is EXCLUDE, all IDs except these will be processed.
The USERID list specifies the User IDs to which OS/EM will apply catalog account controls.
User ID lists are built, independently, for each selection group.
Note: The following restrictions apply:
The Estimated Cost function of OS/EM can be used to calculate an approximate charge for running each step of a job and an approximate total cost of running the job. The costs are presented in the flower box produced by requesting OS/EM's STEP/JOB-end statistics (see "Job/Step Statistics".)
The Estimated Cost Controls panel provides the means for tailoring the computation of estimated cost to an Installation's specific needs. There are twelve selectable rate fields that can be specified as multipliers against usage measurements such as service units, CPU time, I/O activity and tape mounts. Additionally, field entries are provided for a CPU time normalization factor, a fixed cost that is added to each job's total cost, and a default minimum cost of a job that will be used if the calculated cost is lower. Up to sixteen separate sets of rates can be specified based upon System ID. A default set of rates can also be specified that will be used against work run on any LPAR for which there is no specific System ID set of rates.
Which rate or rates to specify is up to each individual installation. If an installation wishes to compute an estimated cost based upon TCB CPU time only, then only that rate field needs to be entered. A value of zero for a rate negates the use of that rate in the cost calculation. The computed values for each rate/usage measurement combination (rounded up to two decimal places) are added together to arrive at an estimated cost. If a fixed cost value is specified it will be added to the job's total cost. If a minimum cost value is specified it will be used for a job's total cost if the calculated value is lower.
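As a purely hypothetical illustration, assume a rate of 1.50 per TCB CPU second, a rate of .0002 per disk I/O and a fixed cost of 2.00, and a job that uses 4.25 TCB CPU seconds and 30,000 disk I/Os:

  TCB CPU time     4.25   x 1.50   =  6.38   (6.375 rounded up)
  Disk I/O         30,000 x .0002  =  6.00
  Fixed cost                       =  2.00
  Estimated cost                   = 14.38

If a minimum cost of 20.00 had also been specified, the job would be reported at 20.00, since the calculated value is lower.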
Figure 161. Estimated Cost Groups Panel
|
Field entry is as follows:
Enter YES to activate this function, or NO to turn it off.
Even if this function is turned on, nothing will happen unless there is at least one non-zero field entry in either the Default selection or one of the 16 System ID selection groups, and the STEP/JOB-end statistics function is activated.
Entering YES brings up a panel that allows a set of default rates to be entered that will be used if there is no specific set of rates for the System ID the job is running on. (See "Estimated Costs Controls".)
The bottom portion of this panel contains a scrollable list of the sixteen (16) available rate groups. Each entry must specify the SMFID of the system to which the rates apply, and you must enter a Y in the Active column for the rates to be active. An optional line of descriptive text may also be entered.
Below is a sample of the rates data entry panel along with a description of each field.
Figure 162. Estimated Cost Controls Panel
|
Field entry is as follows:
A value of the form xxxxx.xx. If specified, it will be used as the cost of a job when the calculated cost is lower.
A value of the form xxxxx.xx. If specified, it will be added to the value calculated for a job.
A rate value of the form .xxxxxxx specifying the cost of a TCB service unit. The number of TCB service units in the SMF type 30 record field, SMF30CSU, is multiplied by this rate to obtain the cost.
A value of the form .xxxxxxx specifying the cost of an I/O service unit. The number of I/O service units in the SMF type 30 record field, SMF30IO, is multiplied by this rate to obtain the cost.
A rate value of the form .xxxxxxx specifying the cost of a SRB service unit. The number of SRB service units in the SMF type 30 record field, SMF30SRB, is multiplied by this rate to obtain the cost.
A value of the form .xxxxxxx specifying the cost of an MSO service unit. The number of MSO service units in the SMF type 30 record field, SMF30MSO, is multiplied by this rate to obtain the cost.
A value of the form xx.xxxxx specifying the cost of a TCB CPU second. The number of TCB CPU seconds in the SMF type 30 record field, SMF30CPT, is multiplied by this rate to obtain the cost. If a normalization factor is specified, the cost calculated will be multiplied by the factor.
A value of the form xx.xxxxx specifying the cost of a SRB CPU second. The number of SRB CPU seconds in the SMF type 30 record field, SMF30CPS, is multiplied by this rate to obtain the cost. If a normalization factor is specified, the cost calculated will be multiplied by the factor.
A value of the form xx.xxxxx specifying the cost of a disk I/O. The number of disk I/Os contained in the SMF type 30 record field, SMF30BLK (when the SMF30DEV field indicates DASD), is multiplied by this rate to obtain the cost.
A value of the form xx.xxxxx specifying the cost of a tape I/O. The number of tape I/Os contained in the SMF type 30 record field, SMF30BLK (when the SMF30DEV field indicates tape), is multiplied by this rate to obtain the cost.
A value of the form xx.xxxxx specifying the cost of a virtual I/O. The number of virtual I/Os contained in the SMF type 30 record field, SMF30BLK (when the SMF30DEV field indicates VIO), is multiplied by this rate to obtain the cost.
A value of the form xx.xxxxx specifying the cost of a specific tape mount. The number of specific tape mounts contained in the SMF type 30 record field, SMF30TPR, is multiplied by this rate to obtain the cost.
A value of the form xx.xxxxx specifying the cost of a non-specific tape mount. The number of non-specific tape mounts contained in the SMF type 30 record field, SMF30PTM, is multiplied by this rate to obtain the cost.
A multiplier factor of the form xxx.xxxx that may be used to normalize processor speeds. When specified it is applied only to costs based on TCB and SRB CPU time usage to account for differences in processor speeds.
A value of the form xx.xxxxx specifying the cost of a Device Connect Time second. The number of Device Connect Time seconds in the SMF type 30 record field, SMF30TCN, is multiplied by this rate to obtain the cost.
This function allows you to specify the name of a program that is executed under TSO, disallow use of that program, and provide up to 5 lines of explanation to be displayed to the user.
This would typically be used to force execution of certain programs only on the LPAR where they are licensed.
Space for thirty-two programs is provided.
Figure 163. TSO Program Intercept Entry Panel
|
Field entry is as follows:
Enter YES to activate this function, or NO to deactivate it.
There are 32 available slots to specify a program name and the associated messages.
To activate any of the 32 slots, tab to the appropriate slot and enter YES. To deactivate a slot, enter NO.
For each slot activated, a program name and message must be entered. To enter or change a program name, simply tab to the program field and type the program name. To add or change the message, enter an S in the select column and press enter. A popup window will open allowing 5 lines of text.
Note: Do not use apostrophes (') in your message text!
Description: The WTO Controls function allows OS/EM to monitor user specified DD names for specific messages. When found, the message is written to the system console to allow appropriate action by either the operator or an automated operations package.
The DD name to be monitored may be limited to specific job names and/or program names. To have this function active, you must specify at a minimum a message id to search for, a DD name to search, and the program name (from the exec card) which owns the DD name.
Figure 164. WTO Controls Entry Panel
|
The fields and their meanings for this entry panel are:
Enter YES to allow use of the WTO function. Entering NO here turns the function off without disturbing any of the selection lists already completed.
There are 32 available groupings which control which messages are tracked. Besides specifying the selection criteria, for each group you may have an OS/EM message number added to the front of your message (OS$DC1195), and specify any routing codes or description codes to be used on the generated WTO.
To activate any of the 32 groups, tab to the Active column and enter YES. Enter NO to deactivate the group.
Enter YES in this field to have OS/EM's message number OS$DC1195 added to the front of your message. Enter NO to have your message written without any modifications.
There are 2 line commands available on this panel. These are used to enter information about route codes and description codes, or to specify information in a selection criteria list.
Figure 165. Selector Entry Panel
|
There are 6 selector types available; however, two types (DESCCDE and ROUTECDE) are used to store information to be passed back to MVS on the WTO macro and are not used to select messages to be processed.
Three selector types are required to activate this function: DDNAME, PGMNAME and MSGID.
Enter any description codes that should be added to the WTO macro. Multiple description codes may be entered separated by spaces.
For a list of acceptable values, see the IBM manual MVS Routing and Descriptor Codes.
Enter the DDNAME(s) that OS/EM will monitor. At least one DDNAME is required to have this function active.
Enter the JOBNAME to be monitored. Multiple jobnames separated by spaces may be entered.
Enter the MSGID that OS/EM will search for. At least one MSGID must be entered for this function to be active.
This MSGID may actually be any constant text string that always appears in the message. A column number may be entered to specify where OS/EM will begin to scan for the text string. If the Message Start field is not entered, OS/EM begins searching at column 1. Column 1 is defined as the first position after an ASA or machine control character or 3800 font selection character. A range may also be specified as xx:yy which indicates that the message must start in columns xx through and including column yy.
Note: Only one message may be specified on a line. If multiple messages are to be monitored simply enter multiple lines all using the selector type of MSGID.
Enter the PGMNAME(s) that OS/EM will monitor. At least one program name is required to have this function active. Multiple programs may be entered separated by spaces.
Enter any route codes to be passed to the WTO macro. Multiple route codes may be specified separated by spaces.
For a list of acceptable values, see the IBM manual MVS Routing and Descriptor Codes.
Quick Pool is a dynamic DASD pooling package for non-VSAM datasets and also enforces VSAM dataset placement as defined by the pooling rules.
|
Each of these paths is presented in the following sections:
The Control DASD allocation function consists of one panel (Figure 167). By specifying either a YES or NO, you control the following options:
Figure 167. Control DASD Allocation
|
The DASD Allocation Control function may be used to allow or disallow various allocation parameters.
Field entry is as follows:
Warn mode establishes the action of the QuickPool function. It is more fully explained in "WARN mode".
Specifies whether DASD and QuickPool controls are enforced.
When operating in WARN mode, WTO messages are generated stating the action which would have occurred if the controls were active.
Enter YES to enable this option; NO to disable this option.
Specifies whether ABSTR (ABSOLUTE TRACK) allocation will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
It is extremely unlikely that you would ever want to allow this function for DASD allocation. Old third-party software might be the only reason for this function.
Specifies whether CONTIG (contiguous space) allocation will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
Allowing contiguous space allocation can result in failed allocations if the volume is badly fragmented.
Specifies whether ALX allocation will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
This option is presented as last entered by you.
Specifies whether MXIG allocation will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
Specifies whether single-level dataset names will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
As a general rule, you should not allow single-level dataset names. Such datasets will be cataloged in the master catalog. The master catalog should contain only SYSRES volume datasets (usually SYS1 datasets) and ALIAS pointers to user catalogs.
Specifies whether ISAM dataset names will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
As a general rule, you should not allow ISAM datasets. Most products, such as CICS, no longer directly support this access method. Use the IIP (ISAM Interface Program) and convert such files to VSAM - time is running out.
Specifies whether unmovable datasets will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option.
Unmovable datasets are rare - most database software uses some sort of offset relative to the beginning of the dataset to find specific records. However, such files do exist. Be careful before enabling this option.
Specifies whether requests for datasets with the ADSP (Automatic Dataset Protection) attribute will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option; RES to turn off the ADSP for datasets allocated with this attribute.
Proper dataset protection via your security system should make use of this attribute superfluous. That is, datasets should be defined as generic, even if only a single dataset exists under the dataset name.
Specifies whether discrete RACF profiles will be allowed or disallowed.
Enter YES to enable this option; NO to disable this option; RES to delete the discrete attribute.
Proper dataset protection via your security system should make use of this attribute superfluous. That is, datasets should be defined as generic, even if only a single dataset exists under the dataset name.
WARN mode provides you with a way of observing your allocation rules without enforcing those rules. Each time a dataset allocation would violate a rule, the allocation proceeds and a WTO message is created. The WTO message will list the dataset that was allocated, and the dataset name group the dataset resolved to. With this information, you can determine why the allocation would have been disallowed if WARN mode had not been in effect.
WARN mode should be used before actually "turning on" the QuickPool function. Allocation rules can become complicated, and your installation probably would not want production jobs failing because a dataset could not be allocated. It is also quite likely that there are jobs that do not follow whatever allocation rules your installation currently has in place. And, since the QuickPool function really should be done without reference to specific volume serial number on DD statements, time will be required to alter all your installation's JCL.
The QuickPool function controls which datasets may, or may not, be allocated on which DASD volumes. It will also automatically place datasets on the correct volume if your jobs do not direct datasets to specific volumes (such directed allocations must still follow the rules you establish). You can create volume groups with certain performance objectives in mind and ensure that proper datasets are placed on these volumes. For example, some of your volumes may deliver better access times because of your hardware configuration. These volumes would be likely candidates for your online files where quick access can be critical. Or, you can create volume groups that will ensure that datasets with simultaneous, heavy access are properly separated. The effectiveness of the QuickPool function is determined by the volume and dataset name groups you have built.
Figure 168. QuickPool Functions
|
The QuickPool entry panel allows you to turn on or off QuickPool, specify that QuickPool will operate in Control mode, and create/update pools.
Field entry is as follows:
QuickPool is enabled by entering a YES (or Y), or disabled by entering a NO (or N).
This option establishes the QuickPool span of control. If set to YES, the QuickPool function controls all of your installation's DASD volumes, even if they have not been explicitly defined to the QuickPool function.
Enter NO if the QuickPool function will control only those volumes specifically defined by the POOL list.
If you specify CONTROL, volumes not explicitly defined by the POOL list will be controlled by the allocation rules you establish with the global ALLOW and DISALLOW lists. If you do not create such lists, volumes not resolvable to Pool lists will not have any datasets, other than SYS1 datasets, allocated to them.
The bottom portion of the QuickPool Functions panel is a scrollable list of the pools defined. There are two special pool names which control global allocations. The first is GLOBALA, where you may specify datasets which are always allowed on QuickPool volumes; the second is GLOBALD, where you specify datasets which are always disallowed. These two pool names may not be deleted or renamed. If they are not to be used, simply disable them by entering NO in the enable column.
Field entry is as follows:
There are four line commands available. They are:
The Pool list establishes an association between volumes and datasets. Each volume group may have 'either' an ALLOW list 'or' a DISALLOW list (these are subordinate to the global lists). If you do not define these lists, the volume group is strictly under the control of the global lists.
The ALLOW list specifies which datasets are allowed on volumes within the group. The DISALLOW list specifies which datasets are not allowed on volumes within the group.
The type of list you are creating for the volume group, ALLOW or DISALLOW, is specified during the ADD a new group process. Once you have chosen the type of dataset name group list to be associated with the volume group, it cannot be changed. The type of list associated with the volume group is displayed as part of the volume group POOL list display.
The Volume group must have been previously defined (see "Define Volume Groups").
ALLOW defines a global list of dataset name groups which apply to all your DASD volumes. Groups defined via this option are always allowed on a volume regardless of the volume's own specific ALLOW or DISALLOW lists. The current status of this option is always indicated.
As with any OS/EM list, if you specify NO, you are deleting the list; thus, NO has no meaning in an initialization member. It only has an effect when used via the OS/EM online function.
DISALLOW defines a global list of dataset name groups. Such groups are not allowed on a volume (SYS1 datasets cannot be excluded from initial allocation via this option).
The global ALLOW list (which has a pool name of GLOBALA) specifies datasets which are always allowed on volumes controlled by the QuickPool function. This list takes precedence over the global DISALLOW list and any private DISALLOW lists specified for a volume group.
SYS1 datasets are always initially allowed on any volume.
To update the DSN groups in the Global ALLOW pool, enter C in the CMD field. A list of DSN groups will be displayed (see Figure 169).
The global DISALLOW list (which has a pool name of GLOBALD) specifies datasets which are always disallowed on volumes controlled by the QuickPool function. This list takes precedence over any private ALLOW lists specified for a volume group.
To update the DSN groups in the Global DISALLOW pool, enter C in the CMD field. A list of DSN groups will be displayed (see Figure 169).
Figure 169. QuickPool Add/Delete "POPUP" screen
|
Field entry is as follows:
To add entries, enter an A in the CMD field and overtype the Dataset Name Group on that line (overtyping will not alter the old entry).
The Dataset Name Group name must already be defined (see "Define Dataset Name Groups") for the add to be successful. Duplicates will be rejected. Group names are kept in alphabetical order, and this is the search mode OS/EM uses in trying to resolve volume/dataset associations.
To delete entries, enter a D in the CMD field.
The DSN group must have been previously defined (see "Define Dataset Name Groups").
If an ALLOW list is created, one of two actions will occur when a dataset is resolved to the volume group. In the case of a non-directed dataset allocation, each dataset within the listed dataset name groups will be allocated on a volume within the volume group. In the case of a directed dataset allocation, dataset allocation on a volume within the volume group will only be allowed if the dataset is resolved to one of the listed dataset name groups.
If a DISALLOW list is created, no dataset within a dataset name group will be allocated, or allowed to be allocated, on any volume within the volume group.
If a volume group is created without an ALLOW or DISALLOW list, the group will be controlled by the global ALLOW or DISALLOW lists. If you do not create global ALLOW or DISALLOW lists, such volumes will not be eligible for allocation.
Note: SYS1 datasets are always initially allowed on any volume. However, you cannot rename a dataset to SYS1 unless you have specifically allowed SYS1 datasets on the volume.
OS/EM follows an explicit hierarchy when searching the ALLOW and DISALLOW lists for a dataset name match.
Dataset name lists established with the ALLOW and DISALLOW options are global and apply to all controlled and uncontrolled volumes. Any dataset name match resolved to either of these two lists stops the search, and the resulting allocation rule will be the one used. Any matches that might apply to a specific volume group will be ignored. For example, if a dataset name resolves to the global ALLOW list, and also to a specific volume group DISALLOW list, the allocation will be permitted. The reverse is also possible: a dataset allocation will not be permitted if it is in the global DISALLOW list and also in a specific volume group ALLOW list.
If no matches are found in the global ALLOW and DISALLOW lists, or you have not specified any global ALLOW or DISALLOW lists, the volume group ALLOW and DISALLOW lists are searched. The first match within any of these lists is the allocation rule used.
Further, within the global lists, ALLOW takes precedence over DISALLOW. That is, if a dataset name can be resolved to both an ALLOW list and a DISALLOW list (because of a dataset name specification), the dataset allocation will be allowed.
You might think it "normal" to place a given volume in only one group. However, consider the following situation:
The QuickPool function covers this situation by allowing the volume to be defined in two, or more, volume groups. Specific datasets can be allowed to the volume, but the volume will also be part of a volume pool which is used for the more general case.
Also consider the HSM Optimizer defragmentation process. You may wish to place a volume in one group that is "defrag'd" on a weekly basis. You might also want to place the volume in a group that has "emergency" defrag criteria.
The RACF Controls Menu provides access to the Discrete Profiles and External Tapes functions.
Figure 170. RACF Controls Menu
|
Each of these functions is presented in the following sections:
The RACF Discrete Profile provides the option to control who can create RACF discrete profiles. With DFSMShsm System Managed Storage, RACF discrete profiles are incompatible with dynamic storage groups because of the restriction the discrete profile carries, a specific volume serial.
Note: You must define the classes to be controlled to your security manager. Use the general resource class 'FACILITY' for RACF and ACF-2 or 'IBMFAC' for CA-Topsecret and a resource name of DISCRETE.PROFILE.name where 'name' matches the class name you are protecting. For class DATASET the resource name or profile would be DISCRETE.PROFILE.DATASET. Read authority is required to allow creation of the discrete profile.
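The RACF definitions for the DATASET class might be set up with commands similar to the following sketch. The user ID STGADMIN is a placeholder; adjust the commands for your security product and naming standards.

RDEFINE FACILITY DISCRETE.PROFILE.DATASET UACC(NONE)
PERMIT DISCRETE.PROFILE.DATASET CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH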
Figure 171. RACF Discrete Profiles Entry Panel
|
Field entry is as follows:
Enter YES to activate the control of discrete profile creation. Enter NO to deactivate this control.
Enter YES to enable warning messages to be sent to the person trying to create the discrete profile instead of failing the request.
Enter NORMAL to enable RACF standard logging, or NONE to disable RACF logging.
The bottom portion of the panel contains a scrollable area. There are two line commands available:
To add entries, enter an A in the CMD field and overtype the Class and Type Check fields (overtyping will not alter the old entry). The Class entry is for the RACF class to be protected. The Type Check field may contain either WARN or FAIL.
To delete entries, enter a D in the CMD field.
The External Tape function allows a user to read any tape dataset when the following criteria are met, thus bypassing the RACF PROTECTALL(FAIL) option:
Figure 172. RACF External Tape Entry Panel
|
Field entry is as follows:
Enter YES to activate this function, or NO to deactivate it.
Enter NONE to turn off RACF logging, or NORMAL to turn logging on.
Restrict Devices provides the option of reserving devices for critical jobs that must complete without waiting for devices to become available. The device is reserved and only the Jobnames that are specified with this option will be able to use the device. Even Operator VARY device commands will not make a restricted device available.
|
Field entry is as follows:
To add entries, enter an A in the CMD field and overtype the Device Range; both entries are required. (Overtyping will not alter the old entry.)
To delete entries, enter a D in the CMD field.
Figure 174. Restrict Device With Jobname "POPUP" Screen
|
Field entry is as follows:
To add entries, enter an A in the CMD field and overtype the Jobname or Jobname mask (overtyping will not alter the old entry).
To delete entries, enter a D in the CMD field.
The following table shows the allowable mask characters:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
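For example (the jobnames shown are hypothetical):

PROD%%-    matches PROD01BKP     does not match PRODAB1
PAY&&&-    matches PAYABC1       does not match PAY123X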
The SVC DELETE/REPLACE function allows you to delete an SVC so it cannot be executed, or optionally replace it with your own program.
Upon entry to this function you are presented with a scrollable list of the SVCs that have previously been entered, or a blank entry.
Figure 175. SVC Delete/Replace Controls
|
The fields and their meaning are:
Three line commands are available:
A - Add new entry. You can use the A command on any line. When you press enter, a blank data entry panel will be displayed (see Figure 176) to allow you to enter the required information.
D - Delete an existing entry.
S - Select for update. When selected, a data entry panel (see Figure 176) will be displayed to allow information about the SVC to be updated.
The number of the SVC being deleted/overridden.
The function being performed, either DELETE or REPLACE.
The program name which will be loaded in place of the original SVC.
The library name the SVC replacement will be loaded from.
The data entry panel contains several required fields depending on the type of SVC being replaced.
Figure 176. SVC Delete/Reload Entry panel
|
The fields and their meaning are:
If you are adding a new entry, this field is unprotected and you must enter the number of the SVC you are deleting or replacing. If you are updating an existing entry, this field is protected.
Enter the function you wish to perform. If it is delete, no other field should be entered.
Enter the name of the program which will replace the existing SVC.
You must specify the type of SVC you are replacing. Specify 1, 2, 3, 4 or 6.
Specify the type of lock your program needs. If the SVC type is '1', the LOCAL lock is not allowed. If the type is '6', neither type of lock is allowed. If the type is '2, 3 or 4' you MUST specify LOCAL if CMS is specified.
If only APF authorized programs should be allowed to execute this SVC, enter YES.
Enter YES if the SVC replacement should be accessed in Access Register mode.
Enter YES to allow the system to preempt your program to handle I/O.
Enter YES to have the User CVT field cleared before the SVC is executed.
Enter the name of the library where the SVC replacement program resides. If this field is left blank, the standard search routines are used to locate the load module. The library name should be enclosed in single quotes (').
The Tape Share option allows you to define tape drives to OS/EM which will then control the devices by automatically issuing the VARY commands needed to put the drive offline on one system and online on the system where it is needed. No operator intervention is required.
Note: Since Tape Share controls whether a device is online or offline, we suggest that you configure all drives defined to Tape Share to be offline at IPL time.
This optional feature of OS/EM requires a started task to be running on each system sharing tape devices. A sample of the procedure to execute the started task may be found in the OS/EM SAMPLIB in member name OS$TPSHR. The name of the started task must remain OS$TPSHR as OS/EM will issue a start command for this name at IPL time.
Figure 177. Tape Share Controls Menu
|
This menu contains entries for both system level and device level control information. Both entries must be selected to initially set up Tape Share Controls.
Figure 178. System Level Controls Panel
|
Use this panel to enter non-specific information about Tape Share Controls. The information on this panel is required before any controls dealing with specific devices become effective.
Field entry is as follows:
Enter YES to turn on Tape Share, or NO to disable Tape Share on the current system.
Specify the action to be taken when Tape Share Controls are deactivated. Enter one of the following:
This will cause OS/EM to issue a VARY OFFLINE,GLOBAL command for all active devices in the tape share pool on every system which has Tape Share Controls active.
This will cause OS/EM to wait until every device in the pool becomes free before disabling Tape Share Controls on the current system.
This will cause OS/EM to immediately remove the devices from the pool on all systems.
Specify the action to be taken if resources are unavailable and the job is placed into a wait state. Specify one of the following:
Retain all resources currently allocated.
Any resources currently allocated when the job is placed into a wait may be released and allocated to another task.
Cause the devices selected on the Device Level Controls panel to be taken offline at IPL time.
The OFFLINE command will only affect the system the command is executed on. Other systems will not be affected.
The OFFLINE command will affect all systems sharing the specified devices.
Cause the devices selected on the Device Level Controls panel to be marked available at IPL time.
The ONLINE command will only affect the system the command is executed on. Other systems will not be affected.
The ONLINE command will affect all systems sharing the specified devices.
Note: Does not issue a VARY command to bring the device(s) online.
Specify the dataset name which will be used as the communications dataset. This file must be on shared DASD available to all systems sharing tape drives. Enter the dataset name using standard TSO naming conventions, i.e. if apostrophes do not enclose the dataset name, your TSO ID will be appended to the beginning of the entered name.
Note: The dataset must have the following attributes:
RECFM=F,LRECL=29080,DSORG=PS
One track will be sufficient space.
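A dataset with these attributes could be pre-allocated with JCL similar to the sketch below. The job card, dataset name and volume serial are placeholders; substitute your own values.

//TAPEALC  JOB (ACCT),'ALLOC TAPESHARE',CLASS=A,MSGCLASS=X
//* Pre-allocate the Tape Share communications dataset on shared DASD
//ALLOC   EXEC PGM=IEFBR14
//COMMDS  DD DSN=OSEM.TAPSHARE.COMMDS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,VOL=SER=SHARED1,SPACE=(TRK,(1)),
//            DCB=(DSORG=PS,RECFM=F,LRECL=29080,BLKSIZE=29080)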
Enter either YES or NO depending upon your need to specify system priorities. If you specify YES here, you must also enter the system names in the scrollable portion of the panel.
If you have previously specified system names to set priorities, you do not need to remove them when changing this response to NO.
The bottom portion of this panel contains a scroll area. This is used to store the system names/IDs in priority order. Initially this list is empty and you must use the Insert primary command to enter the first entry.
There are two line commands available:
Use this line command to delete an entry no longer needed.
This will cause a popup window to open for entry of the system ID and an optional description of the system. After the fields are completed, press the enter key to have the new system added after the line where the insert line command was entered.
Figure 179. Device Level Controls Panel
Use this panel to enter information about specific tape device addresses. The system level controls must be completed for this information to be used.
Field entry is as follows:
After the device addresses are in the scrollable list, you may assign alias entries, assign a preference order of use, and specify the mode the device will be placed in at IPL time. To change any of these entries, simply overtype the field to be added or changed and press the Enter key.
Following is an explanation of each of these optional items.
Alias | Used if all systems do not refer to a device with the same address. In this case, a global name is assigned that all systems will use, and the machine which has a different address will use the ALIAS keyword to bind the local address to the global address. |
Preference Order | The order in which the drives will be assigned. Lowest number to highest number. It is suggested that you enter numbers at least 10 apart to allow other drives to be added into the order at a later date. |
IPL Mode | Enter either ONLINE or OFFLINE to specify the mode the drives will be placed in at IPL time. If specified, the global or local option entered on the systems controls menu will be used for that address. |
Line Commands | Use the Delete command to remove devices from Tape Share's control. The Delete command will wait for the device(s) to become unallocated. Use the Force Delete command to remove the device from Tape Share's control immediately. Tape Share will not wait for the device to become unallocated. |
Time Controls provides options for enforcing CPU time limitations; extending CPU time, JOB wait time, and TSO wait time; and controlling, by job class, the insertion of a missing time parameter, overriding the time specified on the jobcard, or canceling the job if the job time is greater than the JES2 time value.
Figure 180. Time Controls Menu
Each of these paths is presented in the following sections:
The JOB TIME function allows you to control by job class the insertion of a missing time parameter, override the time specified on the jobcard or optionally cancel the job for incorrect time values.
You also have the option of cancelling all jobs coming into the system which do not have a time specified. This option takes precedence over the Insert Missing by job class function.
For each job class selected the following processing occurs:
The fields and their meaning for this input panel are:
This field allows you to turn off Job Time Controls without having to disable each function type individually.
When this option is selected, any job trying to increase its allowable job time will be flushed instead of having its job time simply reset.
This option ensures that every job coming into the system contains a time parameter, regardless of the class the job is submitted in. When this option is active, if the time parameter is missing, the job is flushed. This option takes precedence over the Insert Missing by class option.
The above example shows that all classes will have their job time set if the time parameter is missing on the jobcard; classes A, B, C, D and E will reset the time if the jobcard has TIME=MAXIMUM coded; classes N, O and P will be reset if the job has TIME=NOLIMIT or TIME=1440; classes 4, 5 and 6 will have their time reset if the time parameter is greater than that specified in JES2; class X will be reset only if the job's time parameter is set lower than that specified in JES2.
The '#' symbol represents a class available for use.
Two line commands are available:
S | Select for update; use this to change the Active indicator from YES to NO. |
C | Class set/modify; when you use this line command, a popup window will open (see Figure 182) allowing you to specify or change the classes for the selected function. On the popup window, you may specify the classes in a range, i.e. A:D will activate the function for classes A, B, C and D. |
Figure 182. Job Time Controls Pop-up window.
OS/EM provides support to extend execution time at both the job and step level. It also allows you to extend wait time for batch jobs, TSO users and/or terminals.
For both step and job CPU time, you may specify individual job classes or all classes to be given the default time extension. You can also specify, by class, a time extension that is different from the default time. You may also request that OS/EM issue a WTO every time an extension is given.
To ensure that a job is not overlooked while extensions are being given, a WTOR may be issued every 1 to 99 times an extension is granted.
Wait time extensions may be granted by job class, and for TSO activity, by user ID, terminal ID and active hours by day of week.
If both job and step time has been exceeded at the same time, it is unpredictable which indicator will be presented to OS/EM first. Because of this, it is suggested that you set up extensions for both job and step CPU times.
Figure 183 shows where RACF information, default time information and selection weights are entered.
Figure 183. System Level Controls
The fields and their meanings are as follows:
Enter YES or NO to activate or inactivate extension controls.
There are 5 weight classes:
Enter a number from 1 to 9 for each selection criteria type. The higher the number the more weight that is given to that selection type. In other words, if a job that is running matches multiple extension groups, the group with the highest weight will take effect.
The defaults block is where you can specify extension time that will take effect if no extension group matches a given job or userid.
Specify the number of seconds a job and/or step should be given and the number of minutes of wait time a job/user/terminal may receive.
To have a WTO issued each time an extension is granted, enter YES here.
To have a WTOR issued to allow the operator to either cancel a job or allow another extension, enter the number of extensions that may be granted before a WTOR is issued. Enter a number from 1 to 99.
Note: While OS/EM is waiting for an operator response the job will continue to execute. This was a design decision by IBM.
The RACF block is where you can specify extension time that will take effect for any job/user/terminal which matches an entry in the specified RACF resource. This block takes precedence over the normal defaults or an extension group.
The Time, WTO and WTOR fields are identical to the defaults block. The RACF block has two additional fields:
Enter either NORMAL or NONE to control RACF logging.
Enter the name of the resource you have assigned to control Time Extensions. This resource must be defined in the general resource FACILITY class for RACF and CA-ACF2 or the IBMFAC class for CA-Topsecret. Users to be given extensions must have READ access to this profile.
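For example, if the site-chosen resource name were OSEM.TIMEEXT (a hypothetical name), the RACF definitions might look like the following sketch; the group PRODBAT is also a placeholder. Equivalent rules would be coded in CA-ACF2 or CA-Topsecret terms at those installations.
RDEFINE FACILITY OSEM.TIMEEXT UACC(NONE)                        Define the resource (name is a placeholder)
PERMIT OSEM.TIMEEXT CLASS(FACILITY) ID(PRODBAT) ACCESS(READ)    Grant READ to the placeholder group PRODBAT
SETROPTS RACLIST(FACILITY) REFRESH                              Refresh the FACILITY class if it is RACLISTed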
The Time Extension Controls panel lists the 32 available extension groups. Each group will have at least one list of job classes, job names, program names, terminal IDs, or time of day to control which extension is granted to whom.
Extension 1 shown in Figure 184 is active. A job level time extension of 45 seconds will be granted. Each extension will have a message written to the operator. After 5 extensions, a WTOR will be issued to allow the operator to cancel the job if necessary. A step level extension of 15 seconds will also be given. Again a WTO will be issued at each extension and a WTOR will be issued after 2 extensions. Wait time will be extended by 30 minutes, and a WTO will not be issued when the extension is granted.
There are 5 active selection lists for Extension 1:
There are two line commands available: S to add/update selector entries and D to remove all selector entries and clear the selection list of time and WTO values.
When the line command is processed, another panel (Figure 185) is displayed where you may select any of the lists for update as well as change the list type between include and exclude. The Days list is always an include list.
Figure 185. Selector Entry Panel
The selector entry panel allows you to enter the criteria that will be used to select a job that will have its time adjusted.
There are five types of selectors:
JOBCLASS | Enter the jobclasses that the job being evaluated must match if this is an include type list, or the classes which the job must not match for an exclude list. The jobclasses may be entered individually, separated by spaces, or as a range where the beginning class and ending class are separated by a colon (:). |
JOBNAME | Enter any jobnames either as complete names or as a jobname mask. Separate the names or masks with a space. Enter as many names/masks as will fit on the line. If additional names need to be entered simply insert a blank line and enter the selector type as JOBNAME and continue entering names or masks. |
PGMNAME | Enter any program names either as complete names or as a program name mask. Separate the names or masks with a space. Enter as many names/masks as will fit on the line. If additional names need to be entered simply insert a blank line and enter the selector type as PGMNAME and continue entering names or masks. |
TERMINAL | Enter any terminal IDs as the complete ID or as a mask. Separate the IDs or masks with a space. Enter as many IDs/masks as will fit on the line. If additional IDs need to be entered simply insert a blank line and enter the selector type as TERMINAL and continue entering IDs and or masks. |
Day of Week | Use the day of the week for which you need to specify a time range as the selector type; then, in the Selector Names/Mask List field, enter the beginning and ending times separated by a colon (:) using 24-hour time format. Only one time range is allowed per day. For example, if you want the control to be active on Monday between 8AM and 5PM, enter the selector type as MONDAY and then enter 0800:1700 in the list field. |
The QUERY function displays the state of the OS/EM environment.
The ALL and ACTIVE options are mutually exclusive. If you select no options, ACTIVE is assumed.
The output of the QUERY command is written to a dataset that is then displayed with the ISPF BROWSE function. When you have finished viewing the output, press the END key to exit the browse function.
The SMF, TSO, DASD, ALLOC, RACF, JES2, JES3, HSM and ISPF options can be limited to a specific exit point by entering its name in the supplied field. If the exit does not exist, an error will be returned.
To limit the display of JES2 exit information, you may enter the name of the JES2 subsystem to be displayed in the JESNAME field. If left blank, all JES2 subsystems will be displayed.
The POOL display may be limited to Data Set Name Groups (DSN), Volume Name Groups (VOL), or QuickPool definitions (POOL).
Figure 186. OS$CNTL Query Command
To select a QUERY option, from specific to ALL, enter an S before the QUERY option.
When the query has completed you will be placed into BROWSE to view the results. Below is a sample of the first page of generated output for the SMF IEFACTRT exit.
Figure 187. Sample Query Output
Use of this screen will result in the generation of RELOAD commands for the optional OS/EM control functions, and for the various OS/EM interface modules.
You would not normally reload these modules unless you had applied maintenance to the OS/EM system. Instructions with the OS/EM maintenance tape will indicate if any of these modules have to be reloaded.
Figure 188. Module Type Reload Selection
Each of these paths is presented in the following sections:
The JES2 Reload Selection panel displays all of the active JES2 exits. Scroll through this list to find the exit(s) which need to be reloaded.
The list of user exits displayed is controlled by the active JES2 subsystem. Please use Option 6 - Set JES name to select the active JES2 subsystem name. (See "Set JES2 Name")
Figure 189. JES2 Reload Selection
The scrollable list presented shows all the primary and backup user exits which have been defined using the ISPF Interface. Select any exit which needs to be reloaded by placing an S before the exit name. You may optionally specify or change the load library where the exit to be reloaded resides. The library name should be enclosed in single quotes (').
After pressing the enter key to register your selections, the exits are removed from the display.
If an exit has been selected in error, enter the cancel command to exit the selection list without reloading the exits.
The reload commands are generated and executed when you exit the selection list.
Note: If you need to reload an exit which has not been defined to the ISPF Interface, you will need to define it before it will be displayed in this list. This includes exits which OS/EM finds and loads at IPL time. You may wish to use the REBUILD function (see "Rebuild OS/EM Tables") which will find and add all currently loaded user exits to the interface.
The JES3 Reload Selection panel displays all of the active JES3 exits. Scroll through this list to find the exit(s) which need to be reloaded.
Figure 190. JES3 Reload Selection
When an exit is found, enter an S in front of the module name and press enter. You may change or add the library name. The library name should be enclosed in single quotes (').
Changes made here are not saved. To make any needed changes permanent, make the changes through the "Basic Exit Functions".
If an exit has been selected in error, enter the cancel command.
The reload commands are generated and executed when you exit the selection list.
Note: If you need to reload an exit which has not been defined to the ISPF Interface, you will need to define it before it will be displayed in this list. This includes exits which OS/EM finds and loads at IPL time. You may wish to use the REBUILD function (see "Rebuild OS/EM Tables") which will find and add all currently loaded user exits to the interface.
The MVS Reload Selection panel displays all of the active MVS exits. Scroll through this list to find the exit(s) which need to be reloaded.
Figure 191. MVS Exit Reload Selection
When an exit is found, place an S in front of the module name and press ENTER. You may add or change the load library where the module to be reloaded resides. The library name should be enclosed in single quotes (').
Changes made here are not saved. To make any needed changes permanent, make the changes through the "Basic Exit Functions".
If an exit has been selected in error, enter the cancel command.
The reload commands are generated and executed when you exit the selection list.
Note: If you need to reload an exit which has not been defined to the ISPF Interface, you will need to define it before it will be displayed in this list. This includes exits which OS/EM finds and loads at IPL time. You may wish to use the REBUILD function (see "Rebuild OS/EM Tables") which will find and add all currently loaded user exits to the interface.
The System Reload Selection panel displays all of the active System exits. Scroll through this list to find the exit(s) which need to be reloaded.
Figure 192. System Reload Selection
When an exit is found, enter an S in front of the module name and press ENTER.
If an exit has been selected in error, enter the cancel command which will cancel all system reload processing. Reselect the OS/EM System Modules to respecify the correct system modules to be reloaded.
The RACF Tables Reload Selection panel displays the three RACF Tables available to be reloaded.
Figure 193. RACF Table Reload Selection
To cause a table to be reloaded, place an S in front of the module name. If you are replacing the table with another module, enter the module name in the field provided. If you just want to reset the module, leave the replacement module name blank and enter YES in the reset field.
An optional library name may also be specified. This library must be APF authorized. The library name should be enclosed in single quotes (').
OS/EM requires the name of the JES2 subsystem it is to control. Since you may have several JES2 subsystems, use this function to specify those names.
If you plan to use the Password function of JES2 Exit 2, you need to specify a free User PCE field to ensure that OS/EM does not overwrite data that one of your in-house exits might be using.
Figure 194. JES2 Subsystem Names
This panel shows:
There are two line commands available on this panel:
S | Use S to select the active subsystem. All OS/EM entries processed through the interface which affect JES2 will be processed against this subsystem name. |
D | Use D to delete an incorrect or obsolete name. |
The Version, Primary JES, PCE, AutoInstall and Subsystem Description fields may be changed by simply overtyping whatever is there. The PCE field will only accept a value of 0 or 1.
The one primary command available on this panel is A, which is used to add a new subsystem. When A is entered on the command line and the Enter key is pressed, the following panel is displayed.
Figure 195. Add JES2 Subsystem
All fields with the exception of the description field are required.
As you make changes to the OS/EM parameters through the ISPF interface, a record is kept of the parameters changed.
When you select Pending Changes from the OS/EM Primary Option Menu, you are presented with a scrollable list of the changes you have made. The items are presented in reverse date/time order so that the most current changes are displayed first.
This information is dropped either when you use the Pending Changes Maintenance function (see "Pending Changes Table Maintenance") or the Rebuild function (see "Rebuild OS/EM Tables".)
Figure 196. Execute Pending Changes - Review/Execute
Field entry is as follows:
Four line commands are available, B for Browse changes, E for Execute online, S to process the changes in batch and R for Reset.
When you enter a B, you are presented with detail information about the changes made (see Figure 197).
Entering an E, marks the Control Type for execution at the end of your current Pending Changes session.
Entering an S, marks the Control Type for execution in batch at the end of your current Pending Changes session.
If you decide that you have marked a Control Type in error, enter CAN or CANCEL on the command line to cancel all pending executions.
Use the R or RESET line command if you need to remove the EXE flag to allow the changes to be re-executed.
Control Type gives you the name of the function changed, i.e., if you made a change to your dataset name groups, the Control Type would be displayed as DATASET NAME GROUPS MAINTENANCE.
The EXE field will show INIT if the last action was the building of the initialization member, EXE if the last action was to execute the changes or either ONL or SUB if you have marked the item to be executed at the end of your pending changes session.
The Last Updated field displays the TSO ID of the person making the change, the time the change was made and the date.
The Last Action field gives you an idea of what the last change was.
Figure 197. Pending Changes - Detail
If you have selected changes to be executed via a batch job you will be presented with the following panel when exiting.
Figure 198. Execute Pending Changes In Batch
This is a scrollable entry panel where you may enter up to 32 systems where batch jobs will be routed for execution.
This can be useful if you know that the changes you have made are required on several systems.
To add system IDs or node names simply enter the name in the field provided and optionally enter a description to identify it.
Note: You may enter either the system ID for SYSAFF or an XEQ node name but not both. If you have OS/EM job routing active the XEQ node may be a resource name available on the system where the job should execute. Any entry that has a non-blank SYSAFF system ID will have JCL generated with a jobparm:
/*JOBPARM SYSAFF=sysaff
While an entry that has a non-blank XEQ node will have JCL generated with a route card:
/*ROUTE XEQ node
WARNING: OS/EM job routing has the ability to change any SYSAFF statements to SYSAFF=ALL, so if you are using OS/EM job routing it is suggested that you use the XEQ Node field and enter a valid resource name.
To select the systems where a batch job will be routed, enter an S in the S column and press the Enter key. Alternatively, enter YES in the 'select ALL IDs' field to have JCL generated for jobs on all the systems listed.
When executing changes in batch a special member is created in the OS/EM PARMLIB with the name SUBPEND. This member contains OS$CNTL commands for all of the changes selected. This PARMLIB must be available on each system selected.
Once each system is selected, pressing the END key will present you with a job card skeleton which you must update with any information required by your shop, e.g., account number information.
Figure 199. Pending Changes Job Card
Pressing PF3 or the END key will cause all needed jobs to be submitted.
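For illustration, a job card completed with hypothetical shop values might look like the following; the job name, accounting data and classes are placeholders to be replaced with your installation's standards.
//OSEMPEND JOB (1234,DEPT01),'OSEM PENDING CHGS',CLASS=A,
//         MSGCLASS=X,NOTIFY=&SYSUID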
Depending on the number of specifications you have entered, the process to Build Initialization Members may take some time; therefore, a panel is presented letting you know the process has been started (see Figure 201). You will be told if the build has been successful or not.
Figure 200. Build Initialization Members
The output of the build is placed in the OS/EM PARMLIB whose name is constructed from the site variables which were defined in the OS$START command during OS/EM installation. This dataset must be available to OS/EM during its startup process as the started task OSV6 uses it as input.
Member names are indicative of the function: for example, HSMINIT is the initialization member for DFHSM support.
Figure 201. Build Initialization Members - Status
The following is a list of SMF, TSO, ISPF, JES2, JES3, RACF, Allocation and DFP exits that OS/EM currently supports. The standard support manages the loading and execution of up to three user exits, and optionally an OS/EM exit that provides the Extended support. The listed usage may not cover all the conditions the exit can handle; it is only suggestive of the common use.
IEFALLOD | Allocated/Offline Device Exit |
IEFALLSW | Specific Waits Exit |
IEFALLVE | Volume Enqueue Exit |
IEFALLVM | Volume Mount Exit |
IEFDB401 | Allocation Input Validation Exit (SVC99) |
IGGPRE00 | DADSM Pre-processing for Allocate, Extend, Scratch, Partial Release and Rename |
IGGPOST0 | DADSM Post-processing for Allocate, Extend, Scratch, Partial Release and Rename |
ARCADEXT | Data Set Deletion Exit |
ARCBDEXT | Data Set Backup Exit |
ARCBEEXT | ABARS Backup Error Exit |
ARCCBEXT | Control Data Set Backup Exit |
ARCCDEXT | Data Set Reblock Exit |
ARCCREXT | ABARS Conflict Resolution Exit |
ARCEDEXT | ABARS Expiration Date Exit |
ARCINEXT | Initialization Exit |
ARCMDEXT | Data Set Migration Exit |
ARCMMEXT | Second-Level Migration Data Set Exit |
ARCMVEXT | Space Management Volume Exit |
ARCM2EXT | ABARS Migration Level 2 Data Set Exit |
ARCRDEXT | Recall Exit (Not valid for SMS Managed Data Sets) |
ARCRPEXT | Recall/Recover Priority Exit |
ARCSAEXT | Space Management and Backup Exit |
ARCSDEXT | Shutdown Exit |
ARCSKEXT | ABARS Data Set Skip Exit |
ARCTDEXT | Tape Data Set Exit |
ARCTEEXT | Tape Ejected Exit |
ARCTVEXT | Tape Volume Exit |
Exit 1 | ISPF initialization |
Exit 2 | ISPF termination |
Exit 3 | SELECT service start |
Exit 4 | SELECT service end |
Exit 5 | TSO command start |
Exit 6 | TSO command end |
Exit 7 | LIBDEF service |
Exit 8 | RESERVE |
Exit 9 | RELEASE |
Exit 10 | Logical screen start |
Exit 11 | Logical screen end |
Exit 12 | ISPF/PDF service start |
Exit 13 | ISPF/PDF service end |
Exit 14 | SWAP logical screens |
Exit 15 | DISPLAY service start |
Exit 16 | Log, list, and temporary data set allocation |
Exit 0 | Pre-initialization |
Exit 1 | Print/Punch Separators |
Exit 2 | Job Statement Scan |
Exit 3 | Job Statement Accounting Field Scan |
Exit 4 | JCL and JES2 Control Statement Scan |
Exit 5 | JES2 Command Preprocessor |
Exit 6 | Converter/Interpreter Text Scan |
Exit 7 | JCT Read/Write (JES2) |
Exit 8 | Control Block Read/Write (User) |
Exit 9 | Job Output Overflow |
Exit 10 | $WTO Screen |
Exit 11 | Spool Partitioning Allocation ($TRACK) |
Exit 12 | Spool Partitioning Allocation ($STRAK) |
Exit 13 | TSO/E Interactive Data Transmission Facility Screening and Notification |
Exit 14 | Job Queue Work Select - $QGET |
Exit 15 | Output Data Set/Copy Select |
Exit 16 | Notify |
Exit 17 | BSC RJE SIGNON/SIGNOFF |
Exit 18 | SNA RJE SIGNON/SIGNOFF |
Exit 19 | Initialization Statement |
Exit 20 | End of Input |
Exit 21 | SMF Record |
Exit 22 | Cancel/Status |
Exit 23 | FSS Job Separator Page (JSPA) Processing |
Exit 24 | Post-initialization |
Exit 25 | JCT Read (FSS) |
Exit 26 | Termination/Resource Release |
Exit 27 | PCE Attach/Detach |
Exit 28 | Subsystem Interface (SSI) Job Termination |
Exit 29 | Subsystem Interface (SSI) End-of-Memory |
Exit 30 | Subsystem Interface (SSI) Data Set Open and RESTART |
Exit 31 | Subsystem Interface (SSI) Allocation |
Exit 32 | Subsystem Interface (SSI) Job Selection |
Exit 33 | Subsystem Interface (SSI) Data Set Close |
Exit 34 | Subsystem Interface (SSI) Data Set Unallocation |
Exit 35 | Subsystem Interface (SSI) End-of-Task |
Exit 36 | Pre-security Authorization Call |
Exit 37 | Post-security Authorization Call |
Exit 38 | TSO/E Receive Data Set Disposition |
Exit 39 | NJE SYSOUT Reception Data Set Disposition |
Exit 40 | Modifying SYSOUT Characteristics |
Exit 41 | Modifying Output Grouping Key Selection |
Exit 42 | Modifying a Notify User Message |
Exit 43 | Transaction Program Select/Terminate/Change |
Exit 44 | JES2 Converter Exit (Main Task) |
Exit 45 | Pre-SJF Exit Request |
Exit 46 | Transmitting a NJE Data Area |
Exit 47 | Receiving a NJE Data Area |
Exit 48 | Subsystem Interface (SSI) SYSOUT Data Set Unallocation |
Exit 49 | Job Queue Work Select |
IATUX01 | Reserved Name |
IATUX02 | Reserved Name |
IATUX03 | Examine or Modify Converter/Interpreter Text created from JCL |
IATUX04 | Examine the Job Information from the JCL |
IATUX05 | Examine the Step Information from the JCL |
IATUX06 | Examine DD Statement Information from the JCL |
IATUX07 | Examine or Substitute Unit, Type and Volume Serial Information |
IATUX08 | Examine Setup Information |
IATUX09 | Examine Final Job Status, JST and JVT |
IATUX10 | Generate a Message |
IATUX11 | Inhibit Printing of the LOCATE Request or Response |
IATUX14 | Job Validation/Restart LOCATE Request or Response |
IATUX15 | Scan an Initialization Statement |
IATUX16 | Reserved Name |
IATUX17 | Define Set of Scheduler Elements |
IATUX18 | Check Input Authority Level for Consoles |
IATUX19 | Examine or Modify Temporary OSE Data |
IATUX20 | Examine or Modify Data Written on Job Header Pages |
IATUX21 | Create and Write Data Set Headers for Output Data Sets |
IATUX22 | Examine or Alter the Forms Alignment |
IATUX23 | Examine or Modify Data Written to Trailer Pages |
IATUX24 | Examine the Net-id and Devices Requested |
IATUX25 | Examine or Modify Volume Serial Number |
IATUX26 | Examine MVS Scheduler Control Blocks |
IATUX27 | Examine or Alter the JDAB, JCT and JMR |
IATUX28 | Examine the Accounting Information as Provided by the Job Statement |
IATUX29 | Examine the Accounting Information as Provided by the JCT, JDAB and JMR |
IATUX30 | Examine Authority Level for TSO/E Terminal Commands |
IATUX31 | Examine or Modify Destination or Message Text |
IATUX32 | Override the DYNALLDSN Initialization Statement |
IATUX33 | JES3 Control Statement and the JCL EXEC Statement Installation Exit |
IATUX34 | JCL DD Statement User Exit and the JCL EXEC Statement Installation Exit |
IATUX35 | Validity Check Network Commands |
IATUX36 | Collect Accounting Information |
IATUX37 | Modify the JES3 Networking Data Set Header |
IATUX38 | Change the SYSOUT Class for Networking Data Sets |
IATUX39 | Modify the Data Set Header for a SYSOUT Data Set |
IATUX40 | Modify Job Header |
IATUX41 | Determine the Disposition of Job Over JCL Limit |
IATUX42 | TSO/E Interactive Data Transmission Facility Screening and Notification |
IATUX43 | Modify Job Header Segments |
IATUX44 | Examine and Modify the JCL |
IATUX45 | Examine and Modify the Data Sent to an Output Writer FSS |
IATUX46 | Select Processors Eligible for Converter/Interpreter Processing |
IATUX47 | Reserved Name |
IATUX48 | Override Operator Modification of Output Data Sets |
IATUX49 | Override Address Selected for Converter/Interpreter Processing |
IATUX50 | Process User Defined BSIDMOD Codes for Converter/Interpreter |
IATUX56 | Authorize JES3 Commands Entered Through BDT |
IATUX57 | Select a Single WTO Routing Code for JES3 MGSROUTE |
IATUX58 | Modify Security Information Before JES3 Security Processing |
IATUX59 | Modify Security Information After JES3 Security Processing |
IATUX60 | Determine Action to take when a TSO/E User is Unable to Receive a Data Set |
IATUX61 | During MDS Processing, Choose Whether a Job Should be Cancelled or Sent to the Error Queue |
IATUX62 | Overrides the Decision to Accept a Tape or Disk Mount |
IATUX66 | Assigns Transmission Priority to a SNA/NJE Data Stream |
IATUX67 | Determines Action when Remote Data Set is Rejected by RACF |
IATUX68 | Modify Local NJE Job Trailers |
IATUX69 | Determine If a Message is to be Sent to the JES3 Global Address Space |
IATUX70 | Perform Additional Message Processing |
IATUX71 | Modify a Tape Request Setup Message |
IATUX72 | Examine/Modify a Temporary OSE or an OSE Moved to Writer Queue |
IATUX73 - IATUX99 are provided for future compatibility, allowing for the specification of the Linkage Types.
ICHCCX00 | RACF password |
ICHCNX00 | RACF password |
ICHDEX01 | RACF password encryption |
ICHPWX01 | New Password exit |
ICHRCX01 | RACROUTE REQUEST=AUTH Preprocessing |
ICHRCX02 | RACROUTE REQUEST=AUTH Postprocessing |
ICHRDX01 | RACROUTE REQUEST=DEFINE Preprocessing |
ICHRDX02 | RACROUTE REQUEST=DEFINE Postprocessing |
ICHRFX01 | RACROUTE REQUEST=FASTAUTH Preprocessing |
ICHRFX02 | RACROUTE REQUEST=FASTAUTH Postprocessing |
ICHRFX03 | RACROUTE REQUEST=FASTAUTH Preprocessing |
ICHRFX04 | RACROUTE REQUEST=FASTAUTH Postprocessing |
ICHRIX01 | RACROUTE REQUEST=VERIFY Preprocessing |
ICHRIX02 | RACROUTE REQUEST=VERIFY Postprocessing |
ICHRLX01 | RACROUTE REQUEST=LIST Pre/Postprocessing |
ICHRLX02 | RACROUTE REQUEST=LIST Selection |
IRRACX01 | ACEE Compression/Decompression Exit |
IRRACX02 | ACEE Compression/Decompression Exit |
IRREVX01 | RACF Common Command Exit |
ICHRTX00 | MVS Router |
IRRSXT00 | SAF Callable Services Router |
IEFACTRT | SMF Job/Step Termination Exit |
IEFUJI | Job Initiation Exit |
IEFUJP | Job Purge Exit |
IEFUJV | Job Validation Exit |
IEFUSI | Step Initiation Exit |
IEFUSO | SYSOUT Limit Exit |
IEFUTL | Time Limit Exit |
IEFU29 | SMF Dump Exit |
IEFU83 | SMF Record Exit |
IEFU84 | SMF Record Exit |
IEFU85 | SMF Record Exit |
ICQAMFX1 | Application Manager Function Pre-initialization |
ICQAMFX2 | Application Manager Function Post-termination |
ICQAMPX1 | Application Manager Panel Pre-display |
ICQAMPX2 | Application Manager Panel Post-display |
IEEVSNX0 | OPER SEND subcommand Initialization |
IEEVSNX1 | OPER SEND subcommand Pre-display |
IEEVSNX2 | OPER SEND subcommand Pre-save |
IEEVSNX3 | OPER SEND subcommand Failure |
IEEVSNX4 | OPER SEND subcommand Termination |
IKJADINI | ALTLIB Initialization |
IKJADTER | ALTLIB Termination |
IKJCNXAC | CONSOLE Activation |
IKJCNXCD | CONSPROF Pre-display |
IKJCNXCI | CONSPROF Initialization |
IKJCNXCT | CONSPROF Termination |
IKJCNXDE | CONSOLE Deactivation |
IKJCNXPP | CONSOLE Pre-parse |
IKJCNX50 | CONSOLE 80% Message Capacity |
IKJCNX64 | CONSOLE 100% Message Capacity |
IKJCT43I | EXEC Initialization |
IKJCT43T | EXEC Termination |
IKJCT44B | Add Installation-written CLIST Built-in Functions |
IKJCT44S | Add Installation-written CLIST Statements |
IKJEESXA | LISTBC Failure |
IKJEESXB | LISTBC Termination |
IKJEESX0 | SEND Initialization |
IKJEESX1 | SEND Pre-display |
IKJEESX2 | SEND Pre-save |
IKJEESX3 | SEND Failure |
IKJEESX4 | SEND Termination |
IKJEESX5 | LISTBC Initialization |
IKJEESX6 | LISTBC Pre-display |
IKJEESX7 | LISTBC Pre-list |
IKJEESX8 | LISTBC Pre-read |
IKJEESX9 | LISTBC Pre-allocate |
IKJEFD21 | FREE Initialization |
IKJEFD22 | FREE Termination |
IKJEFD47 | ALLOCATE Command Initialization |
IKJEFD49 | ALLOCATE Command Termination |
IKJEFF10 | SUBMIT Command |
IKJEFF53 | OUTPUT, STATUS and CANCEL Commands |
IKJEFLD1 | Logon Authorized Pre-prompt |
IKJEFLD2 | LOGOFF |
IKJEFLD3 | LOGON post-prompt |
IKJEFLN1 | Logon Pre-display |
IKJEFLN2 | Logon Post-display |
IKJEFXG1 | Tailor PUTGET and GETLINE processing |
IKJEFY11 | OUTDES Initialization |
IKJEFY12 | OUTDES Termination |
IKJEFY60 | PRINTDS Initialization |
IKJEFY64 | PRINTDS Termination |
IKJEGASI | TESTAUTH Subcommand Initialization |
IKJEGAST | TESTAUTH Subcommand Termination |
IKJEGAUI | TESTAUTH Initialization |
IKJEGAUT | TESTAUTH Termination |
IKJEGCIE | TEST Subcommand Initialization |
IKJEGCTE | TEST Subcommand Termination |
IKJEGMIE | TEST Initialization |
IKJEGMTE | TEST Termination |
IKJPRMX1 | PARMLIB Initialization |
IKJPRMX2 | PARMLIB Termination |
INMCZ21R | TRANSMIT/RECEIVE NAMES Data Set Pre-allocation |
INMRZ01R | RECEIVE Initialization |
INMRZ02R | RECEIVE Termination |
INMRZ04R | RECEIVE Notification |
INMRZ05R | RECEIVE Acknowledgment Notification |
INMRZ06R | RECEIVE Pre-acknowledgment Notification |
INMRZ11R | RECEIVE Data Set Pre-processing |
INMRZ12R | RECEIVE Data Set Post-processing |
INMRZ13R | RECEIVE Data Set Encryption |
INMRZ15R | RECEIVE Post-prompt |
INMRZ21R | RECEIVE Log Data Set Pre-allocation |
INMXZ01R | TRANSMIT Startup |
INMXZ02R | TRANSMIT Termination |
INMXZ03R | TRANSMIT Encryption |
INMXZ21R | TRANSMIT Log Data Set Pre-allocation |
IRXINITX | REXX Pre-environment Initialization |
IRXITMV | REXX Post-environment Initialization |
IRXITTS | REXX Post-environment Initialization |
IRXTERMX | REXX Environment Termination |
Masks are created by using qualifiers within a volume serial number, Jobname, Program name, TSO User ID, or Terminal ID.
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
Example | Explanation |
VOL0%% | Matches any serial number that begins with VOL0 and any two numeric characters: VOL010 |
&%%%%% | Matches any serial number that begins with any alpha character and five numbers. |
Example | Explanation |
SPJTH- | Matches any Jobname that begins with SPJTH |
-SP- | Matches any Jobname that contains the characters SP in any position |
Example | Explanation |
TSOGS%%% | Matches any Terminal ID that begins with TSOGS and three numbers |
Example | Explanation |
DFHSIP | Matches the program name DFHSIP (CICS). |
Dataset Name Groups are used to establish a list of dataset name masks and/or dataset names. This group name is then used in various OS/EM functions instead of specifying the same dataset names in every function.
Build groups as needed. A dataset name or mask may appear in more than one group since each OS/EM function will use Dataset Name Groups in a different way.
Create, change and delete groups by using this dialog. The panels presented allow maintenance of the list of Dataset Name Groups or masks that constitute a group, and add descriptions to groups for documentation purposes.
Refer to the OS/EM User's Guide for detailed information (see Dataset Name Groups).
Dataset name masks are created by using qualifiers within a dataset name. Valid qualifiers are:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The minus sign is used to unconditionally match a single node of the dataset name. Multiples are allowed. |
+ | The plus sign is used to unconditionally match all characters/nodes of the dataset name beyond where it is entered in the specification. A single plus sign may be specified. |
Example | Explanation |
AAA | Specifies the single-level dataset AAA |
AA?AA | Specifies a single-level dataset name of five characters. The first and last two characters are AA. The third character can be anything: AA5AA,AABAA, etc. |
AA+ | Specifies any dataset name beginning with the two characters AA: AA55.TEST |
AA- | Specifies a single-level dataset name beginning with the characters AA: AA5PROD |
AA.+ | Specifies a two or more level dataset name. The first node is AA: AA.PROD.COMP |
AA.- | Specifies a two level dataset name. The first node is AA: AA.CICS |
-.AA | Specifies a two level dataset name. The last node is AA: PROD.AA |
SYS1.-.HRP1000 | Specifies a three-level dataset name. The first node is SYS1 |
-.-.- | Specifies any three-level dataset name. This type of specification will match every three-level dataset name within your installation. |
GSAX.-.PRM | Specifies a three-level dataset name. The first node is GSAX |
SYS?.- | Specifies a two-level dataset name. The first node starts with SYS and any other character. The second node can be anything: SYS1.LINKLIB |
SYS&.- | Specifies a two-level dataset name. The first node starts with SYS and any other alphabetic character. The second node can be anything: SYSX.LINKLIB |
SYS%.- | Specifies a two-level dataset name. The first node starts with SYS and any other numeric character. The second node can be anything: SYS5.LINKLIB |
SYSX.-.EZT??? | Specifies a three-level dataset name. The first node is SYSX. The second node can be anything. The third node begins with EZT and any three characters: SYSX.CICS.EZT030 |
??SYSUT?.+ | Specifies a two or more level dataset name. The first node begins with any two characters, followed by SYSUT and any other single character. |
AA.+.BB | Specifies a three or more level dataset name. The first node is AA and the last node is BB. |
AA+AA | Specifies a single-level dataset name. The first two characters are AA and the last two characters are AA. The middle characters (up to four) can be anything. There has to be at least one middle character; AAAA will not match. |
SYSX.PROCLIB | A fully qualified dataset name. |
Volume name groups are used to establish a list of DASD volumes. This group name is then used in various OS/EM functions instead of specifying the same volume serial numbers in every function.
Build groups as needed. A volume serial number may appear in more than one group since each OS/EM function will use volume serial numbers in a different way.
Create, change and delete groups by using this dialog. The panels presented allow specification of a subset of all groups to operate on, add descriptions to groups for documentation purposes, and maintain the list of volume serial numbers or masks that constitute a group.
Refer to the OS/EM User's Guide for detailed information (see Volume Groups).
Volume/Jobname masks are created by using qualifiers within a volume serial number or Jobname. Valid qualifiers are:
Qualifier | Description |
? | The question mark is used to unconditionally match any single character (except periods) where the question mark occurs in the specification. Multiples are allowed. |
& | The ampersand is used to unconditionally match any single alpha character where the ampersand occurs in the specification. Multiples are allowed. |
% | The percent sign is used to unconditionally match any single numeric character where the percent sign occurs in the specification. Multiples are allowed. |
- | The dash is used to unconditionally match any preceding or succeeding character(s). Multiples are allowed. |
Example | Explanation |
VOL0%% | Matches any serial number that begins with VOL0 and any two numeric characters: VOL010 |
&%%%%% | Matches any serial number that begins with any alpha character and five numbers. |
Example | Explanation |
SPJTH- | Matches any Jobname that begins with SPJTH |
-SP- | Matches any Jobname that contains the characters SP in any position |
The SMF records written as an audit trail have the following format:
SMFRCD255 DSECT ,
SMF255LEN DS    BL2'0'             RECORD LENGTH
SMF255SEG DS    BL2'0'             SEGMENT DESCRIPTOR
SMF255FLG DC    BL1'0'             HEADER FLAG BYTE
SMF255RTY DC    BL1'0'             RECORD TYPE 0
SMF255TME DC    BL4'0'             TOD, USING FORMAT FROM TIME MACRO W/BIN. INTVL
SMF255DTE DC    PL4'0000'          DATE IN PACKED DECIMAL FORM: CCYYDDDF
SMF255SID DC    CL4' '             SYSTEM IDENTIFICATION
SMF255JOB DC    CL8' '             JOB NAME
SMF255SUB DC    X'0'               SUBTYPE
SMF255#SP DC    FL1'0'             LEADING SPACES
SMF255CMD DC    CL256' '           COMMAND TEXT
          ORG   SMF255CMD
SMF255WTO DC    CL256' '           WTO TEXT
          ORG   SMF255CMD
SMF255SSN DC    CL4' '             SUBSYSTEM NAME
SMF255ST3 DS    0CL45              RESOURCE ENTRIES - 1 TO MAX OF 127
SMF255RLN DC    X'0'               RESOURCE LEN - 0 INDICATES END
SMF255RES DC    CL44' '            RESOURCE NAME - VARIABLE LEN
*
Refer to "SMF Recording" for instructions on activating this option.
See member SMFPRINT in the OS/EM SAMPLIB for a job to print these SMF records.
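If you want to extract the audit records into their own dataset before printing, a standard IFASMFDP dump step similar to the sketch below can be used. The input SMF dataset name, the output dataset name, and the record type shown are placeholders; substitute the record number your installation assigned to OS/EM (see "SMF Recording").
//DUMPSMF  JOB (ACCT),'DUMP OSEM SMF',CLASS=A,MSGCLASS=X
//*------------------------------------------------------------------*
//* Sketch only: DUMPIN points at a placeholder SMF dataset and       *
//* TYPE(255) is a placeholder for the record number assigned to OS/EM*
//*------------------------------------------------------------------*
//SMFDUMP  EXEC PGM=IFASMFDP
//DUMPIN   DD   DISP=SHR,DSN=SYS1.MAN1
//DUMPOUT  DD   DSN=OSEM.AUDIT.RECORDS,DISP=(NEW,CATLG),
//              UNIT=SYSDA,SPACE=(CYL,(5,5)),
//              DCB=(RECFM=VBS,LRECL=32760,BLKSIZE=27998)
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  INDD(DUMPIN,OPTIONS(DUMP))
  OUTDD(DUMPOUT,TYPE(255))
/*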
The following are JES2 commands which control the Job Routing function. Each command is protected by RACF, and the resource and command authority needed are listed at the end of the appendix.
Display Backlog
This command displays information about jobs in the different JES queues.
Display Conflicts
This command displays jobs that are currently unable to run on any system in the MAS complex because they have a routing to a resource that is not defined on any member of the MAS. Note that the criterion here is whether the resource is defined to a member of the MAS, not whether that member is currently active.
Command Syntax:
$DC{,LIST | ,ALL}
With no operands, the response is a single line giving the number of jobs unable to run and the number of resources that those jobs require. With the "LIST" parameter, the response is multiple lines giving the number of jobs that need each undefined resource, and the name of those resources. With the "ALL" operand, the response lists each job that is unable to run as well as listing each resource that that job requires.
Display Printers/Punches
The $DP command gives a simple one-line display for each printer or punch defined to JES, showing its status.
Command Syntax:
$DP{,PUN}
Without the "PUN" operand, each printer is listed. With the "PUN" operand, each punch is listed.
Display Resources
The $DRESOURCE command lists resources defined to the members of the MAS. These are the resources that are referenced by the /*ROUTE JECL statements and by the JOBNAME and PROGRAMNAME routing functions.
Command Syntax:
$DRESOURCE{,ALL | ,SID}
Note: The command may be abbreviated to $DRE, but no shorter as it would then be interpreted as the standard JES2 $DR command.
With no operands, the command produces a list of those resources attached to the MAS member where it was issued. With the "ALL" operand, it lists resources for all the members of the MAS. With the "SID" operand (the system ID of a specific MAS member), it lists the resources attached to that specific member.
List Forms
The $LF command lists the work that exists in the hardcopy queue, grouped by form, prmode, dest, writer, burst and select. For each unique combination of the above, the number of sysout datasets in each class is listed. The scope of the command may be changed by entering additional selection criteria on the command.
Command Syntax:
$LF{,F=xxxxxxxx}          Select by FORM
   {,W=xxxxxxxx}          Select by WRITER
   {,PRMODE=xxxxxxxx}     Select by PRMODE
   {,C=xxxx}              Select by FCB
   {,T=xxxx}              Select by UCS
   {,J=Jnnnnn{-nnnnn} | ,J=Snnnnn{-nnnnn} | ,J=Tnnnnn{-nnnnn}}   Select by JOB/STC/TSU numbers
   {,R=xxxxxxxx{-xxxxxxxx}}   Select by Destination. Operand is NODE or NODE1-NODE2, RMT or RMT1-RMT2, NODE.RMT or NODE1.RMT1-NODE2.RMT2, NODE.USERID or USERID
   {,Q=x...}              Select by SYSOUT classes
   {,LIM=nnn{-nnn}}       Select by LINE number range
   {,PLIM=nnn{-nnn}}      Select by PAGE number range
   {,D=A | ,D=H}          Select ALL or HELD
   {,B=Y | ,B=N}          Select by BURST=YES or BURST=NO
   {,S=Y | ,S=N}          Select by SELECTABLE or NOT SELECTABLE
   {,JOBS}                Request that the DISPLAY be broken down by individual JOBS
List JOBQUEUE by NAME
This command produces a detailed list of jobs awaiting execution by jobname, showing resources, DJC holds and such.
Command Syntax:
$LN{,ALL{,IND} |           Select by SYSAFF ALL
    ,ANY |                 Select by SYSAFF ANY
    ,SID |                 Select by SYSAFF to a SID
    ,IND}                  Select by independent mode
   {,V=xxxxxx}             Select by SPOOL VOLSER
   {,AFTER=xxxxxxxx{(nnnnn)}}    Select by AFTER specification JOBNAME and optional JOB NUMBER
   {,BEFORE=xxxxxxxx{(nnnnn)}}   Select by BEFORE specification JOBNAME and optional JOB NUMBER
   {,WITH=xxxxxxxx{(nnnnn)}}     Select by WITH specification JOBNAME and optional JOB NUMBER
   {,PRED=xxxxxxxx{(nnnnn)}}     Select by PRED specification JOBNAME and optional JOB NUMBER
   {,EXCLUDE=xxxxxxxx{(nnnnn)}}  Select by EXCLUDE specification JOBNAME and optional JOB NUMBER
   {,CNTL=xxxxxx}          Select by CNTL resource. Resource is 1-44 characters: alpha, numeric, national, underscore and period. Period cannot be first or last character
   {,RES=xxxxxx}           Select by ROUTING resource. Resource is 1-44 characters: alpha, numeric, national, underscore and period. Period cannot be first or last character
   {,ROUTE=nnn{-nnn}}      Select by execution routing NODE or NODE.RMT. No USERIDS.
   {,Q=XEQ |               Select XEQ queue
    ,Q=CNV |               Select CONVERT queue
    ,Q=STC |               Select STCS
    ,Q=TSU |               Select TSUS
    ,Q=HOLD |              Select HELD jobs
    ,Q=READY |             Select READY jobs
    ,Q=ACTIVE |            Select ACTIVE JOBS/STCS/TSUS
    ,Q=DJCOWN |            Select owners of DJC resources
    ,Q=DJCHOLD}            Select JOBS held for DJC
   {,C=x{-x}}              Select by JOBCLASS range; classes may be A-Z, 0-9, * (CNV), $ (STC), or @ (TSU).
List JOBQUEUE
This command produces a summary list of jobs awaiting execution. The Command Syntax is the same as the $LN command.
Resource ADD command ($QA)
Resource DELETE command ($QD)
These two commands allow you to ADD and DELETE resources from a MAS member.
Command Syntax:
$QA | $QD                $QA = ADD, $QD = DELETE
,xxxx                    Resource name (1-44 bytes)
{,SID}                   SID where the add/delete is to take place. The default is the system where the command is entered.
{,FORCE}                 (DELETE only) Delete the resource even if the resource is currently in use by an active job on the targeted system.
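For example, assuming a site-defined resource name of SCANNER (any 1 to 44 byte name of your choosing), the following commands would attach the resource to the current member and later remove it:
$QA,SCANNER              Attach the SCANNER resource to the member where the command is entered
$QD,SCANNER,FORCE        Delete it even though a job using it may still be active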
These commands add and delete DEPENDENT JOB CONTROL (DJC) conditions, routing resources and CNTL specifications for jobs already in the job queue. Command Syntax:
$Q'xxxxxxxx'                  Specify JOBNAME
$QJnnnnn{-nnnnn}              Specify JOB number(s)
{,HSMRETRY}                   Retry failed HRECALLs
{,RELEASE(HSM)}               Do not hold job for HRECALLs
{,HOLD(HSM)}                  Hold job for HRECALLs
{,RELEASE(USERLIMIT)}         Do not hold job for user limits
{,HOLD(USERLIMIT)}            Hold job for user limits
{,RELEASE(PGMLIMIT)}          Do not hold job for program limits
{,HOLD(PGMLIMIT)}             Hold job for program limits
{,JOBROUTE=xxxxxx,NODE=nnnn}  Route specified job(s) to node if the JOBROUTE resource is assigned to the job
{,RELEASE(DJC)}               Do not hold job for dependent job controls
{,NOAFTER}                    Remove AFTER conditions
{,NOPRED}                     Remove PRED conditions
{,NOBEFORE}                   Remove BEFORE conditions
{,NOWITH}                     Remove WITH conditions
{,NOEXCLUDE}                  Remove EXCLUDE conditions
{,NOCNTL}                     Remove all CNTL resources
{,NOROUTE}                    Remove all ROUTING resources
{,ADDRES=xxxx}                Add ROUTING resource. OBSOLETE; use ROUTE instead.
{,DELRES=xxxx}                Remove ROUTING resource. OBSOLETE; use ROUTE instead.
{,ROUTE=({+|-}xxxx,...)}      Add or remove ROUTING resource. + (ADD) and - (DEL) are optional and default to ADD. 1 to 8 resources may be specified. Enclosing parentheses are optional if only 1 resource is specified. NOTE: A job can never have more than 8 ROUTING resources.
{,CNTL=({+|-}xxxx{-SHR | -EXC},...)}   Add or remove CNTL resource. + (ADD) and - (DEL) are optional and default to ADD. -SHR and -EXC are optional and default to -SHR; they are not meaningful for delete. 1 to 8 CNTLs may be specified. Enclosing parentheses are optional if only 1 resource is specified. NOTE: A job can never have more than 8 CNTL resources.
{,AFTER=({+|-}xxxxx{(nnnn|WAIT|MULT)},...)}    Add or remove AFTER resource. + (ADD) and - (DEL) are optional and default to ADD. JOBNUM, WAIT and MULT are optional; WAIT and MULT are not valid for DELETE. Up to 10 AFTER statements may be specified with the constraint that a job may never have more than 10 DJC entries of all types combined. The enclosing parentheses are optional if only one job is specified.
{,BEFORE=({+|-}xxxx{(nnnn|OK|MULT)},...)}      Add or remove BEFORE resource. + (ADD) and - (DEL) are optional and default to ADD. JOBNUM and MULT are optional; MULT and OK are not valid for DELETE. Up to 10 BEFORE statements may be specified with the constraint that a job may never have more than 10 DJC entries of all types combined. The enclosing parentheses are optional if only one job is specified.
{,EXCLUDE=({+|-}xxxx{(nnnn|OK|MULT)},...)}     Add or remove EXCLUDE resource. + (ADD) and - (DEL) are optional and default to ADD. JOBNUM and MULT are optional; MULT and OK are not valid for DELETE. Up to 10 EXCLUDE statements may be specified with the constraint that a job may never have more than 10 DJC entries of all types combined. The enclosing parentheses are optional if only one job is specified.
{,PRED=({+|-}xxxx{(nnnn|WAIT|MULT)},...)}      Add or remove PRED resource. + (ADD) and - (DEL) are optional and default to ADD. JOBNUM, WAIT and MULT are optional; WAIT and MULT are not valid for DELETE. Up to 10 PRED statements may be specified with the constraint that a job may never have more than 10 DJC entries of all types combined. The enclosing parentheses are optional if only one job is specified.
{,WITH=({+|-}xxxx{(nnnn|WAIT|MULT)},...)}      Add or remove WITH resource. + (ADD) and - (DEL) are optional and default to ADD. JOBNUM, WAIT and MULT are optional; WAIT and MULT are not valid for DELETE. Up to 10 WITH statements may be specified with the constraint that a job may never have more than 10 DJC entries of all types combined. The enclosing parentheses are optional if only one job is specified.
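To illustrate the syntax, the following hypothetical commands modify jobs already on the queue; the jobnames, job number and resource names are placeholders:
$Q'PAYROLL',ROUTE=(+IMS)            Add the IMS routing resource to job PAYROLL
$QJ12345,NOAFTER,NOWITH             Remove all AFTER and WITH conditions from job 12345
$Q'NIGHTBKP',CNTL=(+MASTER-EXC)     Add an exclusive CNTL resource named MASTER to job NIGHTBKP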
Command | Resource Name | Authority |
$DB | jesx.DISPLAY.OSEM | READ |
$DC | jesx.DISPLAY.OSEM | READ |
$DP | jesx.DISPLAY.OSEM | READ |
$DRESOURCE | jesx.DISPLAY.OSEM | READ |
$LF | jesx.DISPLAY.OSEM | READ |
$LN | jesx.DISPLAY.OSEM | READ |
$LQ | jesx.DISPLAY.OSEM | READ |
$QA | jesx.ADD.OSEM | CONTROL |
$QD | jesx.DELETE.OSEM | CONTROL |
$Q' | jesx.MODIFY.OSEM | UPDATE |
$QJ | jesx.MODIFY.OSEM | UPDATE |
Replace jesx with the name of your JES2 subsystem.
Note: All listed resources are defined to the OPERCMDS class.
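As a sketch, protecting the display commands for a JES2 subsystem named JES2 might look like the following; the group name OPERGRP is a placeholder:
RDEFINE OPERCMDS JES2.DISPLAY.OSEM UACC(NONE)                        Define the display resource
PERMIT JES2.DISPLAY.OSEM CLASS(OPERCMDS) ID(OPERGRP) ACCESS(READ)    Allow the placeholder group OPERGRP to issue the display commands
SETROPTS RACLIST(OPERCMDS) REFRESH                                   Refresh the OPERCMDS class if it is RACLISTed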
These cards provide a facility by which jobs can be routed to specific CPUs depending on the availability of a particular resource name assigned to a CPU. Resource names are user defined and specified with the $QA command. Once defined, these resource names attached to a CPU remain in effect until they are detached via the $QD command.
Resources specified can define physical I/O units which may be attached to only one CPU at a time, or possibly a software name which may only pertain to one particular CPU.
The format of the resource routing JCL statement is:
/*ROUTE XEQ resourcename
The card must follow the JOB statement.
Note: This card is not required if the optional routing rules defined with OS/EM in JES2 EXIT5 are used.
Following are some examples of using the ROUTE XEQ control card:
System #   Resources Attached
1          DUALD, IMS
2          3525
3          IMS,TSO,NOINQ

//BSPROUT JOB (,,,7552,429),'TEST RESOURCE',CLASS=A
/*ROUTE XEQ IMS
//S1 EXEC PGM=IEFBR14
This job will be scheduled to either system #1 or system #3 because of the IMS resource requested.
The $DC command is used to display those jobs which have used the /*ROUTE XEQ resource control card and no CPUs have that resource name attached. For example, using the above list, if a job were submitted with a '/*ROUTE XEQ SCANNER' control card, the job would never execute no matter how many initiators were available until a $QA,SCANNER command was issued on a system in the complex. This would be detectable by issuing a $DC command which would display those jobs waiting for resource names.
Other /*ROUTE control cards formats are:
/*ROUTE XEQ HERE
The resource name 'HERE' causes the job to be scheduled for execution on the CPU which read the JCL (controlling the card reader).
Note: Do not have an initiator add the SYSAFF=* parameter to a job as this overrides OS/EM Job Routing.
The CNTL and THREAD cards are processed identically.
This feature provides the ability to single-thread jobs through execution which need a device of which there is only one and must be used serially. Some examples would be the 3525, DUAL density drive and the OCR scanner.
By using the /*CNTL card, you can define a resource name that you need exclusive control of. If any other jobs come into the system with the same control name, they will not execute simultaneously on the same or other CPUs in the complex. This provides better control over the resources that must be used serially. This does not affect jobs running without the /*CNTL card or running in a system without shared spool.
The format for resource control is:
/*CNTL resourcename,EXC
/*CNTL resourcename,SHR     (the default is SHR)
Users may also protect datasets from being updated by different jobs on the same or different CPUs by using the /*CNTL card. Each /*CNTL card may have a 1 to 44 character control name and an EXC or SHR specification.
Jobs with the same control name will not execute simultaneously if one of the jobs has an EXC control specification. Jobs with SHR may execute simultaneously on any CPU.
Following are /*CNTL and /*THREAD usage examples:
//JOB1 JOB
/*CNTL MASTER,EXC
//JOB2 JOB
/*THREAD MASTER,SHR
In the above example, whichever job began execution first would lock out the other job from beginning until it has completed.
//JOB1 JOB
/*CNTL MASTER,SHR
/*CNTL PINOT_NOIR,SHR
/*THREAD SYS1.LINKLIB,SHR
/*CNTL DUALDENS,SHR
/*CNTL CABERNET,SHR
//JOB2 JOB
/*CNTL MASTER
/*CNTL DUALDENS
//JOB3 JOB
/*THREAD MASTER,SHR
/*THREAD PINOT_NOIR
In the above example, all 3 jobs could run simultaneously as they all specify the SHR option. Up to 8 CNTL cards may be specified at one time.
In the following syntax diagrams, the first optional parameter indicates the action to be taken if the referenced job is not in the execute queue, (for the /*BEFORE card, the job must also not yet be executing). If a specific job is referenced, i.e. the job number is supplied, only the IGNORE and FAIL options are acceptable. The IGNORE option indicates that the card is to be treated as a comment. The FAIL option indicates that the job is to be failed by passing a return code of 12 back to JES2. The OK option indicates that the statement will apply to all jobs with the specified jobname. The WAIT option indicates that the job is to wait until a job with the specified jobname is read into the system.
The second optional parameter indicates what action to take if there are multiple jobs in the system with the specified jobname. This situation can never arise if a job number is specified as there can only be one job with a given job number. The options are processed the same as the first option.
Note: There may be 10 Dependent Job Control statements per job.
The purpose of these options is to override, for an individual statement, the default options set by the OS$CNTL command.
/*AFTER   XXXXXXXX{(NNNNN)}{,IGNORE|,FAIL|,WAIT}{,IGNORE|,FAIL|,OK}
/*BEFORE  XXXXXXXX{(NNNNN)}{,IGNORE|,FAIL|,OK}{,IGNORE|,FAIL|,OK}
/*EXCLUDE XXXXXXXX{(NNNNN)}{,IGNORE|,FAIL|,OK}{,IGNORE|,FAIL|,OK}
/*PRED    XXXXXXXX{(NNNNN)}{,IGNORE|,FAIL|,WAIT}{,IGNORE|,FAIL|,OK}
/*WITH    XXXXXXXX{(NNNNN)}{,IGNORE|,FAIL|,WAIT}{,IGNORE|,FAIL|,OK}
These cards provide a means to schedule jobs before, after or with another. The control card follows the jobcard or any other JES2 control cards (ROUTE - CNTL).
Following is an example of the use of these control cards:
/*PRIORITY 13
//BSPTEST JOB (,,7552,429),RUSBASAN,MSGLEVEL=(1,1),CLASS=A
/*ROUTE XEQ MSS
/*AFTER BSPFIRST,WAIT
//S1 EXEC PGM=IEFBR14
/*
/*PRIORITY 2
//BSPFIRST JOB (,,7552,429),RUSBASAN,MSGLEVEL=(1,1),CLASS=A
/*ROUTE XEQ CPU2
/*CNTL DUAL,EXC
//SA EXEC PGM=IEFBR14
/*
In the above example, job BSPTEST would not execute until job BSPFIRST had finished execution, even though BSPTEST has a higher priority.
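As a further sketch (the job name, jobcard, and referenced jobname PAYROLL1 are hypothetical), the optional parameters may be coded directly on a Dependent Job Control statement to override the OS$CNTL defaults. Here the first parameter, IGNORE, causes the /*AFTER card to be treated as a comment if no job named PAYROLL1 is in the execute queue, and the second parameter, FAIL, causes the job to be failed with return code 12 if multiple jobs named PAYROLL1 are found:

//RPTJOB JOB (,,1234),REPORTS,MSGLEVEL=(1,1),CLASS=A
/*AFTER PAYROLL1,IGNORE,FAIL
//S1 EXEC PGM=IEFBR14
/*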
The following messages may be issued by the OS/EM Job Routing option:
$HASP606 INSUFFICIENT OPERANDS
$HASP608 OS/EM STATUS UNKNOWN
$HASP610 JOB(S) NOT FOUND
$HASP619 NO OUTPUT QUEUED
$HASP624 'CMD' 'JOBNAME' MULTIPLE JOBS FOUND
$HASP646 nn PERCENT SPOOL UTILIZATION
$HASP668 NO DEVICE(S) FOUND
$HASP687 UNABLE TO OBTAIN SECURITY PRODUCT MESSAGES
$HASP690 COMMAND REJECTED - AUTHORIZATION FAILURE
$HASP900 TOO MANY|FEW OPERANDS
$HASP901 INVALID OPERAND xxxxx
$HASP905 RESOURCE IN USE. YOU MUST USE 'FORCE' TO DELETE
$HASP907 JOBNAME xxxx IS NOT SUITABLE FOR DJC
$HASP908 NO MATCH FOUND FOR SPECIFIED RESOURCE
$HASP921 LIST FORMS (multiple texts)
$HASP928 DEVICE UNIT STATUS F=FORM Q=X
$HASP931 * -- JOBROUTE FAILED - ALREADY 8 ROUTES IN USE
$HASP935 jjjj(nnn) JOBNAME SPECIFIED ON /*BEFORE STATEMENT IS INVALID. CORRECT - RESUBMIT.
$HASP936 jjjj(nnn) JOBNAME SPECIFIED ON /*AFTER STATEMENT IS INVALID. CORRECT - RESUBMIT.
$HASP937 jjjj(nnn) PARM SPECIFIED ON /*CNTL STATEMENT IS INVALID. CORRECT - RESUBMIT.
$HASP938 jjjj(nnn) ONLY n xxxxx STATEMENTS ALLOWED. CORRECT - RESUBMIT.
$HASP939 jjjj(nnn) JOBNAME SPECIFIED ON /*WITH STATEMENT IS INVALID. CORRECT - RESUBMIT.
$HASP940 jjjj(nnn) * -- AFTER JOBNAME = xxxx --
$HASP941 jjjj(nnn) * -- WITH JOBNAME = xxxx --
$HASP942 jjjj(nnn) * -- RESOURCE ROUTING = xxxxx --
$HASP943 jjjj(nnn) * -- CONTROL INFO = xxxxx --
$HASP944 jjjj(nnn) * -- BEFORE JOBNAME = xxxx --
$HASP945 LIST JOBQUEUE (multiple texts)
$HASP946 SID - NO RESOURCES ATTACHED
$HASP947 DISPLAY RESOURCE (multiple texts)
$HASP948 DISPLAY CONFLICT (multiple texts)
$HASP949 DISPLAY BACKLOG (multiple texts)
$HASP950 jobname(JOBnnnn) * -- JOBROUTE 999 xxxxxx = y --
$HASP951 OS/EM VER n.n - JOBROUTING ACTIVE
The following operator commands are available to control TAPESHR functions.
In the following command formats, dev_spec refers to the syntax allowed for ordinary MVS vary commands, for example, 580, 580-581, or (580,582-588).
To place a device under TAPESHR control, that is, to have TAPESHR assume responsibility for varying the device online and offline as needed to satisfy the requirements of the various systems.
To cause TAPESHR to relinquish control of a device.
To indicate to TAPESHR that a device is not to be used (that is, not to be brought online) on this system only. The device is still eligible for use on other systems.
To indicate to TAPESHR that a device which was previously varied offline locally may once again be used on this system. This command must be issued on the same system as the VARY OFFLINE,LOCAL command.
To indicate to TAPESHR that a device is not to be used by any system in the complex. This command may be issued on any system.
To indicate to TAPESHR that a device that was previously varied offline globally may now be used again. This command may be issued on any system.
Displays all devices defined to TAPESHR and their current status (see the Display Units command below for a list of status codes).
A modify command is available to shut down OS$TPSHR.
F OS$TPSHR,STOP {option}
Where {option} is:
This causes TAPESHR to wait until all owned tape devices have gone offline, so that they may safely be used by other systems where TAPESHR is still active.
Devices that do not go offline within 15 seconds will be removed from TAPESHR control, and it will become the operator's responsibility to coordinate the use of those devices on the various systems. Note that if any uncontrolled device is eligible for use when a job goes into allocation recovery, TAPESHR will not participate in device selection other than to remove all TAPESHR devices from the candidate list, thus forcing the job to use an uncontrolled device.
Devices that do not go offline within 15 seconds will be marked as globally offline to protect them from being allocated by another TAPESHR system. After the devices have gone offline on the system where TAPESHR is being terminated, the operator may issue a command to vary them back online globally, making them available to the other systems where TAPESHR is still active.
You may optionally use the STOP command (P).
P OS$TPSHR
The STOP command uses the GLOBALOFFLINE option.
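As an illustrative sketch only (the option is coded following the F OS$TPSHR,STOP {option} format shown above; verify the exact spelling against your installation), a shutdown that uses the globally-offline behavior might be entered as:

F OS$TPSHR,STOP GLOBALOFFLINE

which, per the note above, is equivalent in effect to the short form:

P OS$TPSHR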
The Display Units command has been enhanced to show the TAPESHR status of those devices controlled by TAPESHR. The additional data includes the system currently owning the device (or the word 'NONE'). One or more characters may also be appended to provide further information. These include:
Indicates the device is allocated.
Indicates local offline.
The -LO status can be removed by issuing a vary online,local command.
Indicates the device is pending local offline.
Indicates global offline.
The -GO status can be removed by issuing a vary online,global command.
Indicates pending global offline.
Indicates error offline: an attempt was made to vary the device online, but the system was unable to bring it online for some (usually hardware) reason. The status can be cleared by re-issuing the vary online command once the problem has been resolved.
Indicates restricted device.
Indicates pending status.
Indicates device being deleted.
The success of this manual depends solely on its usefulness to you. To ensure such usefulness, we solicit your comments concerning the clarity, accuracy, completeness, and organization of this manual. Please enter your comments below and mail this form to the address on the front page of this manual. If you wish a reply, give your name, company, and mailing address. We would also appreciate an indication of your occupation and how you use this manual.
Please rate this manual on the following points: