Version 4.2
GI10-5141-00
Program Number: 5697-D17
Program Number: 5697-D18
Program Number: 5697-D19
Program Number: 5697-D20
Program Number: 5697-D21
Program Number: 5697-D22
12th August 1997
First Edition (March 1999)
This edition applies to:
and to all subsequent versions, releases, and modifications until otherwise indicated in new editions. Consult the latest edition of the applicable system bibliography for current information on these products.
This softcopy version is based on the printed edition of this book. Some formatting amendments have been made to make this information more suitable for softcopy.
Order publications through your IBM or Transarc representative or through the IBM branch office serving your locality.
At the end of this publication is a topic titled "Readers' Comments". If you want to make comments, but the methods described are not available to you, please address them to:
Transarc Corporation, The Gulf Tower,
When you send information to IBM or Transarc, you grant them a nonexclusive right to use or distribute the information in any way they believe appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 1999; Transarc Corporation, 1999. All rights reserved.
Note to U.S. Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule contract with IBM Corp.
Installing the Certified Patch for TXSeries CICS
Installing the Certified Patch for TXSeries Encina
A Transarc(R) Certified Patch Level (CPL) is a collection of patches that are packaged and tested together for a given product. Certified Patches are created and tested individually for each Transarc product. For example, to create a Certified Patch for Encina, previously released patches and recent fixes for Encina are packaged together and then installed and tested on multiple platforms.
Typically, a product patch contains updated versions of selected binaries, libraries, message catalogs, and other files, which correct problems that were found in the product. New patches are created for a product as frequently as necessary and made available as quickly as possible, particularly when a new patch corrects a severe problem. In most cases, product patches are cumulative; each patch includes all previous corrections as well as any new ones.
Periodically, a version of the product patch is selected for certification. To become certified, the patch is tested extensively and documented thoroughly. If problems are found during certification testing, they are corrected, and those changes are included in the Certified Patch. Upgrading a product by installing a Certified Patch produces a version of the product that is as stable and usable as possible.
CPL4 includes Certified Patches for the following products:
This document is written for system administrators and programmers responsible for configuration, administration, and customization of systems that use the Transarc software listed in the Introduction. This document assumes that readers are familiar with system administration and programming in general and with the platforms they are using.
This document also provides information about updates to the product that can be useful to system administrators and programmers.
This document has the following organization:
This chapter describes the procedure for downloading the Certified Patch for IBM(R) TXSeries(TM) 4.2 for Solaris 2.5.1 and the prerequisites that you must meet before downloading the patch.
Note: If you wish to be notified via e-mail about future patch and Certified Patch Level releases, subscribe to the Patch Notification mailing list at the following URL:
This section describes the requirements that you must meet to download the Certified Patch for TXSeries 4.2 for Solaris 2.5.1.
You must have a valid Transarc Customer ID and password to download the Certified Patch from Transarc's Web site. Your Transarc Customer ID indicates that you have a current Transarc support contract for this product.
Contact your Transarc Support Representative (412-281-5852 or support@transarc.com) to have a Transarc Customer ID and password assigned to you.
The Certified Patch is delivered as a compressed file. Before downloading this file to your machine, you must ensure that enough space is available to accommodate it. See the Transarc Web site for information about the size of the Certified Patch.
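As a quick sketch, free space in the intended download directory can be checked with the df command before fetching the file. The directory path and required size below are illustrative only; substitute the patch size published on the Transarc Web site:

```shell
# Illustrative values only; substitute the patch size published on the
# Transarc Web site and your own download directory.
PATCH_DIR=/tmp/patch_directory
REQUIRED_KB=50000
mkdir -p "$PATCH_DIR"
# df -k reports available kilobytes in the fourth column.
avail=$(df -k "$PATCH_DIR" | awk 'NR==2 {print $4}')
if [ "$avail" -lt "$REQUIRED_KB" ]; then
  echo "insufficient space: ${avail} KB available, ${REQUIRED_KB} KB required"
else
  echo "sufficient space: ${avail} KB available"
fi
```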
There are two methods for obtaining the latest Certified Patch for TXSeries for Solaris:
The following sections describe each of these methods.
To download the Certified Patch from Transarc's Web site, use the following procedure:
# mkdir /tmp/patch_directory
# cd /tmp/patch_directory
Note: The patch file is downloaded as a compressed tar file. When specifying the location in which to save the file, use the .tar.Z filename extension to indicate that the file is a compressed tar file.
# zcat cpl_file | tar -xvf -
You have now successfully downloaded and unpacked the Certified Patch. Follow the installation instructions to install the Certified Patch on your system.
Transarc can ship a Certified Patch on magnetic tape upon request. Magnetic tapes can be created with the following formats:
Contact your Transarc Support Representative to request a magnetic tape.
This chapter describes the procedure for installing the Certified Patch for TXSeries CICS 4.2 for the Solaris 2.5.1 operating environment and the prerequisites that you must meet before beginning the installation.
Table 1 lists the prerequisites for installing CPL4.
Table 1. Prerequisites for Installing CPL4
Component | Requirement |
Operating system | Solaris 2.5.1 |
Distributed Computing Environment (DCE) | Transarc DCE 1.1 |
CICS (servers and clients) | TXSeries CICS 4.2 |
Note: This Certified Patch was tested using Solaris 2.5.1 and Transarc DCE 1.1 at patch level 41.
You must install the Solaris Recommended Patches from SunSoft. These Solaris patches are required to correct problems in the Solaris operating environment. You can determine which patches are already installed on a machine by using the showrev -p command. Note that SunSoft can make additional revisions of these patches available at any future time; always install the latest revision of a patch. This Certified Patch was tested using the following set of Recommended Patches for the Solaris 2.5.1 operating environment:
patch ID 103461-28 | patch ID 103558-14 | patch ID 103566-39 |
patch ID 103582-18 | patch ID 103594-16 | patch ID 103597-04 |
patch ID 103603-09 | patch ID 103612-47 | patch ID 103622-12 |
patch ID 103630-13 | patch ID 103640-24 | patch ID 103663-15 |
patch ID 103680-02 | patch ID 103686-02 | patch ID 103690-09 |
patch ID 103696-04 | patch ID 103699-02 | patch ID 103738-08 |
patch ID 103743-01 | patch ID 103801-07 | patch ID 103817-03 |
patch ID 103866-05 | patch ID 103879-04 | patch ID 103900-01 |
patch ID 103901-11 | patch ID 103934-08 | patch ID 103959-08 |
patch ID 103981-16 | patch ID 104010-01 | patch ID 104166-03 |
patch ID 104212-13 | patch ID 104220-03 | patch ID 104246-08 |
patch ID 104266-01 | patch ID 104283-04 | patch ID 104317-01 |
patch ID 104331-07 | patch ID 104334-01 | patch ID 104338-02 |
patch ID 104433-09 | patch ID 104489-08 | patch ID 104490-05 |
patch ID 104516-03 | patch ID 104533-04 | patch ID 104560-05 |
patch ID 104595-06 | patch ID 104605-08 | patch ID 104613-01 |
patch ID 104628-05 | patch ID 104650-02 | patch ID 104654-05 |
patch ID 104692-01 | patch ID 104708-15 | patch ID 104735-02 |
patch ID 104736-04 | patch ID 104776-02 | patch ID 104795-02 |
patch ID 104841-03 | patch ID 104893-01 | patch ID 104915-09 |
patch ID 104935-01 | patch ID 104956-04 | patch ID 104958-01 |
patch ID 104960-01 | patch ID 104968-02 | patch ID 104976-03 |
patch ID 105004-10 | patch ID 105050-01 | patch ID 105092-01 |
patch ID 105251-01 | patch ID 105299-01 | patch ID 105310-07 |
patch ID 105324-03 | patch ID 105344-01 | patch ID 105352-01 |
patch ID 105784-02 | patch ID 105789-03 | patch ID 105790-11 |
patch ID 106224-01 | patch ID 106382-01 | patch ID 106662-01 |
patch ID 106663-01 |
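The check with showrev -p can be scripted. The following sketch matches a required-patch list against the command's output; the showrev -p output here is simulated for illustration (on a real machine, capture it with installed=`showrev -p`), and the sketch checks only for the exact revisions listed, although later revisions also satisfy the requirement:

```shell
# A small subset of the required patch list above, for illustration.
required="103461-28 103558-14 104708-15"
# Simulated showrev -p output; on a real machine use: installed=`showrev -p`
installed='Patch: 103461-28 Obsoletes: Requires: Incompatibles: Packages: SUNWcsu
Patch: 104708-15 Obsoletes: Requires: Incompatibles: Packages: SUNWcsr'
missing=""
for p in $required; do
  case "$installed" in
    *"Patch: $p"*) echo "patch $p is installed" ;;
    *) missing="$missing $p"
       echo "patch $p is MISSING" ;;
  esac
done
```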
Perform the following steps to install CPL4 for CICS 4.2 on a machine running the Solaris 2.5.1 operating environment:
# /opt/cics/bin/dce_login DCE_admin_principal
# /opt/cics/bin/cicsstop regionName
# /opt/cics/bin/cicstail -r regionName
When the following message appears in the command output, the region has stopped successfully:
*** Shutdown of CICS region "regionName" is complete ***
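The wait for this message can be scripted rather than watched interactively. In the following sketch, the region name and the console output are simulated for illustration; on a live system, read the output of /opt/cics/bin/cicstail -r regionName instead:

```shell
REGION=DEMOREG   # hypothetical region name
# Simulated console output; on a live system read from cicstail instead.
output=$(printf 'region quiescing\n*** Shutdown of CICS region "%s" is complete ***\n' "$REGION")
stopped=no
while read line; do
  case "$line" in
    *'Shutdown of CICS region'*'is complete'*) stopped=yes; break ;;
  esac
done <<EOF
$output
EOF
echo "region stopped: $stopped"
```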
# /opt/cics/bin/cicssfsshut /.:/cics/sfs/sfsServerName
# mkdir /cics/backup_directory
# cd /cics/patch_directory
# ./cicssave.sol -d backup_directory
# pkgadd -d . [-r ./CICSProd.res] [-a ./PTF.admin]
The -r option and its argument are required if you are upgrading a production system; if you are upgrading a development system, do not include them in the command.
Note: When you have determined that the changes in this patch are satisfactory, remove the backup directory and the scripts that use it. If you decide to remove the patch, change to the temporary directory patch_directory you created in Step 2 and use the cicsbackout.sol script that is included with the patch to restore the previous version of CICS, as follows:
# ./cicsbackout.sol -d backup_directory
where backup_directory is the same directory that was specified with the cicssave.sol script in Step 6 before the installation of the patch.
Note: The pkginfo command can be used after installing or removing a patch to verify the current CICS patch level. Use the pkginfo command as follows:
# pkginfo -l
Note: If any copy of cicsprCOBOL was built to support transactions using other products (such as a relational database management product), rebuild it by running cicsmkcobol with the same operands that were used originally. Refer to the CICS Administration Reference for more information.
# /opt/cics/bin/cicsdb2conf -I -r regionName -C -i instanceName \
      -a databaseName -s
You can safely ignore error messages about the creation of queue objects.
In this command, the -I option directs the command to ignore nonfatal errors, and the -s option suppresses creation of the XA definition (XAD). The regionName is the name of the region using the corresponding DB2 instance (instanceName) and DB2 database (databaseName).
Note: This step is not necessary if you are using CICS-supplied switch load files.
# /opt/cics/bin/cicssfs /.:/cics/sfs/sfsServerName
# /opt/cics/bin/cicsstart regionName
# /opt/cics/bin/cicstail -r regionName
The region has started successfully when the command output displays the following message:
*** CICS startup is complete ***
To ensure that no errors occurred during the region restart, you can also check the appropriate console.number file in the /var/cics_regions/regionName directory. To determine the appropriate console file, view the region's console.nam file in the same directory.
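This check can also be scripted. The sketch below builds a throwaway region directory to illustrate the logic; on a real system, point REGION_DIR at /var/cics_regions/regionName. Grepping for the ERZ message prefix is a simplification, since not every ERZ message indicates an error:

```shell
# Simulated region directory; on a real system use /var/cics_regions/regionName.
REGION_DIR=$(mktemp -d)
echo "console.000003" > "$REGION_DIR/console.nam"
printf 'startup messages\n*** CICS startup is complete ***\n' > "$REGION_DIR/console.000003"
# console.nam names the console file currently in use by the region.
console_file=$(cat "$REGION_DIR/console.nam")
# CICS messages carry the ERZ prefix; scan the current console file for them.
if grep -q 'ERZ' "$REGION_DIR/$console_file"; then
  status="possible problems logged in $console_file"
else
  status="no ERZ messages in $console_file"
fi
echo "$status"
```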
# /opt/cics/bin/cicsstart regionName
Use the cicstail command as shown in Step 3 to verify that each remote region has been restarted.
# /opt/cics/bin/cicsterm -r regionName
The Certified Patch documentation is included in the CPL4 tar file, along with the updated CICS packages. The documentation is provided in PostScript (READCPL.ps) and Hypertext Markup Language (READCPL.htm) formats. Extract the Certified Patch documentation by using the tar command as follows:
# tar -xvf cpl_file READCPL.ps READCPL.htm
where cpl_file is the pathname of the CPL4 file.
Certified Patch documentation is also available at the following Transarc Web site:
http://www.transarc.com/Library/documentation/cpl
This chapter describes the procedure for installing the Certified Patch for TXSeries Encina 4.2 for the Solaris 2.5.1 operating environment and the prerequisites that you must meet before beginning the installation.
Table 2 lists the prerequisites for installing CPL4.
Table 2. Prerequisites for Installing CPL4
Component | Requirement |
Operating system | Solaris 2.5.1 |
Distributed Computing Environment (DCE) | Transarc DCE 1.1 |
Encina (servers and clients) | TXSeries Encina 4.2 |
Note: This Certified Patch was tested using Solaris 2.5.1 and Transarc DCE 1.1 at patch level 41.
You must install the Solaris Recommended Patches from SunSoft. These Solaris patches are required to correct problems in the Solaris operating environment. You can determine which patches are already installed on a machine by using the showrev -p command. Note that SunSoft can make additional revisions of these patches available at any future time; always install the latest revision of a patch. This Certified Patch was tested using the following set of Recommended Patches for the Solaris 2.5.1 operating environment:
patch ID 103461-28 | patch ID 103558-14 | patch ID 103566-39 |
patch ID 103582-18 | patch ID 103594-16 | patch ID 103597-04 |
patch ID 103603-09 | patch ID 103612-47 | patch ID 103622-12 |
patch ID 103630-13 | patch ID 103640-24 | patch ID 103663-15 |
patch ID 103680-02 | patch ID 103686-02 | patch ID 103690-09 |
patch ID 103696-04 | patch ID 103699-02 | patch ID 103738-08 |
patch ID 103743-01 | patch ID 103801-07 | patch ID 103817-03 |
patch ID 103866-05 | patch ID 103879-04 | patch ID 103900-01 |
patch ID 103901-11 | patch ID 103934-08 | patch ID 103959-08 |
patch ID 103981-16 | patch ID 104010-01 | patch ID 104166-03 |
patch ID 104212-13 | patch ID 104220-03 | patch ID 104246-08 |
patch ID 104266-01 | patch ID 104283-04 | patch ID 104317-01 |
patch ID 104331-07 | patch ID 104334-01 | patch ID 104338-02 |
patch ID 104433-09 | patch ID 104489-08 | patch ID 104490-05 |
patch ID 104516-03 | patch ID 104533-04 | patch ID 104560-05 |
patch ID 104595-06 | patch ID 104605-08 | patch ID 104613-01 |
patch ID 104628-05 | patch ID 104650-02 | patch ID 104654-05 |
patch ID 104692-01 | patch ID 104708-15 | patch ID 104735-02 |
patch ID 104736-04 | patch ID 104776-02 | patch ID 104795-02 |
patch ID 104841-03 | patch ID 104893-01 | patch ID 104915-09 |
patch ID 104935-01 | patch ID 104956-04 | patch ID 104958-01 |
patch ID 104960-01 | patch ID 104968-02 | patch ID 104976-03 |
patch ID 105004-10 | patch ID 105050-01 | patch ID 105092-01 |
patch ID 105251-01 | patch ID 105299-01 | patch ID 105310-07 |
patch ID 105324-03 | patch ID 105344-01 | patch ID 105352-01 |
patch ID 105784-02 | patch ID 105789-03 | patch ID 105790-11 |
patch ID 106224-01 | patch ID 106382-01 | patch ID 106662-01 |
patch ID 106663-01 |
Before installing CPL4 on a machine, you must shut down all Encina servers running on that machine, including cell and node managers, application servers, and recoverable servers. If a client is running on the machine, you must shut it down also.
Note: When you install CPL4 on the machine on which your Encina cell manager runs, you must decide whether you want to keep all of the other servers in the cell running or shut them down. If you shut down the cell manager in a multinode cell without shutting down all of the servers in that cell, any serious messages that are emitted while the cell manager is down can be lost. To avoid this problem, shut down all of the servers in the cell before shutting down the cell manager. You do not need to shut down all of the node managers in the cell; instead, you must shut down only the node manager on the machine on which you are installing CPL4.
Perform the following steps to install CPL4 for Encina 4.2 on a machine running the Solaris 2.5.1 operating environment:
# /opt/dce/bin/dce_login Encina_admin_principal
# /opt/encina/bin/enconsole /.:/cell_name &
For information on using Enconsole, see your Encina documentation.
# cd /opt
# tar -cvf /dev/tape encinalocal encinamirror
# mkdir /encina/backup_dir
# cd /encina/backup_dir
# encsave.sol -d backup_dir
# pkgadd -d /tmp/patch_directory
Note: When you have determined that the changes in this patch are satisfactory, you can remove the backup directory and the scripts that use it. If you decide to remove the patch, use the encbackout.sol script that is included with the patch to restore the previous version of Encina, as follows:
# encbackout.sol -d backup_dir
where backup_dir is the same directory that was used with the encsave.sol script (see Step 15) before installing the patch.
Note: The pkginfo command can be used after installing or removing a patch to verify the current Encina patch level. Use the pkginfo command as follows:
# pkginfo -l
# /opt/encinalocal/cell_name/ecm/rc.encina.cell
In this command, cell_name is the name of your Monitor cell.
# /opt/encinalocal/cell_name/node/node_name/rc.encina.node_name -enable
In this command, cell_name is the name of your Monitor cell, and node_name is the name of the local node manager.
# /opt/encina/bin/enconsole /.:/cell_name &
The Certified Patch documentation is included in the CPL4 tar file, along with the updated Encina packages. The documentation is provided in PostScript (READCPL.ps) and Hypertext Markup Language (READCPL.htm) formats. Extract the Certified Patch documentation by using the tar command as follows:
# tar -xvf cpl_file READCPL.ps READCPL.htm
where cpl_file is the pathname of the CPL4 file.
Certified Patch documentation is also available at the following Transarc Web site:
http://www.transarc.com/Library/documentation/cpl
This chapter describes the contents of CPL4 for TXSeries CICS 4.2 for the Solaris 2.5.1 operating environment.
Description: Executing the EXEC CICS CANCEL command to modify a transaction that was started by using the INTERVAL, PROTECT, and REQID options sometimes caused a system to become suspended.
Solution: A system no longer becomes suspended when using the EXEC CICS CANCEL command.
Description: When using the execution diagnostic facility (CEDF transaction) to debug conversational transactions, a timing problem in the CICS local terminal code caused the terminal to become suspended after the first screen was displayed.
Solution: The timing problem has been resolved, and the CICS terminal no longer becomes suspended.
Description: The maximum size for core files generated for CICS program code and data areas was 64 KB.
Solution: There is no longer a size restriction on core files for CICS program code and data areas.
Description: A transaction started asynchronously on a UNIX terminal deleted a transaction name that was specified in a previous EXEC CICS RETURN TRANSID command.
Solution: The results of an EXEC CICS RETURN TRANSID command are now correctly saved by the terminal. After the asynchronously started transaction completes, the transaction runs when the next AID key on the terminal is pressed.
Description: An exception raised during the execution of an external call interface (ECI) program was not reported by the CICS UNIX ECI client.
Solution: An ECI client now correctly reports exceptions.
Description: On CICS for Solaris, executing the cicssfscreate command while trace was on caused a memory violation.
Solution: The cicssfscreate command now runs while trace is on without causing a memory violation.
Description: The CICS Intercommunication Guide (SC09-3900-00) incorrectly implies that TXSeries CICS 4.2 supports traditional Chinese and the Big 5 character set on Windows NT.
Solution: CICS 4.2 does not support traditional Chinese or the Big 5 character set. See your platform's Quick Beginnings document for a list of languages supported by CICS 4.2.
Description: The CICS macro DFHMDF now supports proposed Micro Focus COBOL extensions to currency picture specifications. The proposed extensions include enabling multiple currency sign clauses to be specified, adding a PICTURE SYMBOL phrase, and enabling the CURRENCY SYMBOL definition to contain one or more characters.
Solution: To enable support for these extensions, the following changes have been made to the DFHMDF macro:
Description: Double-byte characters were not deleted correctly in the CICS-supplied transaction CECI WRITEQ TS.
Solution: Double-byte characters can now successfully be deleted in the CECI WRITEQ TS transaction.
Description: The tab key did not work in the Available CICS Regions panel of a cicsterm session.
Solution: The tab key now works correctly in the Available CICS Regions panel of a cicsterm session.
Description: If you used the CICS-supplied transaction CESN or the CICS EXEC SIGNON command to log into CICS and change your password, the return message indicated only that the password was changed; it did not indicate what user had logged in.
Solution: The message returned in this situation now indicates the user ID as well as the fact that the password has been changed.
Description: If the CICS Installation Verification Program (cicsivp) was used with DB2, product trace failed when the DB2 username or password was not set.
Solution: The CICS IVP program now functions correctly in this situation.
Description: Transient data (TD) queues on DB2 were not purged after being written and read. TD queues had to be explicitly deleted.
Solution: TD queues on DB2 are now correctly purged from the database after being written and read.
Description: CICS self-consistency checks sometimes incorrectly determined the number of application server logs in use, resulting in the termination of the CICS region.
Solution: CICS self-consistency checks no longer miscount the number of application server logs in use, and CICS regions no longer terminate for this reason.
Description: Null characters in a 3270 datastream were ignored when the datastream was set to use a multibyte character set (that is, when the PS operand on the datastream was set to 8).
Solution: Null characters are now handled correctly in a multibyte 3270 datastream.
Description: If a recoverable temporary storage queue (TSQ) was written, read, and deleted, and another TSQ was then written, attempts to read the second TSQ failed.
Solution: CICS now correctly reads a TSQ that was written after the deletion of a previous TSQ.
Description: CICS applications that used ECI sometimes truncated the length of the communications area (COMMAREA) after making several calls in an extended logical unit of work (LUW).
Solution: COMMAREA lengths are no longer truncated when an application that uses ECI makes multiple calls in an extended LUW.
Description: A CICS application server (cicsas) sometimes failed with signal 30. The symptom records (symrecs) file indicated that the error occurred in the RegDC_CleanUp function.
Solution: CICS now handles this situation correctly, and these failures no longer occur.
Description: A CICS application server (cicsas) sometimes failed with signal 30 after misinterpreting a signal sent by IBM SNA to CICS.
Solution: CICS now interprets IBM SNA signals correctly, and CICS application servers no longer fail under these circumstances.
Description: If the cicstail command was issued without the -r option, the command failed with a syntax error.
Solution: The cicstail command now works correctly if it is issued without the -r option.
Description: On CICS for Solaris, if the cicscp start region command was issued in an environment with the LOCALE variable set to it (Italian), the command failed.
Solution: The cicscp start region command now works correctly when the LOCALE variable is set to it (Italian).
Description: The cicsdb2conf command fails when used with DB2 version 2.1.2 and the AFS file system.
Solution: DB2 2.1.2 is not supported for use with AFS, but you can work around this problem by copying the DB2 binding files from AFS to a local directory on your machine and setting the CICS_DB2CONF_BIND environment variable to the location of that local directory.
Description: The cicsrm command sometimes failed with an illegal address error because of a problem with allocating shared memory.
Solution: The problem with allocating shared memory has been resolved, and the cicsrm command no longer fails with an illegal address error.
Description: Autoinstallation of CICS terminals sometimes failed because the installation process accessed the terminal index without first acquiring a valid mutex.
Solution: The installation process now acquires a mutex before accessing the terminal index, and autoinstallation of CICS terminals works correctly.
Description: An incorrectly sized variable caused the addition of a temporary data queue to a running region to fail with illegal address errors if trace was active.
Solution: The variable is now set to the correct length and no longer causes illegal address errors.
Description: The cicscp start region command sometimes failed when issued in an RPC-only environment.
Solution: The cicscp start region command now successfully starts a region when issued in an RPC-only environment.
Description: A CICS application server terminated abnormally if the EXEC CICS START command was used to pass it data between 32751 and 32767 bytes in length.
Solution: CICS application servers no longer fail under these circumstances.
Description: If the cicsdb2conf command was issued with the -l option, it wrote to an incorrect file.
Solution: The cicsdb2conf command now writes to the correct file when issued with the -l option.
Description: In a CICS RPC-only environment in which two SFS servers or an SFS server and a Peer-to-Peer Communications (PPC) Gateway server were running on the same machine, the servers sometimes attempted to use the same RPC endpoint, resulting in errors.
Solution: The cicscp command now adds unique identifiers to each server's entry in the binding file, thus preventing these errors. If you manually create a binding file entry for an SFS or a PPC Gateway server (that is, after running the cicssfscreate or cicsppcgwycreate commands), perform the following procedure to include a universal unique identifier (UUID) for the server in the server's binding file entry:
% /usr/bin/uuidgen
Make sure to note the UUID generated by the command, because it is necessary to supply it in Step 2.
<server_name> <UUID>@ncadg_ip_udp:<network_address> [<port_number>]
where server_name is the full pathname of the server (for example, /.:/cics/sfs/TASFS1), UUID is the output from the uuidgen command, network_address is the address of the network adapter on which the server is to listen, and port_number is the IP port on which the server is to listen (note that you cannot use a port that is already listed in your TCP/IP services file). The network_address variable is optional and can be specified either in Internet dotted decimal notation (for example, 123.45.67.8) or by a name that can be resolved by a TCP/IP name server. If you do not specify a value for the network_address variable, the default value is the network name of the local machine.
Note: As soon as you specify the IP port number in the binding file, it is recommended that you add an entry to the TCP/IP services file to indicate that the port is now reserved for your server. Refer to your TCP/IP documentation for more information on the services file.
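As an illustration of this format, a binding-file entry for an SFS server might look like the following. The server name, UUID, network address, and port are all made-up example values; substitute the UUID produced by uuidgen and your own address and port:

```
/.:/cics/sfs/TASFS1 2fac1234-31f8-11b4-a222-08002b34c003@ncadg_ip_udp:123.45.67.8 [5004]
```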
Description: RPC failures sometimes caused a deadlock in an external presentation interface (EPI) client.
Solution: RPC failures no longer cause deadlocks in EPI clients.
Description: CICS misinterpreted errors returned by EPI clients, causing incorrect symptoms to be written to the symptom records (symrecs) file.
Solution: CICS now correctly interprets EPI client errors and writes correct information to the symrecs file.
Description: Tracing a stressed CICS region (for example, a region with a large number of users or one that was running a transaction with multiple nested levels of EXEC CICS LINK calls) sometimes resulted in an access violation.
Solution: Tracing a stressed CICS region no longer causes an access violation.
Description: The xa_open string, which includes the database password, is printed in the CICS console files when a region attempts to connect to an XA-compliant database.
Solution: CICS has a new environment variable called CICS_SUPPRESS_XA_OPEN_STRING. If you set this variable, the xa_open string containing the database password is not printed in the CICS console files when a region attempts to connect to an XA-compliant database. To set the CICS_SUPPRESS_XA_OPEN_STRING environment variable, add the following line to the region's environment file (located in the /var/cics_regions/region_name directory):
CICS_SUPPRESS_XA_OPEN_STRING=1
Setting this variable prevents CICS from writing the database password to the console files.
Description: Two new CICS environment variables, CICSDB2CONF_CONNECT_USER and CICSDB2CONF_CONNECT_USING, are now provided for use with the cicsddt and cicsdb2conf utilities.
Solution: To enable the cicsddt and cicsdb2conf utilities to connect to DB2 Universal Database (UDB), set the value of CICSDB2CONF_CONNECT_USER to a valid DB2 UDB user ID and the value of CICSDB2CONF_CONNECT_USING to the user's DB2 UDB password.
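For example, in a Bourne shell session the variables might be set as follows before running either utility; the user ID and password shown are placeholders only:

```shell
# Placeholder DB2 UDB credentials; substitute real values before running
# the cicsddt or cicsdb2conf utilities.
CICSDB2CONF_CONNECT_USER=db2admin
CICSDB2CONF_CONNECT_USING=db2secret
export CICSDB2CONF_CONNECT_USER CICSDB2CONF_CONNECT_USING
echo "DB2 connection user: $CICSDB2CONF_CONNECT_USER"
```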
Description: Each time a CICS region received an unknown user ID and granted public access rights to the user, the region wrote an ERZ045006W message to the CSMT file.
Solution: You can now prevent CICS from writing this message each time by setting the value of the CICS_SUPPRESS_BAD_USER environment variable to Yes. To set this environment variable, add the following line to the region's environment file (located in the /var/cics_regions/region_name directory):
CICS_SUPPRESS_BAD_USER=Yes
Description: In a distributed transaction processing (DTP) conversation between two CICS regions, if one region issued a SEND WAIT call and then issued an ISSUE SIGNAL call, the data sent in the SEND WAIT call was sometimes lost.
Solution: This error no longer occurs.
Description: If transaction routing was used to run a transaction that was not known on a remote CICS region, CICS returned a misleading error message.
Solution: The error message returned in this situation now specifies that the transaction is unknown on the remote region.
Description: If a CICS Common Client using the Systems Network Architecture (SNA) protocol issued the CCIN transaction to a CICS region, the transaction failed with abend code A42E.
Solution: A CICS Common Client using the SNA protocol can now successfully issue the CCIN transaction to a CICS region.
Description: If a resource manager failed before the current unit of work was completed and returned the code XAER_NOTA in response to an xa_end call from CICS, CICS treated the code as a severe error.
Solution: CICS now treats the XAER_NOTA return code as an indication to roll back the current unit of work and no longer considers it a severe error.
Description: In an RPC-only environment, the cicscp command sometimes attempted to add redundant files for a region to an SFS server; the command sometimes failed under these circumstances.
Solution: The cicscp command no longer attempts to add files for a region to an SFS server if the files have already been added to the SFS server, and these failures no longer occur.
Description: The C example database programs for Sybase sometimes caused truncation failures.
Solution: The example programs no longer cause truncation failures.
Description: The output of the CEMT SET PROGRAM ALL, CEMT SET TERMINAL ALL, and CEMT SET TRANSACTION ALL transactions listed each item multiple times.
Solution: The output of these transactions now lists each item only once.
Description: If a command that updates the CICS runtime database (for instance, the cicsadd and cicsupdate commands) was issued before the cicssetupclients command was issued, the command failed with no indication of what caused the failure.
Solution: In this situation, commands such as cicsadd and cicsupdate now return a message stating that the cicssetupclients command has not been run.
Description: The cicsupdateclass command failed with an access violation if it was issued without the -r option.
Solution: The cicsupdateclass command no longer fails if it is issued without the -r option.
Description: If the CICS-supplied transaction CDCN was used in an X Window environment, the debugger window did not open because CICS used the X Window display specification incorrectly.
Solution: The CDCN transaction now works correctly with X Window.
Description: CICS did not support the XA dynamic registration functionality available in Oracle version 8.
Solution: CICS now supports the XA dynamic registration functionality available in Oracle version 8. As a result, a new switch load file (oracle8_xa.c) and a new makefile (oracle8_xa.mk) are included with CICS.
Description: Different CICS regions sometimes allocated the same names to different autoinstalled terminals, causing errors when transaction routing was used.
Solution: CICS regions now use random numbers to name autoinstalled terminals, thus greatly reducing the chances of name duplication.
Description: The cicscp command sometimes became suspended when it was used to start a Structured File Server (SFS) server in an RPC-only environment.
Solution: The cicscp command no longer becomes suspended when used to start an SFS server in an RPC-only environment.
Description: On CICS for Solaris, in a remote procedure call (RPC)-only environment, the cicscp stop dce command returned a failure message even though the command ran successfully.
Solution: The cicscp stop dce command no longer returns false failure messages.
Description: If a Micro Focus COBOL program did not have a SPECIAL-NAMES paragraph in the ENVIRONMENT DIVISION, the CICS COBOL translator (cicstcl) inserted a SPECIAL-NAMES paragraph that was not terminated by a period (.) in the translator output file. The missing period resulted in a compiler warning.
Solution: The CICS COBOL translator (cicstcl) now correctly terminates an inserted SPECIAL-NAMES paragraph with a period, and compiler warnings no longer occur in this situation.
Description: A segmentation fault occurred if the Distributed Computing Environment (DCE) credentials of an external call interface (ECI) client expired.
Solution: If an ECI client's DCE credentials expire, the client now terminates gracefully, rather than failing with a segmentation fault.
Description: A corrupted queue of asynchronously started transactions sometimes caused a CICS region to fail.
Solution: This situation no longer causes CICS regions to fail.
Description: On CICS for Solaris, the EPI workload example programs incorrectly reported microsecond values as nanosecond values.
Solution: The EPI workload example programs on all platforms now correctly report microsecond values.
Description: Attempts to connect to a CICS region from a CICS Common Client sometimes failed with communications errors because CICS closed sockets that were still allocated to connections. These errors sometimes also resulted in the CICS region terminating abnormally.
Solution: CICS no longer closes sockets that are allocated to connections, thus preventing many communications errors. When communications errors do occur, CICS writes diagnostics to the symrecs file instead of abnormally terminating the region.
Description: The cicstfmt command failed to release memory and sometimes failed, producing a core file.
Solution: The cicstfmt command now works correctly.
Description: The cicsinstall command failed on machines on which the CICS server code but not the CICS client code was installed.
Solution: The cicsinstall command now works correctly on machines that are installed with the CICS server code but not the CICS client code.
Description: Error messages returned by the cicsteld command did not include the exception that caused the error.
Solution: Error messages returned by the cicsteld command now include the exception that caused the error.
Description: CICS introspects (checks of a region's integrity) sometimes failed incorrectly, reporting corruption when no corruption existed.
Solution: Introspects no longer fail incorrectly or falsely report corruption.
Description: It was not possible to change a DCE password from within a CICS application server (cicsas) if the application server previously called Micro Focus COBOL code.
Solution: Using Micro Focus COBOL with a CICS application server no longer prevents you from changing a DCE password within the application.
Description: If a transaction that had been passed a COMMAREA terminated abnormally, CICS sometimes attempted to deallocate the wrong area of memory when cleaning up the transaction. This resulted in access violations.
Solution: CICS now correctly deallocates memory in this situation, and access violations no longer occur.
Description: A race condition within CICS sometimes caused CICS to fail with a segmentation fault if trace was active.
Solution: This race condition no longer occurs.
Description: In an RPC-only environment, issuing the cicscp command to start an SFS server sometimes resulted in an access violation.
Solution: The cicscp command no longer produces access violations when used to start an SFS server.
Description: If the cicstran command was used to translate a Micro Focus COBOL program that already included a CONFIGURATION SECTION, the command inserted another CONFIGURATION SECTION. The extra section resulted in a compilation failure.
Solution: The cicstran command no longer inserts an extra CONFIGURATION SECTION into Micro Focus COBOL programs.
Description: If the cicstran command was run on a Micro Focus COBOL program that had an INPUT-OUTPUT section but not an ENVIRONMENT DIVISION, the output of the command included an ENVIRONMENT DIVISION in the wrong position. The incorrectly ordered section caused a failure when the program was compiled.
Solution: The cicstran command now inserts an ENVIRONMENT DIVISION at the correct place in its output in this situation, and compiler errors no longer occur.
Description: On CICS for Solaris, a large number of IP listeners or a transaction with multiple nested levels of EXEC CICS LINK calls sometimes resulted in an access violation.
Solution: CICS for Solaris now successfully handles large numbers of IP listeners and transactions with multiple nested levels of EXEC CICS LINK calls.
Description: A CICS application server sometimes becomes suspended while waiting for a reply from a CICS Common Client.
Solution: In most cases, this is normal behavior. However, to force an application server to time out a transaction if the CICS Client does not reply within a certain period of time, you can set the value of the CICS_XP_RECV_TIMEOUT environment variable to the number of seconds the application server waits before backing out the transaction.
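As a sketch, the environment variable might be set in the environment from which the CICS region is started; the 60-second value below is illustrative only, not a recommended setting:

```shell
# Hypothetical example: have application servers back out a transaction
# if a CICS Client does not reply within 60 seconds.
CICS_XP_RECV_TIMEOUT=60
export CICS_XP_RECV_TIMEOUT

# Confirm the setting before starting the region.
echo "CICS_XP_RECV_TIMEOUT=${CICS_XP_RECV_TIMEOUT}"
```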
Description: On CICS for Solaris, a stressed CICS region (for example, a region with a large number of users or one that was running a transaction with multiple nested levels of EXEC CICS LINK calls) sometimes failed with an access violation.
Solution: Stressed CICS regions no longer fail with access violations.
This chapter describes the contents of CPL4 for TXSeries Encina 4.2 on the Solaris 2.5.1 operating environment.
Patch 6 contains no additional defect fixes or source code changes. The patch number has been changed to ensure that all of CPL4 is installed as a single unit.
Description: The Encina++ binding-by-name logic incorrectly ignored the interface's major version number, resulting in clients binding to incompatible servers.
Solution: The client-side binding logic has been changed to ensure that it binds only to servers exporting the same interface version as the client does.
Description: The Data Definition Language (DDL) program sometimes terminated unexpectedly because of a memory violation.
Solution: The delete operation function in the DDL cleanup path, which caused an invalid memory access, has been removed.
Description: The showProcInfo function did not work with the HP-UX 4.02 version of the dde debugger due to changes in this version of the debugger.
Solution: The showProcInfo function now works with any recent version of HP-UX dde.
Description: A server using TMXA_SERIALIZE_ALL_XA_OPERATIONS with TM-XA sometimes became locked if an ancestor transaction was aborted while the descendent was still associated with the resource manager.
This occurred because one thread attempting to perform the abort required the RM lock held by the descendant transaction. When the descendant transaction completed, the tran_End function was blocked because the other thread had called the tran_DelayAbort function.
Solution: The TMXA abort-callback now takes the GLOBAL lock, then tries to take the RM lock. If it gets the RM lock, the callback thread proceeds with the XA abort work. If the callback cannot get the RM lock, it records this as a deferred or pending abort, and calls the tran_DelayAbort function to prevent the Distributed Transaction Service (TRAN) from further resolving the transaction. When the RM lock is released, a test is made to see if there are any pending aborts. If there are pending aborts, the thread that owns the RM lock completes the XA abort work for the transactions to be aborted.
Description: If an error was encountered while log archives were being flushed, it was possible for potential waiters to be blocked indefinitely. This was possible when the extent_FlushPrimary function failed and the extent_FlushToArchives function returned a LOG_VOL_ERROR message without clearing the bufferInProgress flag and signaling the waiters.
Solution: This problem has been corrected.
Description: A client running on a Solaris 2.6 machine sometimes failed to establish a connection with a Peer-to-Peer Communications (PPC) Gateway server due to an unexpected error status (EACCESS) from the bind system call.
Solution: The logic has been changed to recognize the new error status as a transient error.
Description: Performing an access control list (ACL) check on a Recoverable Queueing Service (RQS) server, when authenticated as the Encina administrator (encina_admin), using the dcecp or enccp commands, sometimes failed with the following message:
Error: operation on acl not authorized
This error occurred because the call to obtain the ACL did not include the caller's handle, so the RQS AclLookup function returned the message sec_acl_not_authorized and a null ACL.
Solution: Local calls (null sec_acl_mgr handle) are now permitted to use the AclLookup function. The implementation of the rdacl function uses this lookup for the rdacl_get_access function and other calls.
Description: Enconsole and enccp failed to cold start an RQS server if it was defined with authentication and authorization disabled. The failure generated the following error message:
initializing rqs server with data volume dataVol ... done
adding initial acl, group:encina_admin_group:caxtq ... FAILED
Command failed with the following status:
DCE-rpc-0044: Unknown interface (dce / rpc)
Call to function alibUtils_SetGroupAcl failed with the following status:
DCE-rpc-0044: Unknown interface (dce / rpc)
...
Solution: Enconsole and enccp now cold start an RQS server even if the server was defined with authentication and authorization disabled. The ACL function (rdacl_interface) is now enabled when the RQS server is running with authorization disabled.
Description: After a successful dequeue, an RQS client sometimes noticed that the returned elementType was blank. This occurred when the client was accessing a type that had been destroyed and re-created by another RQS client. The RQS client cached name/ID mappings and the first client was not properly notified when the second client destroyed and re-created the type.
Solution: The RQS_OBSOLETE_ELEMENT_TYPE status is now returned to the client, which then uses the rqs_ElementTypeRetrieve function to refresh its cached information. As a result, the client-cached type ID no longer becomes out of sync with the server.
Description: Several RQS private warnings, issued under ordinary circumstances while client callbacks were being reestablished after an unexpected termination of the application, were confusing and sometimes incorrect. For example:
server prod.v4.1.wfmRouteS.hq3y2u04 node hq3y2u04 paNum 0 68 22898 00/01/03-10:08:12.731777 8cbc6427 W CCBM: Simple Refresh RPC aborted, <NULL>.
server rqs01 node hq3y2u04 paNum -1 34 29158 00/01/03-10:08:12.872943 8ccc7017 W CM: Reference to client in obsolete server gen, id {serverGen37 serverClientIdx 0}.
server prod.v4.1.wfmRouteS.hq3y2u04 node hq3y2u04 paNum 0 68 22898 00/01/03-10:08:12.909600 8cbc8037 W CCBM: Server crashed, restarted. Must reregister.
server prod.v4.1.wfmRouteS.hq3y2u04 node hq3y2u04 paNum 0 68 22898 00/01/03-10:08:12.940856 8cbc6837 W CCBM: Successfully refreshed after server crash.
server prod.v4.1.wfmRouteS.hq3y2u04 node hq3y2u04 paNum 0 68 22898 00/01/03-10:08:12.974038 8cb40417 W rqs_StatusToString: Unknown status code: 84017931
server prod.v4.1.wfmRouteS.hq3y2u04 node hq3y2u04 paNum 0 68 22898 00/01/03-10:08:13.007286 8cbc6857 W CCBM: Error RQS_STATUS_CODE_ILLEGAL refreshing callback 0, queue {Header 0, elem 4, uniqueId 11}
server prod.v4.2.wiq_pkgS.hq3y2u04 node hq3y2u04 paNum 0 67 22910
Solution: The inappropriate warning messages have been changed to events and several of the fatal messages have been changed to uncond_events. Also, the entries in the status arrays have been initialized to eliminate those types of warnings.
Description: If an unexpected program type attempted to connect to a TCP port, the PPC Gateway program sometimes terminated with the following FATAL error:
6c0c4416 F PPC/TCP: corrupted data received
This sometimes happened accidentally, for example, due to a configuration error in a program using a fixed TCP address, or in a denial-of-service attack on the gateway.
Solution: The original logic for detecting an invalid connection has been changed to ignore such a connection and to generate a WARNING message with diagnostic information to help identify the errant program.
Description: The trdce routine sec_ModifyAcl was altering the IN access control list (ACL) instead of creating an OUT ACL in cases where the aclEntry was present in the IN ACL. This resulted in a segmentation violation and sometimes in unexpected changes to the default ACLs for an RQS, because the IN ACL (which could be a default ACL) was sometimes modified unintentionally.
Solution: The trdce routine now creates the OUT ACL, adding or replacing the requested entry as appropriate.
Description: During initialization, if a processing agent (PA) failed while holding the interprocess mutex that is used to protect the reservation information, attempts to restart the PA sometimes failed in the InitSharedPaInfo function because the new mutex could not be obtained.
Solution: The initialization process now waits only 60 seconds to obtain the interprocess mutex. If it cannot be obtained, the process continues under the assumption that a previous PA instance failed while holding it, and that it is safe to continue with initialization and unlock the mutex.
Description: When server-side transactions were used with the TM-XA transaction-duration locking support, the process failed with the following fatal error:
5c0ca036 F Encina Internal Error -- Call your Support Representative: Unable to obtain ancestor for tid 10000
00000006 F .../2.0/source/src/server/tmxa/data.c 4041
Solution: The TM-XA threadTid callback has been modified to ignore the TRPC wrapped TID used in server-side transactions.
Description: There were minor spelling errors in the following message files: rec_status.msg, ros_status.msg, trace_private.msg, and trace_tmp.msg.
Solution: The misspellings have been corrected, and the message numbers remain the same. There is no impact on the translations, and the English version is now correct.
Description: During restart, a recoverable server (probably the cell manager) encountered the following fatal error:
18744c26 F Recovered Fragment lsn does not correspond to its position in backup stream.
This occurred because the possibility existed for two log extents to have the same log sequence number (LSN). Two extents could have the same LSN if the extents consisted entirely of meta-records (which do not have LSNs assigned) and if the LSNs of the extents were based on the order in which they were read, instead of the order in which they were written.
Solution: Each extent's LSN is now incremented to at least one greater than the first LSN of the previous extent.
Description: A colon (:) could not be used for the path separator in the restart string for recoverable Encina servers on Windows NT, so a semicolon (;) was used instead.
Solution: You can now use the semicolon as a separator on other platforms as well.
Note: If you use a semicolon on UNIX systems, you might need to quote or escape this character, because the semicolon is a metacharacter in many UNIX shells.
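As a sketch of the quoting required, with hypothetical volume paths standing in for a real restart string:

```shell
# Hypothetical restart string that uses ';' as the path separator.
# Unquoted, the shell would treat the semicolon as a command separator,
# so single-quote the value.
RESTART_STRING='/var/encina/logVol;/var/encina/dataVol'
echo "$RESTART_STRING"
```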
Description: A segmentation violation sometimes occurred when a PPC Gateway server received a remote Systems Network Architecture (SNA) request with a userid of exactly eight bytes.
Solution: The problem has been eliminated by null-terminating the string used for userid conversions.
Description: An Object Transaction Service (OTS) (non-DCE) server sometimes terminated with the following error:
Could not calculate XIDs of prepared transactions during recovery.
This failure occurred because during recovery, TM-XA constructs XIDs for all possible nesting models and some possibly exceeded the size limit imposed on XIDs.
Solution: This situation no longer results in a fatal error. An XID constructed for an inappropriate nesting model is ignored; the correct XID is generated and matched during recovery.
Description: In a rare race condition, if a Monitor client using explicit reservations (that is, the mon_AcquireReservations function) called a Monitor Application Server (MAS) that had a scheduling policy of MON_EXCLUSIVE, and the client was terminated while the server was granting the reservation, the server failed to release the interprocess mutex used to protect the reservation state. When this occurred, the server stopped servicing requests for any clients requiring explicit reservations.
Solution: The interprocess mutex is now released in this situation, whether or not the reservation is successful.
Description: Enconsole terminated with a fatal error if a user submitted duplicate delete requests for the last message on the Serious Messages screen.
Solution: Enconsole now simply makes a beep sound to indicate that the duplicate requests cannot be processed, instead of terminating with a fatal error.
Description: The OTS sometimes leaked Synchronization objects due to incorrect reference counting.
Solution: The reference counting now works correctly.
Description: An internal data structure was not being initialized properly when a startup or shutdown task was modified for server dependencies. Starting a server set that included several dependencies among servers within the set resulted in the cell manager encountering the following fatal error:
60e85416 F Encina Internal Error -- Call your Support Representative: TaskFinished called for unknown tid: 555683020
00000006 F .../field/pdg/2.5/source/src/tpm/cm/cmTasks.c 609
Solution: The internal data structure is now properly initialized when a startup or shutdown task is modified for server dependencies.
Description: The recoverable storage allocator (RSA) used by RQS had a single daemon thread used for servicing a variety of requests, such as merging elements reclaimed from the orphan queue. Under high load conditions, this thread was sometimes starved by forward processing, resulting in higher than expected volume utilization for the RQS data volume.
Solution: Each request is now run in a separate thread. Also, a rare condition that possibly led to the mergeRequested boolean remaining set, which prevented subsequent merges, has been corrected.
Description: Due to a database problem involving repeated checkpoint requests, the node manager became saturated with XA logging requests. This caused a shortage of log space and a dramatic increase in the size of the node manager process.
Solution: Checkpoint requests are no longer taken in response to high use of log space. A warning is now issued when the amount of available log space becomes low.
Description: Transaction_i objects need to be marked uniquely by using a global TID. Otherwise, a quick termination and restart of a transaction-factory might cause a client to complete work on transactions that it did not intend.
Solution: A global TID is now used with the Transaction_i object.
Description: When using an XA-compliant resource manager that does not support migration (such as MQSeries), subtransactions sometimes aborted with the following error:
ENC-tmx-0002 (TMXA_DID_NOT_MIGRATE)
Solution: The TM-XA threadTid callback now correctly sets the RM flags for transaction suspension.
Description: At times, an OTS client did not perform proper initialization and then invoked a request on an object reference for a transactional object in an OTS server. This behavior caused the server to terminate with the following fatal error:
d41fc856 F Encina Internal Error -- Call your Support Representative: Unknown request received
00000006 F src/ots/runtime/common/corba/TranCommData.C 459
Solution: OTS clients now initialize properly.
Description: In high load situations, an RQS server sometimes failed with a segmentation violation. The stack trace from the core file shows the failing thread as follows:
t@48 (l@120) stopped in qsmQset_GetAcl at 0xc0fd4
0x000c0fd4: qsmQset_GetAcl+0x0104: ld [%l3 + 0x4], %l4
current thread: t@48
=> [1] qsmQset_GetAcl ( )
   [2] qsm_GetQsetAcl ( )
   [3] verifyQsetAuth ( )
   [4] qsm_Dequeue ( )
   [5] rqsServer_QSDequeque ( )
   [6] rqsSrv_TQSDequeque ( )
   [7] rqsSrv_TQSDequeque_msr ( )
   [8] op11_ssr ( )
This failure was due to a data page being referenced outside any operation and was therefore unprotected from being reused.
Solution: The relevant information from the data page is now extracted from within the operation.
Description: TRAN generated unnecessary commit-with-respect-to (CWRT) message traffic that Orbix could not gracefully handle.
Solution: TRAN no longer sends CWRT requests to other applications when a descendant is still active. CWRT answers are sent only to sites whose messages have wantsOutcome set. Because the transaction beginner does not require CWRT information from others, it does not set wantsOutcome.
Description: The OTS attempted to execute the replay_completion function indefinitely, or attempted to execute both the commit and rollback upcalls if the replay_completion function was retried after the coordinator had finished a transaction.
Solution: The OTS no longer executes the replay_completion function if the transaction has already been resolved.
Description: If you attempted to restore a failed Encina cell manager from a backup of its repository, the attempt failed because hidden attributes in the repository did not enable the restored cell manager to resynchronize with nodes in the cell.
Solution: After you restore a cell manager's data volume, you must resynchronize the values of some special repository attributes maintained by the running node managers and the cell manager. Because a backup of a cell manager's repository captures the state of that cell at a single point in time, the repository information does not necessarily reflect the actual state of the nodes and servers in the cell at the time you perform the restore. To bring the repository up to date, the cell manager must retrieve current attribute values from the node managers. To do this, use the action attribute of cell and node objects to force a resynchronization of these internal attributes between the cell manager and the running node managers.
Backing up and restoring a cell manager's repository
The following procedure describes how to back up and restore a cell manager's repository.
Note: Encina volumes can be backed up by using any one of the following methods: the tkadmin command, an enccp script such as saveRepository (for the cell manager repository only), or an operating system command such as the UNIX dd command. This procedure uses the saveRepository script to back up and then restore a cell manager's repository. The other methods mentioned can be used to back up and restore the entire cell manager data volume. Regardless of the method used, Step 4 (resynchronizing repository attributes) must follow the restore procedure so that a running cell can function normally.
Backing up a cell manager's repository
The saveRepository script queries the attributes of all objects in the repository (taking a snapshot of the repository) and then creates a script that can be used to restore those attributes. The script generates output in the form of a standalone enccp script. The generated script can be used to recreate all objects and their attribute values following a complete loss of the repository. The restore procedure consists of running the generated script and then resynchronizing the repository so that it reflects the current state of running nodes. After the resynchronization is complete, the cell can function normally. The saveRepository script assumes that the raw disks originally backing the cell manager's volumes still exist, and it restores the same volume objects.
% dce_login encina_admin
% setenv ENCINA_TPM_CELL cell_name
% saveRepository > restoreCell.ecp
% chmod +x restoreCell.ecp
Restoring a cell manager's repository
After a failure involving loss of a cell manager's data volume, do the following:
% restoreCell.ecp
enccp> ecm modify -action resync
The cell manager issues audit messages when the resynchronization begins and when it is completed. Concurrent resynchronization requests are not permitted. You can resynchronize the entire cell or synchronize nodes one at a time as follows:
Cellwide resynchronization
To resynchronize all nodes in a cell, change the value of the cell object's action attribute to resync. The cell manager attempts to contact all defined nodes and initiates resynchronization with the nodes it is able to contact. (Setting the action attribute for the cell object is the same as setting the action attribute of each node object in the cell.) The cell manager issues a warning for those nodes that cannot be resynchronized--for example, for nodes that are stopped. If servers are running on a node but the node is not running, change the value of the node object's action attribute after restarting the node.
Note: Resynchronizing the entire cell blocks access to the repository until the resynchronization is complete; that is, attempts to start new Enconsole processes or perform operations in existing Enconsole processes must wait until the cell can be contacted. If you want to maintain access to the cell during resynchronization, it is recommended that you use per-node resynchronization.
Per-Node synchronization
You can resynchronize nodes in a particular order (based on importance, activity, or number of servers, for example). First determine which nodes are running and then resynchronize just the running nodes. To resynchronize a node, change the value of the node object's action attribute to resync. This starts resynchronization of the desired node.
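By analogy with the cellwide command shown earlier, a per-node resynchronization from enccp might take the following form; the node object name (node_name) and the exact command syntax are assumptions that depend on your cell configuration and enccp version:

```
enccp> node modify node_name -action resync
```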
Description: The OTS incorrectly depended on the function _narrow to throw an exception to determine if a resource was registered with the sub_tran_aware method.
Solution: The OTS now uses the is_nil function to determine if a resource is sub_tran_aware.
Description: A conflict in an SFS server sometimes caused the external file handler (EXTFH) to destroy an open file descriptor (OFD). This resulted in a fatal error message similar to the following:
88047016 F Encina Internal Error -- Call your Support Representative: extfh: sfs_RestoreContext FAILED with status ENC-sfs-0059: Invalid OFD.
00000006 F .../pdg/2.0_ports/source/src/sfs/extfh/cobol_common.c 1330
The error message did not contain the name of the file in question, making it difficult to determine the cause of the error.
Solution: The fatal error message now contains the name of the file that caused the error.
Description: A race condition between an administration call executed from Enconsole and the return of another transactional remote procedure call (TRPC) sometimes caused a server to fail with a segmentation violation. The stack trace from the core file read as follows:
[1] admin_trpc_mgr_CallsInProgress( )
[2] op0_ssr( )
[3] rpc_cn_call_executor( )
Solution: TRPCs that are in the process of leaving the server are now ignored by an administration call if relevant information has already been freed.
Description: Tran-C implemented a fixed two-minute wait for another thread to abort a transaction. This produced the following fatal error if the timeout was exceeded:
2c301416 F tc_serial_AbortNamedTran: looped <N> times waiting for transfer of control for tid <tid>...
However, since RPC communications timeouts were possibly involved, no reasonable fixed timeout was applicable to all situations.
Solution: The fatal error has been changed to a warning, issued every five minutes, until the abort is complete.
Description: To provide more control over tuning the cleanup of TRPC handles used for the delivery of out-of-band messages, an enhancement was needed to control the maximum idle count and frequency of cleanup execution.
Solution: You can now control the frequency of cleanup execution by using the ENCINA_TRPC_CLEANUP_INTERVAL environment variable. This variable determines how frequently TRPC cache cleanup is executed; handles that have been idle too long or that have been marked invalid are removed from the cache by this cleanup processing. The default value is 30 seconds; values must be specified in microseconds (for example, 40000000 microseconds for a 40-second interval).
You can also control the maximum idle count by using the ENCINA_TRPC_CLEANUP_MAX_IDLE_COUNT environment variable. This variable determines when a handle used for TRPC out-of-band messages has been idle too long and should be removed from the cache. When a handle has been idle for the specified number of cache cleanup passes, it is removed from the cache. The default value is 20 cleanup passes.
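As a sketch, the two variables might be set together in the server's environment before startup; the specific values here are illustrative only:

```shell
# Hypothetical tuning: run TRPC cache cleanup every 40 seconds
# (the interval variable is specified in microseconds)...
ENCINA_TRPC_CLEANUP_INTERVAL=40000000
# ...and remove a handle after it has been idle for 10 cleanup passes.
ENCINA_TRPC_CLEANUP_MAX_IDLE_COUNT=10
export ENCINA_TRPC_CLEANUP_INTERVAL ENCINA_TRPC_CLEANUP_MAX_IDLE_COUNT

echo "interval=${ENCINA_TRPC_CLEANUP_INTERVAL} max_idle=${ENCINA_TRPC_CLEANUP_MAX_IDLE_COUNT}"
```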
Description: The enccp interface needed the same enhancements as those made to the emadmin interface in earlier patches. These enhancements allow you to issue a resync operation for restoring the cell repository after recovering from a failure.
Solution: The new attribute, action, is now supported in enccp.
Description: The OTS sometimes failed with fatal errors if calls to the commit_subtransaction or rollback_subtransaction methods of the SubtransactionAwareResource function threw exceptions that were not caught by the application.
Solution: When unexpected exceptions are caught, a warning is now generated instead of a fatal error.
Description: When cold starting a previously defined MAS that was not shut down cleanly, the server terminated with a fatal error that included the status code DCE-rpc-0164 (rpc_s_entry_already_exists).
Solution: The server now ignores the status code DCE-rpc-0164 (rpc_s_entry_already_exists) when calling the rpc_ns_group_mbr_add function to add a processing agent (PA) to the server group.
Description: An unexpected abort sometimes occurred during a server-side transaction (SST) when the client made an SST to a recoverable server, which then transmitted the transaction to another server. This caused the transaction to abort with the following message:
ENC-trp-0035: The server-side transaction was aborted or took an exception (TRCP_SERVER_SIDE_ABORT)
This abort was performed erroneously because the tranGetLocalState function was returning an unexpected state (TRAN_LOCAL_STATE_PREPARING) for the transaction.
Solution: The tranGetLocalState function now checks for the resolved outcomes before checking isLoggingPrepare, because it is legitimate to be logging for the benefit of others, and yet be resolved locally.
Description: An uninitialized TRPC handle in the TRPC stub code sometimes caused a fatal error if an exception was raised before the handle was initialized.
Solution: Initialization is now performed correctly, and a fatal error no longer occurs.
Description: In OTS, the Current::commit(TRUE) function for subtransactions executed the wait-for-heuristic-information logic, which was inappropriate for subtransactions. This happened because the Tran::operator= function did not copy the isNested data member.
Solution: The heuristic information logic is now executed correctly.
Description: External Encina tracing did not support the special formatting character %k for translating error codes.
Solution: External Encina tracing now supports the special formatting character %k, thus making it easier for customers to generate meaningful trace messages.
Description: The OTS call to the replay_completion function was not nonblocking.
Solution: The OTS call to the replay_completion function is now nonblocking. The replay_completion work is now done in a background thread. This adjustment also helps to avoid recovery problems when the restarting server tries to access OTS resources.
Description: The OTS did not allow concurrent threads to work on behalf of the same transaction in interoperability mode.
Solution: The ResumeProxyTran and EndWorkOnForeignTran classes now use BeginWorkingOn and EndWorkingOn methods, instead of suspend and resume methods.
Description: An unexpected exception raised by the replay_completion function resulted in the following fatal error:
d41fb886 F Unexpected exception when replaying completion
Solution: All exceptions other than INV_OBJREF are now retried.
Description: If an RQS server deleted a fast local transport (FLT) handle and subsequently received an FLT call from a client, the call was delayed for 60 seconds (the default FLT timeout value) before being completed by a TRPC.
Solution: If an RQS server receives a call to an FLT handle that it does not recognize, it now notifies the calling client immediately, resulting in an immediate TRPC to complete the call, as well as the generation of a new FLT handle.
Description: This enhancement solves several problems involving the ENCINA_TPM_HANDLE_REFRESH_INTERVAL environment variable.
Solution: This change addresses these issues by allowing more latitude in setting the environment variable ENCINA_TPM_HANDLE_REFRESH_INTERVAL.
Now, if the environment variable ENCINA_TPM_HANDLE_REFRESH_INTERVAL is set, the mon_InitClient function does not attempt the RPC to the cell manager; instead, the specified value is used for the refresh interval.
However, if the previous behavior is desired, you can preserve it by using the existing Monitor API. If the cell manager is unavailable during initialization, the mon_InitClient function returns the status code MON_CELL_UNAVAILABLE. The application can then call the mon_SetHandleCacheRefreshInterval function to set the binding cache refresh interval to the desired value.
Note: | While the values for the cell attribute and the ENCINA_TPM_HANDLE_REFRESH_INTERVAL environment variable are specified in seconds, the value specified by the mon_SetHandleCacheRefreshInterval function is specified in minutes. |
Description: If a "bind by object reference" Object Transaction Service (OTS) RPC failed, subsequent calls to the trpc_GetManagerInfo function from outside a manager function returned the message TRPC_SUCCESS rather than the expected status TRPC_NOT_IN_MGR (ENC-trp-0039).
Solution: The correct status is now returned. If you have been using the following workaround, it is now safe to delete it.
The tidl compiler has been modified to generate the code shown in the following workaround:
In the client-generated tidl stub code (*TC.C), replace the following code for each RPC:
Bind(callHandle_.tranInfoP, callHandle_.ifSpecP) ;

with:

try {
    Bind(callHandle_.tranInfoP, callHandle_.ifSpecP) ;
} catch (...) {
    trpcStub_LeaveClientStub(&callHandle_) ;
    throw ;
}
On Windows NT, C++ exception handling cannot coexist with DCE exception handling (which is structured) in the same function, so the preceding code should be moved into a static function such as the following:
static void LocalBind(OtsBinding *otsBn, trpcStub_call_t &callHandle_)
{
    try {
        otsBn->Bind(callHandle_.tranInfoP, callHandle_.ifSpecP) ;
    } catch (...) {
        trpcStub_LeaveClientStub(&callHandle_) ;
        throw ;
    }
}
The function is then called as follows:
LocalBind(this, callHandle_);
Description: When a client was making a large number of concurrent requests using the same RQS handle, and a communications error caused the FLT handle to be invalidated, a segmentation violation sometimes occurred. In a similar situation, a segmentation violation sometimes occurred with the FLT disabled.
Solution: Both of these problems have been corrected.
Description: There was no way to safely remove a logical volume from an SFS server, even if the volume was completely empty.
Solution: A new command, sfsadmin remove lvol, is now included in the sfsadmin command suite to enable you to remove a logical volume from an SFS server. It removes (disassociates) a logical volume from an SFS server. The syntax is as follows:
sfsadmin remove lvol [-server server_name] volume_name
Note: | This command is available only temporarily. It will not be available in the next release because it is superseded by the acquire/release vol support command. |
Arguments:
-server server_name: Specifies the name of the SFS server.
volume_name: Specifies the name of the logical volume to be removed.
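For example, assuming an SFS server named /.:/encina/sfs/server1 and an empty logical volume named sfs_vol1 (both names are illustrative, not defaults), the command would be invoked as follows:

sfsadmin remove lvol -server /.:/encina/sfs/server1 sfs_vol1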
Description:
Currently, an SFS server retains information about logical volumes that have been deleted by using the tkadmin delete lvol command. This behavior causes problems during server restarts and file creation. Query operations on the deleted volume also fail. The sfsadmin remove lvol command removes volume-related information from an SFS server so that after a logical volume is deleted by subsequent tkadmin commands, the server can continue to operate correctly.
Caution:
Before using the sfsadmin remove lvol command, you must make sure
that there are no pending transactions in the server. Removing a
logical volume while there are existing transactions can cause the server to
terminate abnormally and can cause problems during later restarts. Even
if existing transactions are accessing other SFS volumes (not targeted for
removal), it is safer to allow those transactions to complete as well.
Procedure:
To remove an SFS logical volume, perform the following steps:
Refer to the Encina Administration Guide Volume 1: Basic Administration for details on managing server transactions.
Caution:
After the sfsadmin remove lvol command completes, it is strongly recommended that you stop the server and restart it before resuming normal server operations. Doing any recovery work immediately following an sfsadmin remove lvol command (without first stopping and restarting the server) can cause the server to access the nonexistent volume. (In-memory data structures are not updated until a shutdown and subsequent warm start. Therefore, incorrect volume information can be logged, which can cause problems in future server restarts.)
Note: | The names of logical volumes persist at the server, even after you use the tkadmin delete lvol command to delete them. It is recommended that you rename the deleted logical volume to clearly identify it as the removed volume. Use the following sequence of commands: |
Notes:
You can also safely issue additional sfsadmin remove lvol commands to remove multiple logical volumes.
Permissions:
Encina SFS administer (A) permission on the SFS server
Description: HP-UX patch PHSS_14920 introduced an incompatible NLS message change that caused a segmentation violation in enccp when an invalid command was entered. The relevant part of the stack trace was:
'main(29) : Called from: sprintf (hpux_export stub) +0018 (C012F258)
'main(28) : Called from: \\Tcl_Eval (000BE56C)
'main(27) : Called from: \\Tcl_CatchCmd (000C06FC)
'main(26) : Called from: \\CatchCmd (00028E88)
'main(25) : Called from: CatchCmd (hpux_export stub) (00028E38)
'main(24) : Called from: \\Tcl_Eval (000BE2B8)
'main(23) : Called from: \\InterpProc (000F4A1C)
Solution: A different string (presumably the same one used by the DCE libtcl) is now used to compile enccp, and the segmentation violation no longer occurs.
Description: Although DCE allows a configurable number of Remote Procedure Calls (RPCs) to be queued to a server, sometimes it is preferable to queue requests at the client to avoid flooding the RPC runtime and potentially causing transient RPC failures in a highly loaded system.
Solution: You can now use the new environment variable ENCINA_BINDING_MAX_CONCURRENT_RPCS_PER_PA to limit the number of concurrent RPCs sent by a single client to a single PA.
Description: When a recoverable application became unavailable, processing of transactions by other applications slowed down as TRPC threads became blocked trying to generate a new handle for the failed application.
Solution: When this situation occurs, the TRPC now dedicates one thread to generate a new handle to the failed application. If other threads encounter messages for the application, they move the messages to a queue. If the application restarts successfully, the queued messages are delivered; if the application does not restart, the queued messages are discarded.
Description: During a warm startup, the enccp and emadmin commands sometimes reported duplicate entries for the RQS with the following error:
ENC-eai-0091 ENCONSOLE_MULTIPLE_REFERENCES_TO_VOLUME
Solution: Duplicate entries for the RQS no longer occur.
Description: The OTS prematurely terminated the wait for heuristic damage reports when executing an after-resolution callback.
Solution: The after-finished callback is now used for reporting heuristic information, instead of the after-resolution callback.
Description: Sending more than five concurrent MAS start requests to a node sometimes caused a deadlock in the node manager. Each start request blocked until PA 0 sent an RPC to the node manager during its initialization, but the RPC could not be serviced if the entire default thread pool was consumed with MAS start requests.
Solution: There is now a separate internal thread pool for the monNmAppl interface. The default size is 1, and you can resize it by using the ENCINA_MON_INTERNAL_TPOOL_SIZE environment variable.
Description: In certain instances, the OTS idle server callback was not being executed. This caused the OTS server to become suspended after about five minutes (as soon as the OTS garbage collection executed) if the application's own garbage collection had released any objects. This happened because the waitingThreadCount value was not properly maintained.
Solution: The waitingThreadCount value is now properly maintained so that the idle server callback function is executed appropriately.
Description: A change was introduced in Encina 2.5 and retrofitted as a patch to Encina 2.0 and 1.1 to add support for disk partitions larger than 4 GB.
However, this fix had a negative effect on I/O performance for the SFS and the RQS servers. If, after a successful file open operation, the file descriptors (FD) limit was encountered while trying to perform 64-bit I/O to the file in a new thread, improper locking sometimes occurred. This locking led to a process suspension in which several threads were blocked.
Solution: This problem no longer occurs because the locking has been corrected.
Description: If an OTS subordinate server terminated at commit time, during a call to tran_ProvideOutcome that was triggered by a call to ProxyTran_i::commit from a superior coordinator, the restarted server was unable to repeat the call to ProxyTran_i::commit because tran_ProvideOutcome failed.
Solution: The process no longer depends on tran_ProvideOutcome succeeding after the failure. It now relies on TRAN to commit the local transaction, or the superior server to repeat the Resource::commit call.
Description: The OTS set the branch qualifier length, bqual_length, of the otid_t structure to one less than it should have been.
Solution: The branch qualifier length is now set correctly and the nesting model code is now included in the branch qualifier.
Description: When an RQS client request failed due to a communications error with the server, a warning was issued that contained the following misleading status:
RPC failed. Can't talk to server : <serverName>. Reason : ENC-trp-0029 RPC failed for unknown reasons (most likely that DCE cannot pass right status)
Solution: The warning now includes an appropriate DCE status that corresponds to the exception raised by the RPC runtime.
Description: The OTS function register_subtran_aware was not throwing the NotSubTransaction exception for top-level transactions.
Solution: The problem has been corrected; NotSubTransaction is now thrown for top-level transactions.
Description: Although use of the TMXA_SERIALIZE_ALL_XA_OPERATIONS environment variable should allow a suspended association to be resumed in any Encina thread in a given process, the program checked to ensure that the same thread that suspended the association was used to resume it.
Solution: Now when the TMXA_SERIALIZE_ALL_XA_OPERATIONS environment variable is used, the whole process is considered to be the thread of control, and a suspended transaction can be resumed from any thread.
Description: During restart processing, TRAN did not drop family locks properly and unlock-time finalizations were deferred, causing problems for the OTS.
Solution: TRAN now drops family locks as each new transaction is processed during restart.
Description: The state none returned by the tran_GetLocalState function was sometimes misleading. For example, the state none was returned in the following situations:
Solution: When a transaction has never been active, the tran_GetLocalState function now returns the subtree commitment if it is available; only when it is unavailable does the tran_GetLocalState function return the state none.
Description: You were unable to use the ENCINA_BINDING_FILE environment variable if you chose to use Encina++ without using the DCE cell directory service (CDS).
Solution: The following instructions give the steps necessary to use ENCINA_BINDING_FILE environment variable with Encina++, if you choose not to use CDS:
In an Encina++ Toolkit environment using CDS, the following environment variables must be set for both the client and the server:
% setenv ENCINA_CDS_ROOT /.:/cdsRoot
% setenv ENCINA_OTS_TK_MODE 1
(The ENCINA_TK_MODE environment variable can also be used, but it will be made obsolete by the above variable.)
Additionally, the following variable must be set for the Encina++ Toolkit server:
If the server is not recoverable:
% setenv ENCINA_OTS_TK_SERVER_ARGS serverName=serverName
If the server is recoverable:
% setenv ENCINA_OTS_TK_SERVER_ARGS "servername=serverName \
    restartString=restartFile1:restartFile2 logDevice=/dev/rdsk/c0t1d0s3"
In an Encina++ Toolkit environment without CDS, the following environment variables must be set (in addition to the ones above) for both the client and the server:
% setenv ENCINA_BINDING_FILE bindingFilePath
The Encina++ binding model supports four ways of binding to a server-side object:
Binding by object reference does not require a lookup because you already possess the required binding. For the other binding modes, the binding file must contain an appropriate entry.
When binding by interface, the entry $ENCINA_CDS_ROOT/interface/interfaceName must exist in the binding file, listing the binding string for the server that exports this interface. When binding by server name, there must be an entry for $ENCINA_CDS_ROOT/server/serverName. When binding by object, there must be an entry for $ENCINA_CDS_ROOT/object/objectName.
Consider the following example:
Two servers, S1 and S2, are running on machines named siam and kramer, respectively. Server S1 exports interface I1 and named objects 011 and 012. Server S2 exports interface I2 and the named object 021. The binding files should contain entries similar to the following:
/.:/cdsRoot/server/S1    ncadg_ip_udp : siam [2021]
/.:/cdsRoot/interface/I1 ncadg_ip_udp : siam [2021]
/.:/cdsRoot/object/011   ncadg_ip_udp : siam [2021]
/.:/cdsRoot/object/012   ncadg_ip_udp : siam [2021]
/.:/cdsRoot/server/S2    ncadg_ip_udp : kramer [2042]
/.:/cdsRoot/interface/I2 ncadg_ip_udp : kramer [2042]
/.:/cdsRoot/object/021   ncadg_ip_udp : kramer [2042]
Description: When well-known endpoints (WKE) were used with an Encina binding file and the ENCINA_REGISTER_WKES environment variable was used to force registration with the DCE endpoint mapper, multiple Encina servers running on a single machine overwrote one another's entries in the endpoint map.
Solution: Multiple Encina servers running on a single machine no longer overwrite one another's entries in the DCE endpoint map.
Description: The field names in the TransIdentity structure were erroneously listed as coordinator and terminator, terms that are not permitted in CORBA IDL (which permits neither coordinator nor Coordinator).
Solution: The field names in the TransIdentity structure are now listed as coord and term, respectively.
Description: The monReserve_GetPaReservationStatus function showed the value LONG_TERM_RESERVED for short-term reservations and the value RESERVED for long-term reservations.
Solution: The monReserve_GetPaReservationStatus function now shows the appropriate value for each type of reservation.
Description: Encina incorrectly denied access to services due to an invalid ACL check when the principal included only the primary group ACL (such as when the inprojlist no option was specified) in the DCE Privilege Attribute Certificate (PAC). Encina incorrectly tested for a non-nil group universal unique identifier (UUID) when it should have been testing for a nil group UUID.
Solution: Encina now correctly tests for a nil group UUID and properly evaluates any non-nil primary group ACL.
Description: As originally implemented, when a TRPC server-side transaction aborted, the RPC was always terminated via the exception TRPC_SERVER_SIDE_ABORT. However, applications sometimes needed to be able to return out parameters on the RPC, even though the transaction was aborted, just as the TX transaction-demarcation specification allows.
Solution: This enhancement introduces a new TRPC server-side support function, trpc_ServerSideIgnoreAbort, which can be called from within the scope of a manager function to request that the TRPC stub code not terminate the RPC when the transaction aborts (assuming that the TRPC otherwise completes normally). Instead, the RPC completes normally, so that out parameters can be returned to the client. (Any abort reason is still stored at the server and retrieved by the client, so the trpc_serverSideAbortReason function can still be called to obtain the reason for a server-side abort.)
A new TRPC status code, TRPC_MGR_ABORTED_RPC_OK, is either returned or raised, depending upon whether the application includes a parameter of type trpc_status_t for the TRPC, so that the client application can determine if the out parameters are valid and handle them appropriately.
Description: If the login context expired while Enconsole was dormant, the pop-up menu telling the user to log in often became visible after the user had already begun to log in.
Solution: If Enconsole is dormant when the login context expires, the title bar is now updated to indicate that the login has expired; no pop-up menu appears.
Description: If a client used the implicit transaction mode to call a server the first time, and the transaction was aborted by the OtsAdmin::Tran::Rollback function while the transaction mode call was in progress, the client was able to successfully commit a transaction that should have been aborted.
Solution: The problem has been corrected by adding and improving error checking after the various TRAN communications service calls.
Description: If a PPC Gateway server experienced apparently transient errors, for example while trying to establish a TCP connection, it retried the connection up to four times at two-second intervals. However, some users needed to fine-tune the retry settings.
Solution: To accommodate this need, two new environment variables have been added:
Description: The cell manager limited the number of concurrent tasks to three. (A task is queued for execution for each transactional update, such as start or stop requests and repository updates.) Having a fixed limit of three tasks sometimes caused a problem when long-running tasks, such as start requests, were executed. For example, a start request could be executed continuously because the server was exiting during initialization (because of an unavailable resource) and the number of restart attempts was intentionally set very high. Once three such starts were in progress, no other tasks could be executed by the cell manager.
Solution: The cell manager now uses a new environment variable, ENCINA_MON_TASK_TPOOL_SIZE, to determine the preferred size of the pool of threads used to execute tasks. Additionally, the thread pool is allowed to grow to four times the preferred size. Therefore, even though the default value is still 3, a maximum of 12 tasks can be executed concurrently. If additional capacity is desired, you can set the environment variable to the preferred number of threads.
Description: Users desired more control over the values used for refreshing the handle cache maintained by the PPC scheduling code. This includes control over:
Solution: You can now adjust these values by using the following environment variables:
Description: The sfs_DeleteRange function acquired a lock on each record that it processed. Even if the record was then determined to be out of range, the lock was not released. This sometimes resulted in a deadlock in special situations within CICS.
Solution: When it is detected that the current key is out of range and the OFD is using the TranCursorStability descriptor, the extra key lock is now dropped.
Description: When importing one Data Definition Language (DDL) file from another DDL file, the generated #include directive was missing the .H extension. For example, it was written as #include foo rather than #include foo.H.
Solution: The proper .H extension is now generated by DDL.
Description: If Tran-C calls were made without first properly initializing Tran-C, the following fatal error message was displayed:
08353426 F Encina Internal Error - - Call your Support Representative: pdg/2.5/source/src/client/bde/dce/bde_thread.c: 1753: System call failure: pthread_getspecific, errno 22 (Invalid argument) [0x0]
Solution: A more appropriate and helpful error message is now displayed:
2c340816 F Transactional-C has not been properly initialized
Description: The OTS aborted transactions if an exception was used on a request to a transactional object, even if it was a user exception.
Solution: A new environment variable, ENCINA_OTS_NO_ABORT_ON_USER_EXCEPTION, can now be set to allow user exceptions. The default value is FALSE.
Description: A workaround for a DCE defect resulted in an extra timeout when Encina tried to determine if a server was running.
Solution: The DCE defect has been corrected, and Encina now quickly determines if a server is running.
Description: If a node manager's handle to the cell manager became invalid for any reason other than a failure of the cell manager, or if the node manager was not notified when the cell manager had restarted, the node manager was unable to send pings. As a result, the cell manager sometimes erroneously reported that a node manager was not operational, due to the way the liveness monitoring package (lmp) dealt with invalid handles.
Solution: The lmp now resets the binding when the ping RPC fails.
Patch 1 consisted of the GA distribution of TXSeries Encina 4.2.