FixPak 9
© Copyright International Business Machines Corporation 1999. All rights reserved.
Note to U.S. Government Users -- Documentation related to restricted rights -- Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule contract with IBM Corp.
4.0 Building Applications for Windows and OS/2 Environments
5.0 Building Applications for UNIX Environments
8.0 Embedded SQL Programming Guide
9.0 Installing and Configuring DB2 Clients
12.0 Quick Beginnings for File Manager
13.0 Quick Beginnings Changes Documented Before Version 5.2
14.0 Replication Guide and Reference
15.0 Road Map to DB2 Programming
17.0 System Monitor Guide and Reference
19.0 HTML Search Server for OS/2: Searching HTML Documentation
20.0 HTML Search Server for Windows NT, Windows 95, and Windows 98: Searching HTML Documentation
Note: These Release Notes may include DB2 information about issues discovered after the shipped README file was published.
This file contains information for the release of FixPak 9 that was not available when the DB2 manuals were issued. It consists of the README file issued with the release of DB2 Version 5.2 as revised with this new information.
This file includes information on the following products:
IBM DB2 Universal Database Enterprise Edition, Version 5.2
IBM DB2 Universal Database Enterprise - Extended Edition, Version 5.2
IBM DB2 Universal Database Personal Edition, Version 5.2
IBM DB2 Universal Database Workgroup Edition, Version 5.2
IBM DB2 Personal Developer's Edition, Version 5.2
IBM DB2 Universal Developer's Edition, Version 5.2
Note: VisualAge for Basic and Lotus Approach are no longer bundled in the product boxes.
A separate Release Notes file, installed as READDCS.TXT, is provided for the following products:
IBM DB2 Connect Enterprise Edition, Version 5.2
IBM DB2 Connect Personal Edition, Version 5.2
Some books in the DB2 Universal Database library have not been updated for Version 5.2. These include:
The What's New book contains both an overview of the major DB2 enhancements for Version 5 and Version 5.2, and a detailed description of the Version 5.2 enhancements, especially for the books that were not updated for Version 5.2. New features that were delivered as part of DB2 Universal Database Version 5 and DB2 Connect Version 5 FixPaks are considered Version 5.2 enhancements.
To get full DB2 information on a topic, use the appropriate book in the DB2 library in conjunction with the What's New book and the information in this README file.
This section describes the main changes for FixPak 9 and points you to where these changes are documented.
Change Password Enablement for EEE
DB2 Administrators may now allow non-administrators to change passwords on AIX and Windows NT EEE systems through the profile registry variable DB2CHGPWD_EEE=<boolean>. For full details, see section 2.2.2 , "Change Password Enablement for EEE".
DRDA-only Client Authentication
The DB2 UDB server can now distinguish DRDA clients from DB2 for MVS and OS/390, DB2 for VM and VSE, and DB2 for OS/400 from all other clients. These DRDA clients can be trusted to perform client-side authentication, which allows you to protect against all other clients, and you can also determine where these trusted clients are authenticated. For full details, see section 2.2.1 , "DRDA-only Client Authentication".
Encrypted Passwords
A new database manager configuration value, SERVER_ENCRYPT, has been added for the AUTHENTICATION parameter. SERVER_ENCRYPT has the same semantics as SERVER authentication, with the added feature that any passwords sent over a network are encrypted. For full details, see section 2.2.3 , "Encrypted Passwords".
JDBC 2.0
The JDBC 2.0 driver shipped with this FixPak supports the JDBC 2.0 core API. However, due to the unavailability of certain features of the DB2 Engine, not all features defined in the specification are supported. For full details, see section 8.7 , "JDBC 2.0".
NetQuestion Workarounds
This section addresses:
For full details, see section 21.0 , "NetQuestion Workarounds".
Summary Table Support
Enhancements have been made to summary table support. The REFRESH IMMEDIATE option is now available. For full details, see section 16.2 , "ALTER TABLE", and section 16.5 , "CREATE TABLE".
Unicode/UCS-2 and UTF-8 Support
The Unicode character encoding standard is a fixed-length character encoding scheme that includes characters from almost all the living languages of the world. The ISO/IEC 10646 standard specifies the Universal Multiple-Octet Coded Character Set (UCS), which has a 2-byte version (UCS-2) and a 4-byte version (UCS-4). DB2 UDB supports UCS-2, that is, Unicode without surrogates.
With UCS-2 or Unicode encoding, ASCII and control characters are also two bytes long, and the lead byte is zero. This could be a major problem for ASCII-based applications and ASCII file systems because in UCS-2 strings extraneous NULLs may appear anywhere in the string. A transformation algorithm, known as UTF-8, can be used to circumvent this problem for programs that rely on ASCII code being invariant. DB2 implements UTF-8.
For full details, see section 2.15 , "Unicode/UCS-2 and UTF-8 Support in DB2 UDB".
Using Virtual Interface (VI) Architecture
DB2 UDB now supports two implementations of the VI architecture: One from GigaNet and the other from ServerNet. For full details, see section 2.16.3 , "Running DB2 UDB V5.2 for Windows NT with GigaNet Interconnect".
DB2 UDB Version 5.2 can be purchased and installed separately, or it can be installed as a FixPak on top of DB2 UDB Version 5.0. You can upgrade Version 5.0 to Version 5.2 with FixPak 6. The PTF numbers for FixPak 6 are listed in the following table by operating system:
Table 1. PTF Numbers for DB2 UDB FixPak 6
Operating System                              PTF Number for DB2 UDB FixPak 6
-------------------------------------------   -------------------------------
AIX                                           U459852
HP-UX 10                                      U459854
HP-UX 11                                      U459872
OS/2                                          WR09073
SCO UnixWare 7                                U459870
Silicon Graphics IRIX                         U459871
Solaris                                       U459853
Windows 95, Windows 98, and Windows NT        WR09074
Windows 3.1 and Windows for Workgroups 3.11   WR09076
Note: Running the DB2 File Manager on AIX Version 4.2 requires a minimum of PTF U456886.
The following PTF numbers, from SCO, are required for DB2 UDB Version 5.2 to run on SCO UnixWare 7:
ptf7003c
ptf7010
ptf7013
To find the latest information about SCO PTFs required for DB2 UDB Version 5.2, see section 1.3 , "Product and Service Technical Library".
The following patch levels are required for running the Sun Cluster software:
105786-05 - Required patch for Sun Cluster 2.x on Solaris 2.6.
2.1_105458-06 - Required Sun Cluster patch level for DB2.
This FixPak cannot be installed on top of existing DB2 UDB Version 5.0 code. You must be at the DB2 UDB Version 5.2 level. To get to the Version 5.2 level, you can download and install FixPak 6. Once FixPak 6 is installed, you can then apply FixPak 9.
Table 2. PTF Numbers for DB2 UDB FixPak 9
Operating System                              PTF Number for DB2 UDB FixPak 9
-------------------------------------------   -------------------------------
AIX                                           U465091
HP-UX 10                                      U465093
HP-UX 11                                      U465094
OS/2                                          WR21112
SCO UnixWare 7                                U465095
Silicon Graphics IRIX                         U465096
Solaris                                       U465092
Windows 95, Windows 98, and Windows NT        WR21113
Windows 3.1 and Windows for Workgroups 3.11   WR21114
For complete, up-to-date DB2 information, including information on issues discovered after this README file was published, use the Product and Service Technical Library on the World Wide Web:
http://www.software.ibm.com/data/db2/library
In Version 5.2, most of the books that make up the product libraries have been updated. The DB2 Library books are available online in HTML format, and most are also available in PDF and PostScript formats. Refer to section 1.3 , "Product and Service Technical Library", above for where to access the books online. See the What's New book for the complete list of updated books, instructions for printing them, and the form numbers that you can use to order them. You can use form number SBOF-8921-00 to order the complete set of books that were updated for Version 5.2.
FixPak information that was previously provided in the files read1297.txt, read0398.txt, and read0698.txt has been incorporated into these Release Notes.
In addition, information that was not available when the books were printed is included in these Release Notes.
The Web Control Center is a Java version of the DB2 Universal Database Control Center (a database administration tool). Its design is modelled on the original DB2 Control Center, yet it provides a more flexible network-centric administration environment. The Web Control Center is implemented as a Java applet that uses DB2's JDBC support.
The FixPak will not install the Web Control Center but will upgrade an existing version. To obtain the Web Control Center if you do not have it, download it from the following URL:
http://www.software.ibm.com/data/db2/udb/webcc
For the latest information about the DB2 family of products, you may subscribe to the "DB2 Family Newsletter" (English only). Simply send your request in a FAX to 1-905-316-4733, and include the following information:
Name
Company name
Full mailing address
Telephone number
DB2 products you currently use
In general, a summary table or a replicated table is used for optimization of a query if the isolation level of the summary table or the replicated table is higher than or equal to the isolation level of the query. For example, if a query is running under the cursor stability (CS) isolation level, only summary tables and replicated tables that are defined under CS or higher isolation levels are used for optimization.
The REFRESH IMMEDIATE option is now supported for summary tables. For more information, please see section 16.2 , "ALTER TABLE", and section 16.5 , "CREATE TABLE".
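For example, a refresh-immediate summary table might be defined as shown below. This is a minimal sketch: the table and column names are illustrative only (they follow the SAMPLE database), and the exact restrictions on the fullselect are described under CREATE TABLE.

CREATE SUMMARY TABLE dept_summary AS
  (SELECT workdept, COUNT(*) AS emp_count, COUNT(salary) AS salary_count,
          SUM(salary) AS total_salary
     FROM employee
     GROUP BY workdept)
  DATA INITIALLY DEFERRED REFRESH IMMEDIATE

After creating the table, issue REFRESH TABLE dept_summary once to populate it; subsequent changes to the underlying table are then reflected in the summary table immediately.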
This section describes how to optimize performance when data is placed on Redundant Array of Independent Disks (RAID) devices. In general, you should do the following for each table space that uses a RAID device:
DB2_PARALLEL_IO
When reading data from, or writing data to table space containers, DB2 may use parallel I/O if the number of containers in the database is greater than 1. However, there are situations when it would be beneficial to have parallel I/O enabled for single container table spaces. For example, if the container is created on a single RAID device that is composed of more than one physical disk, you may want to issue parallel read and write calls.
To force parallel I/O for a table space that has a single container, you can use the DB2_PARALLEL_IO registry variable. This variable can be set to "*" (asterisk), meaning every table space, or it can be set to a list of table space IDs separated by commas. For example:
db2set DB2_PARALLEL_IO=*          {turn parallel I/O on for all table spaces}
db2set DB2_PARALLEL_IO=1,2,4,8    {turn parallel I/O on for table spaces 1, 2, 4, and 8}
After setting the registry variable, DB2 must be stopped (db2stop), and then restarted (db2start), for the changes to take effect.
DB2_STRIPED_CONTAINERS
Currently when creating a DMS table space container (device or file), a one-page tag is stored at the beginning of the container. The remaining pages are available for data storage by DB2, and are grouped into extent-sized blocks.
When using RAID devices for table space containers, it is suggested that the table space be created with an extent size that is equal to, or a multiple of, the RAID stripe size. However, because of the one-page container tag, the extents will not line up with the RAID stripes, and it may be necessary during an I/O request to access more physical disks than would be optimal.
DMS table space containers can now be created in such a way that the tag exists in its own (full) extent. This avoids the problem described above, but it requires an extra extent of overhead within the container. To create containers in this fashion, you must set the DB2 registry variable DB2_STRIPED_CONTAINERS to "ON", and then stop and restart your instance:
db2set DB2_STRIPED_CONTAINERS=ON
db2stop
db2start
Any DMS container that is subsequently created (with CREATE TABLESPACE or ALTER TABLESPACE) will have its tag taking up a full extent. Existing containers will remain unchanged.
To stop creating containers with this attribute, reset the variable, and then stop and restart your instance:
db2set DB2_STRIPED_CONTAINERS=
db2stop
db2start
The Control Center and the LIST TABLESPACE CONTAINERS command will not show whether a container has been created as striped or not. They will continue to use "file" or "device", depending on how the container was created. To verify that a container was created as striped, you can use the /DTSF option of DB2DART to dump table space and container information, and look at the type field for the container in question. Also, the query container APIs, sqlbftcq( ) and sqlbtcq( ), can be used to create a simple application that will display the type.
Definitions for these new types have been added to the sqlutil.h header file:
#define SQLB_CONT_STRIPED_DISK   5    /* DMS: Striped disk */
#define SQLB_CONT_STRIPED_FILE   6    /* DMS: Striped file */
In the section "Selecting an Authentication Method for Your Server", two new paragraphs are added:
To protect against all clients except DRDA clients from DB2 for MVS and OS/390, DB2 for VM and VSE, and DB2 for OS/400, set the trust_allclnts parameter to DRDAONLY. Only these clients can be trusted to perform client-side authentication. All other clients must provide a user ID and password to be authenticated by the server.
The trust_clntauth parameter is used to determine where the above clients are authenticated: if trust_clntauth is "client", authentication takes place at the client. If trust_clntauth is "server", authentication takes place at the client when no password is provided and at the server when a password is provided.
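For example, to accept client-side authentication only from these DRDA clients and to authenticate them at the server, the database manager configuration could be updated as follows. The values shown are illustrative, and the instance must be restarted for the change to take effect:

db2 update dbm cfg using trust_allclnts DRDAONLY trust_clntauth SERVER
db2stop
db2start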
DB2 Administrators may now allow others to change passwords on AIX and Windows NT EEE systems through the profile registry variable DB2CHGPWD_EEE=<boolean>.
Note: This is not supported on Sun EEE systems.
The default for this variable is NOT SET (disabled). Other values for DB2CHGPWD_EEE are the standard boolean values used by other DB2 profile variables.
The DB2 Administrator is responsible for ensuring that the passwords for all nodes are maintained centrally using either a Windows NT Domain Controller on Windows NT, or NIS on AIX.
Note: If the passwords are not maintained centrally, enabling the DB2CHGPWD_EEE variable will allow for the possibility that passwords may not be consistent across all nodes. That is, if a user uses the "change password" feature, then the user's password will only be changed at the node to which they connect.
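As an illustration, the feature could be enabled as follows (a sketch only; as with other DB2 registry variables, the instance is stopped and restarted so that the change is picked up):

db2set DB2CHGPWD_EEE=YES
db2stop
db2start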
A new DBM CFG (database manager configuration) value SERVER_ENCRYPT has been added for the parameter AUTHENTICATION. SERVER_ENCRYPT now has the same rules as SERVER authentication, with the added feature that any passwords sent over a network will be encrypted. Also, a new value DCS_ENCRYPT has been added for DB2 Connect support. This value is documented in the file READDCS.TXT, available with the following products:
IBM DB2 Connect Enterprise Edition, Version 5.2
IBM DB2 Connect Personal Edition, Version 5.2
The same parameter has been added to the CATALOG DATABASE authentication clause as an option.
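For example, a client could catalog a remote database so that passwords sent to that database are encrypted. The database alias and node name below are illustrative:

db2 catalog database sample as sampenc at node mynode authentication server_encrypt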
Restrictions
DB2 Universal Database on AIX has added the functionality to log failed password attempts with the operating system and detect when a client has exceeded the number of allowable login tries as specified by the loginretries parameter.
There is no audit support for summary tables, typed tables, or replicated tables.
If you want to load or import an IXF format file containing DBCS data, you must have the corresponding conversion files (located in sqllib\conv) installed on your client machine. The names of these conversion files contain both the source and the target code page numbers; the extension is always .cnv. For example, file 09320943.cnv contains the conversion table for converting code page 932 to 943.
If your client machine does not have the appropriate conversion files, you can copy them from a server machine to the sqllib\conv directory of the client machine. Be sure to copy the files from a compatible platform; for example, if the client is running on a UNIX based operating system, copy the files from a server that is also running on a UNIX based operating system.
In the section "Overview of the LOAD Process", a revision to the paragraph discussing pending states is made:
After the LOAD process completes, if:
then any associated table spaces are placed in a backup pending state.
In the section "LOAD Exception Table", two new paragraphs are added:
LOAD, when using the FOR EXCEPTION option, will check for rows that have unique index violations. LOAD will not check for constraint or foreign key violations other than for unique constraint violations.
A "unique key" is a key that is constrained so that no two of its values are equal. The mechanism used to enforce the constraint is called a unique index. A primary key is a special case of a unique key. A table cannot have more than one primary key.
In DB2 UDB Version 5.2 for Windows NT and DB2 UDB Version 5.2 for OS/2, users operating in an SMP environment might occasionally experience a system hang during a load operation. This is a known problem that occurs only during index creation (the BUILD phase of the load utility) involving large volumes of data (on the order of several gigabytes). To avoid this problem, it is recommended that existing indexes on a loading table be dropped before issuing the LOAD command, and recreated using the CREATE INDEX statement after the load operation has completed successfully. An alternative that does not require the dropping of indexes is to perform the load operation into a staging table that does not have any indexes defined on it. The target table can subsequently be updated by issuing an INSERT statement encompassing all of the data from the staging table.
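A minimal sketch of the staging-table alternative follows; the table and file names are illustrative only:

db2 "CREATE TABLE stage_tab LIKE target_tab"              {staging table with no indexes}
db2 "LOAD FROM data.del OF DEL INSERT INTO stage_tab"
db2 "INSERT INTO target_tab SELECT * FROM stage_tab"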
The following applies to an error scenario for the autoloader program on IBM DB2 Universal Database Enterprise - Extended Edition for Windows NT. It explains the symptom and nature of the problem, and also a workaround.
When running the autoloader program, db2atld.exe, on a multi-homed machine, that is, a machine with multiple network cards installed, ensure that the machine is configured correctly by trying the following operations:
On Windows NT machines, the IP address returned for a local host name is not retrieved from the DNS or hosts file, but from information configured locally in the Control Panel network icon.
There is a bug in Windows NT 4.0 where the binding order configured in the Control Panel network icon does not influence the IP address order returned on a multi-homed machine. See Microsoft Support Online article ID Q171320 to resolve this problem. (When you ping a host name, it is the first IP address that is returned.)
The concept of a local database connection has been extended for MPP environments to include connections from any node of a given MPP instance. This means that even though the instance is configured using AUTHENTICATION SERVER, a password is no longer required if a connection is attempted from one of the nodes defined in the db2nodes.cfg file. The autoloader makes use of this new connection behavior when the AUTHENTICATION flag in the autoloader.cfg file is not set or is set to NO, and the password for autoloader is not defined.
With this new database connection behavior, the password is only mandatory for autoloader execution if a password is required for remote execution of programs in your system. For example, if the .rhosts file on a UNIX system was not set up properly to enable rsh execution, then the password is required.
When exporting BIGINT or DECIMAL columns to WSF (Work Sheet Format), only values that fall in the range of type DOUBLE can be exported accurately. Values that do not fall in this range will still be exported to the target media. However, loading back these values (via LOAD or IMPORT) may result in incorrect data. The end result will vary by platform.
You cannot back up a database or table space on one operating system and restore it to another operating system by using the BACKUP DATABASE and RESTORE DATABASE commands. Instead, you move tables from one operating system to another by using the db2move command (or by exporting, then importing or loading the data). For more information, see "Moving Data Using the db2move Tool" in the Administration Guide.
The current ADSM client on the Windows operating system and OS/2 is non-reentrant, and so multiple sessions cannot be created with the backup, restore, or load utilities from a single machine.
In a single node configuration, if a user attempts to issue a backup command such as:
db2 backup db sample use adsm open 3 sessions
DB2 will detect that multiple sessions are not supported by ADSM, and will return SQL2032N. The equivalent scenario also applies to load copies using ADSM.
However, in an MLN configuration on Windows NT, DB2 may not be able to detect the use of multiple sessions on a single machine if each logical node attempts to create only one session. If multiple logical nodes are being backed up, restored, or loaded in parallel using ADSM, DB2 will allow the operation to proceed if each node attempts to use a single session, even though the logical nodes actually reside on the same physical hardware. This can lead to failed backup attempts, and hung load processes, and should not be attempted.
The following information should be added:
The total log file size limit is 4 GB. That is, the number of log files (LOGPRIMARY + LOGSECOND) multiplied by the size of each log file in bytes (LOGFILSIZ * 4096) must be less than 4 GB.
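For example, with LOGFILSIZ set to 4000 (4 KB pages), LOGPRIMARY set to 20, and LOGSECOND set to 20:

(20 + 20) * 4000 * 4096 = 655,360,000 bytes

which is well within the 4 GB limit.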
In the section on "Frequency of Backups and Time Required", the following information should be added:
If the amount of time needed to apply archived logs when recovering and rolling a database forward is a major concern, consider the cost of taking more frequent backups. If your database is very active, more logging occurs, and therefore more frequent database backups are recommended. More frequent database backups reduce the number of archived logs you need to apply when rolling forward.
You cannot roll forward a partitioned database from a Version 2 client.
Within the sub-section 'Using the DB2 Universal Database XA Switch' under the 'XA Function Supported' section in this chapter, the last line in "OS/2 Platform" should be changed to:
With either method, you must link your application with db2api.lib.
The last line in the "Windows NT Platform" section should be changed to:
With any of these methods, ensure that you link with db2api.lib.
References to "db2app.lib" should be changed to "db2api.lib".
In the section on "Locking", just before the "Attributes of Locks" section, the following paragraph should be added:
You may want to consider defining a monitor that will record when deadlocks occur. Use the CREATE EVENT MONITOR statement described in the SQL Reference to create the monitor.
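For example, a deadlock event monitor could be created and started as follows. The monitor name and output path are illustrative, and the target directory must already exist:

db2 "CREATE EVENT MONITOR deadlock_mon FOR DEADLOCKS WRITE TO FILE '/tmp/dlmon'"
db2 "SET EVENT MONITOR deadlock_mon STATE 1"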
In the section on "Using a select-statement", the OPTIMIZE FOR and FETCH FIRST example should be modified to show the following:
SELECT EMPNAME, SALARY FROM EMPLOYEE
  ORDER BY SALARY DESC
  FETCH FIRST 100 ROWS ONLY
  OPTIMIZE FOR 20 ROWS
The change is simply the reversal of the order of the last two lines of the example.
In the section on "Hash Join", the following information should be added:
To realize the performance benefits of hash join, it may be necessary to change the value of the sortheap database configuration parameter, and the sheapthres database manager configuration parameter.
For decision support queries, hash join access plans use more sort heap space than do non-hash join plans. When sheapthres is set to be very close to sortheap (that is, less than a factor of two or three per concurrent query), a hash join runs with much less memory than the optimizer anticipated. When executing with limited memory, hash joins can be very slow. The problem occurs in queries with multiple sorts and hash joins, in which the first sorts or hash joins acquire most of the available memory.
The solution is to configure sheapthres to be large enough (relative to sortheap).
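For example, the two parameters might be adjusted as follows. The values and the database name are illustrative only; appropriate settings depend on your workload and available memory:

db2 update db cfg for sample using SORTHEAP 2048
db2 update dbm cfg using SHEAPTHRES 40000

Here SHEAPTHRES is roughly twenty times SORTHEAP, leaving room for several concurrent queries that each use multiple sorts or hash joins.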
The following information should be added:
The storage associated with the buffer pool is allocated when a database is activated or when the first application connects to the database. Applications are the primary beneficiaries of the buffer pool; once applications are all disconnected, the storage associated with the buffer pool is de-allocated.
When working on your database design, you may have determined that tables with 8KB page sizes were best. As a result, you should create a buffer pool with an 8KB page size (along with one or more table spaces with the same page size).
If you have chosen to place your tables in an 8KB table space, it is recommended that your temporary table spaces have an 8KB page size as well, and they should be assigned to the same 8KB page size buffer pool. The default buffer pool (IBMDEFAULTBP) or any other 4KB buffer pool to which any 4KB temporary table spaces are assigned should be kept small, since the database manager will favor the 8KB temporary table spaces, and therefore the 8KB buffer pool.
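A minimal sketch of such a configuration follows. The buffer pool, table space, and container names, and the sizes, are illustrative; a newly created buffer pool becomes available after all applications disconnect and the database is reactivated:

db2 "CREATE BUFFERPOOL bp8k SIZE 1000 PAGESIZE 8 K"
db2 "CREATE TABLESPACE ts8k PAGESIZE 8 K MANAGED BY SYSTEM USING ('/db2/ts8k') BUFFERPOOL bp8k"
db2 "CREATE TEMPORARY TABLESPACE tmp8k PAGESIZE 8 K MANAGED BY SYSTEM USING ('/db2/tmp8k') BUFFERPOOL bp8k"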
In the section on "The Governor Deamon" add the following information:
The note concerned with using the governor as an alternate means to adjust agent priorities should have the following information added: "(This note does not apply to OS/2 or Windows NT platforms.)"
In the section on the "Governor Configuration File", replace the first and second paragraphs of the information under "action/priority" with the following:
Specifies a change to the priority of agents working for the application. Valid values are from -20 to +20.
The AUTHENTICATION type (authentication) parameter is updated as follows:
A new value, SERVER_ENCRYPT, has been added to this parameter. This value has the same rules as SERVER authentication, with the added feature that any passwords sent over a network are encrypted. For restrictions on the use of this value, see section 2.2.3 , "Encrypted Passwords".
A new value, DCS_ENCRYPT, has been added to this parameter. This value has been added for support of DB2 Connect. Details on this new value are documented in the READDCS.TXT file for the following two products:
To realize the performance benefits of hash join, it may be necessary to change the value of the SORTHEAP database configuration parameter, and the SHEAPTHRES database manager configuration parameter.
Hash join plans use more sort heap space than other join plans. When SHEAPTHRES is set to be very close to SORTHEAP (less than a factor of 2 or 3 per concurrent query), hash join runs with much less memory than the optimizer anticipated. When executing with limited memory, hash join can be very slow. The problem occurs in queries with multiple sorts and hash joins, in which the first sorts or hash joins acquire most of the available memory.
The solution is to configure SHEAPTHRES to be large enough (relative to SORTHEAP).
For more information about hash join, including how to determine whether a hash join strategy has been selected, see the description of db2expln in the SQL Reference, and the What's New book (Appendix A - System Monitor Guide and Reference Updates). See also the description of the new hash join registry variable (DB2_HASH_JOIN) below.
The "Size of Log Files (logfilsz)" parameter is updated as follows:
The total log file size limit is 4 GB. That is, the number of logfiles (LOGPRIMARY + LOGSECOND) multiplied by the size of each logfile in bytes (LOGFILSIZ * 4096) must be less than 4 GB.
In the section of the chapter dealing with "Instance Administration", the "Trust All Clients (trust_allclnts)" parameter is updated as follows:
The range of values for the parameter now includes DRDAONLY, in addition to NO and YES.
The description for the parameter now includes the following two paragraphs:
Setting the trust_allclnts parameter to DRDAONLY protects against all clients except DRDA clients from DB2 for MVS and OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only these clients can be trusted to perform client-side authentication. All other clients must provide a user ID and password to be authenticated by the server.
The trust_clntauth parameter is used to determine where the above clients are authenticated: if trust_clntauth is "client", authentication takes place at the client. If trust_clntauth is "server", authentication takes place at the client when no password is provided and at the server when a password is provided.
DB2 Universal Database on AIX has added the functionality to log failed password attempts with the operating system and detect when a client has exceeded the number of allowable login tries as specified by the loginretries parameter.
DB2 can fail while fetching query results from large tables (that is, tables with a record length greater than 4005 characters and more than 500 columns). This can be overcome by increasing the value of the ASLHEAPSZ database manager configuration parameter from its default value of 15.
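For example, the parameter could be raised as follows. The value 60 is illustrative only, and the instance must be restarted for the change to take effect:

db2 update dbm cfg using ASLHEAPSZ 60
db2stop
db2start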
On Windows NT, on an SMP machine, the INTRA_PARALLEL database manager configuration parameter will be set to YES when an instance is created with DB2 Version 5.2. With Version 5.0, it would have been set to NO.
The section "Cluster Configuration", has an example showing how to create containers for a two-node system that is updated as follows:
CREATE TABLESPACE TS3 MANAGED BY SYSTEM USING ('/TS3/cont $N%2', '/TS3/cont $N%2+2')
The following containers would be used:
/TS3/cont0 - on Node 0
/TS3/cont2 - on Node 0
/TS3/cont1 - on Node 1
/TS3/cont3 - on Node 1
The change was made to be consistent with the 'TS3' subdirectory.
DB2CHGPWD_EEE
   Operating System: AIX and Windows NT (not supported on Sun EEE)
   Values:           Default=NULL; YES or NO
   Description:      DB2 Administrators can allow non-administrators to change
                     passwords. The administrator is responsible for ensuring
                     that the passwords for all database partitions (nodes) are
                     maintained centrally using either a Windows NT Domain
                     Controller on Windows NT, or NIS on AIX. If not maintained
                     centrally, passwords may not be consistent across all
                     database partitions. This could result in a password only
                     being changed at the database partition to which the user
                     connects to make the change.
DB2_GRP_LOOKUP
   Operating System: Windows NT
   Values:           Default=null; local or domain
   Description:      Specifies which Windows NT security mechanism will be used
                     to enumerate the groups that a user belongs to.
Note: For FixPak 7, the DB2_GRP_LOOKUP registry variable was enhanced to support the keyword domain, in addition to the previously supported keyword local.
db2set DB2_GRP_LOOKUP=domain should be set under the following conditions:
This parameter applies to both client and server configurations. Setting this parameter tells DB2 to use a domain controller in the current domain to locate a controller for the accounts domain.
Problem being solved: in order to enumerate groups (and to find out whether you are an administrator), DB2 uses an NT API to find a domain controller for the domain in which the account is defined. It uses an API that tries to find the Primary Domain Controller (PDC) and, failing that, a backup domain controller. If the machine that is running this API is also a domain controller, this will always work. If your machine is not a domain controller, this methodology fails when the PDC is down. When DB2_GRP_LOOKUP=domain, DB2 finds a domain controller in YOUR domain with which to run the API to determine the domain controller for the accounts domain. This will not fail when the PDC is down.
DB2_HASH_JOIN
   Operating System: All
   Values:           Default=NO; YES or NO
   Description:      Specifies hash join as a possible join method when
                     compiling an access plan.
DB2_NEW_CORR_SQ_FF
   Operating System: All
   Values:           Default=OFF; ON or OFF
   Description:      When set to ON, affects the selectivity value computed by
                     the SQL optimizer for certain subquery predicates. It can
                     be used to improve the accuracy of the selectivity value of
                     equality subquery predicates that use the MIN or MAX
                     aggregate function in the select list of the subquery.
                     For example:
                        SELECT * FROM T
                          WHERE T.COL = (SELECT MIN(T.COL) FROM T WHERE ...)
DB2_BINSORT
   Operating System: AIX
   Values:           Default=NO; YES or NO
   Description:      Enables a new sort algorithm that reduces the CPU time and
                     elapsed time of sorts. This new algorithm extends the
                     extremely efficient integer sorting technique of DB2 V5.0
                     to all sort data types, such as BIGINT, CHAR, VARCHAR,
                     FLOAT, and DECIMAL, as well as combinations of these data
                     types. To enable this new algorithm, use the following
                     command:
                        db2set DB2_BINSORT=yes
DB2MEMDISCLAIM
   Operating System: AIX
   Values:           Default=(not set); YES or NO
   Description:      Depending on the workload being executed and the pool
                     agents configuration, you may run into a situation where
                     the committed memory for each DB2 agent stays above 32 MB
                     even when the agent is idle. This behavior is expected and
                     usually results in good performance, as the memory is kept
                     for fast reuse. However, on a memory-constrained system
                     this may not be a desirable side effect. To avoid this
                     condition, issue the following:
                        db2set DB2MEMDISCLAIM=yes
                     Disclaiming memory tells the AIX operating system to stop
                     paging the area so that it no longer occupies any real
                     storage. Setting DB2MEMDISCLAIM to YES tells DB2 UDB to
                     disclaim some or all memory once freed, depending on
                     DB2MEMMAXFREE. This ensures that the memory is made readily
                     available for other processes as soon as it is freed. See
                     also DB2MEMMAXFREE.
DB2MEMMAXFREE
   Operating System: AIX
   Values:           Default=(not set); 4000000 to 256000000
   Description:      Specifies the amount of free memory that is retained by
                     each DB2 agent. You may set this variable to a value
                     between 4 MB and 256 MB. If you use this feature, we
                     recommend that you specify a value of 8 MB:
                        db2set DB2MEMMAXFREE=8000000
                     See also DB2MEMDISCLAIM.
There is a new registry variable, DB2BPVARS, for tuning buffer pools. This is how you use it:
db2set DB2BPVARS=<filename>
For example, the file f:\BPVARSFILE has the following content:
# turn on scatter read for raw DMS devices
NT_SCATTER_DMSDEVICE=1
You would do the following using this file:
db2set DB2BPVARS=f:\BPVARSFILE
The currently supported parameter is:
NT_SCATTER_DMSDEVICE
   Operating System: Windows NT
   Values:           Default=0 (OFF); 0 (OFF) or 1 (ON)
   Description:      Can be used to turn scatter read on for DMS device
                     containers. This can only be enabled (turned ON) if
                     DB2NTNOCACHE is set to ON in the registry. A warning
                     message will be written to the db2diag.log if DB2NTNOCACHE
                     is OFF (or not set), and scatter read will remain disabled.
                     It is recommended for systems with a large amount of
                     sequential prefetching against RAW containers.
The following is to be added to the rah information in Appendix Q.
To enhance performance, rah has been extended to use tree_logic on large systems. That is, rah will check how many nodes the list contains, and if that number exceeds a threshold value, it constructs a subset of the list and sends a recursive invocation of itself to those nodes. At those nodes, the recursively invoked rah follows the same logic until the list is small enough to follow the standard logic (now the "leaf-of-tree" logic) of sending the command to all nodes on the list. The threshold can be specified by environment variable RAHTREETHRESH, or defaults to 15.
In the case of a multiple-logical-node-per-physical-node system, db2_all will favor sending the recursive invocation to distinct physical nodes, which will then rsh to other logical nodes on the same physical node, thus also reducing inter-physical-node traffic. (This point applies only to db2_all, not rah, since rah always sends only to distinct physical nodes.)
This version of rah has nearly identical syntax and semantics as the old version (supplied in the product as rah.sh_old), except for some minor restrictions on what options can be used:
Note: The () and ## substitutions should work identically to the old rah/db2_all.
export RAHTREETHRESH=nn
where nn can be any positive integer. If the variable is not set, the threshold defaults to 15.
These two standards, Unicode and ISO/IEC 10646, are documented here.
The Unicode character encoding standard is a fixed-length character encoding scheme that includes characters from almost all the living languages of the world. Unicode characters are usually shown as "U+xxxx", where xxxx is the hexadecimal code of the character.
Each character is 16 bits (2 bytes) wide, regardless of the language. While the resulting 65 536 code elements are sufficient for encoding most of the characters of the major languages of the world, the Unicode standard also provides an extension mechanism that allows for encoding as many as a million more characters. This extension reserves a range of code values (U+D800 to U+DFFF, known as "surrogates") for encoding some 32-bit characters as two successive code elements.
The International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) 10646 standard (ISO/IEC 10646) specifies the Universal Multiple-Octet Coded Character Set (UCS), which has a 2-byte version (UCS-2) and a 4-byte version (UCS-4). The 2-byte version of this ISO standard, UCS-2, is identical to Unicode without surrogates. ISO 10646 also defines an extension technique for encoding some UCS-4 codes in a UCS-2 encoded string. This extension, called UTF-16, is identical to Unicode with surrogates.
DB2 UDB supports UCS-2, that is, Unicode without surrogates.
With UCS-2 or Unicode encoding, ASCII and control characters are also two bytes long, and the lead byte is zero. For example, NULL is U+0000 and CAPITAL LETTER A is represented by U+0041. This could be a major problem for ASCII-based applications and ASCII file systems, because in UCS-2 strings extraneous NULLs may appear anywhere in the string. A transformation algorithm, known as UTF-8, can be used to circumvent this problem for programs that rely on ASCII code being invariant.
UTF-8 (UCS Transformation Format 8) is an algorithmic transformation that converts fixed-length UCS-2 and UCS-4 characters into variable-length byte strings. In UTF-8, ASCII characters are represented by their usual single-byte codes, but non-ASCII characters in UCS-2 become two or three bytes long. In other words, UTF-8 transforms UCS-2 characters to a multi-byte codeset, for which ASCII is invariant. The number of bytes for each UCS-2 character in UTF-8 format can be determined from the following table:
UCS-2 (hex)    UTF-8 (binary)                Description
------------   ---------------------------   ----------------
0000 to 007F   0xxxxxxx                      ASCII
0080 to 07FF   110xxxxx 10xxxxxx             up to U+07FF
0800 to FFFF   1110xxxx 10xxxxxx 10xxxxxx    other UCS-2

NOTE: The range D800 to DFFF is to be excluded from treatment by the third row of this table, which corresponds to the UCS-4 range 0000 0800 to 0000 FFFF.
In all the above, a series of x's indicates the UCS bit representation of the character. For example, U+0080 transforms into 11000010 10000000.
Within IBM, the UCS-2 code page has been registered as code page 1200. All code pages are defined with growing character sets, that is, when new characters are added to a code page, the code page number does not change. Code page 1200 always refers to the current version of Unicode/UCS-2, and has been used for UCS-2 support in DB2 UDB.
A specific repertoire of the UCS standard, as defined by Unicode 2.0 and ISO/IEC 10646-1, has also been registered within IBM as CCSID 13488. This CCSID (13488) has been used internally by DB2 UDB for storing graphic string data in euc-Japan and euc-Taiwan databases. CCSID 13488 and code page 1200 both refer to UCS-2, and are handled the same way except for the value of their "double-byte" (DBCS) space:
CP/CCSID    Single Byte (SBCS) space    Double Byte (DBCS) space
---------   ------------------------    ------------------------
1200        N/A                         U+0020
13488       N/A                         U+3000

NOTE: In a UCS-2 database, U+3000 has no special meaning.
Regarding the conversion tables, since code page 1200 is a superset of CCSID 13488, the exact same (superset) tables are used for both.
In IBM, UTF-8 has been registered as CCSID 1208 with growing character set (sometimes also referred to as code page 1208). As new characters are added to the standard, this number (1208) will not change either. 1208 is used as the multi-byte code page number for DB2's UCS-2/UTF-8 support.
DB2 UDB supports UCS-2 as a new multi-byte code page. The MBCS code page number is 1208, which is the database code page and the code page of character string data within the database. The double-byte code page number (for UCS-2) is 1200, which is the code page of graphic string data within the database. When a database is created in UCS-2/UTF-8, CHAR, VARCHAR, LONG VARCHAR, and CLOB data are stored in UTF-8, and GRAPHIC, VARGRAPHIC, LONG VARGRAPHIC, and DBCLOB data are stored in UCS-2. We will refer to such a database simply as a UCS-2 database.
By default, databases are created in the code page of the application that creates them. Therefore, if you create your database from a UTF-8 client (for example, the UNIVERSAL locale of AIX), or if the DB2CODEPAGE environment variable on the client is set to 1208, your database will be created as a UCS-2 database. Alternatively, you can explicitly specify "UTF-8" as the CODESET name and use any valid two-letter TERRITORY code supported by DB2 UDB. Refer to the Administration Guide and the appendix on "NLS Support" for a complete list of territory codes supported by DB2 UDB.
For example, to create a UCS-2 database from the CLP, with the territory code for United States, issue:
DB2 CREATE DATABASE dbname USING CODESET UTF-8 TERRITORY US
The default collation sequence for a UCS-2 database is IDENTITY, which provides UCS-2 code point order. Therefore, by default, all UCS-2/UTF-8 characters are ordered and compared according to their UCS-2 code point sequence.
All cultural-sensitive parameters such as date/time format, decimal separator, and others, are based on the current territory of the client.
A UCS-2 database allows connection from every single-byte and multi-byte code page supported by DB2 UDB. Code page character conversions between the client's code page and UTF-8 are performed automatically by the database manager. Data in graphic string types is always in UCS-2 and does not go through code page conversions. The Command Line Processor (CLP) environment is an exception: if you SELECT graphic string (UCS-2) data from the CLP, the returned graphic string data is converted (by the CLP) from UCS-2 to the code page of your client environment.
Every client is limited by the character repertoire, the input methods, and the fonts supported by its environment, but the UCS-2 database itself accepts and stores all UCS-2 characters. Therefore, every client usually works with a subset of UCS-2 characters, but the database manager allows the entire repertoire of UCS-2 characters.
When characters are converted from a local code page to UTF-8, there is a possibility of expansion in the number of bytes. There is no expansion for ASCII characters, but other UCS-2 characters expand by a factor of two or three. The number of bytes of each UCS-2 character in UTF-8 format can be determined from the above table (section about UTF-8).
All data types supported by DB2 UDB are also supported in a UCS-2 database. In particular, graphic string data is supported for a UCS-2 database and is stored in UCS-2/Unicode. Every client, including SBCS clients, can work with graphic string data types in UCS-2/Unicode when connected to a UCS-2 database.
A UCS-2 database is like any MBCS database in that character string data is measured in number of bytes. When working with character string data in UTF-8, you should not assume that each character is one byte. In multi-byte UTF-8 encoding, each ASCII character is one byte, but non-ASCII characters take two or three bytes each. This should be taken into account when defining CHAR fields. Depending on the ratio of ASCII to non-ASCII characters, a CHAR field of size n bytes can contain anywhere from n/3 to n characters.
Using character string UTF-8 encoding versus graphic string UCS-2 data type also has an impact on the total storage requirements. For a situation where the majority of characters are ASCII, with some non-ASCII characters in between, storing UTF-8 data may be a better alternative because the storage requirements are closer to one byte per character. On the other hand, for situations where the majority of characters are non-ASCII characters that expand to three-byte UTF-8 sequences (for example, ideographic characters), the UCS-2 graphic-string format may be a better alternative because every UCS-2 character requires exactly two bytes, rather than three bytes for each corresponding character in UTF-8 format.
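As an illustration of this trade-off, the following two hypothetical table definitions reserve roughly the same amount of storage for the body column: the VARCHAR column holds up to 300 bytes of UTF-8 data (100 to 300 characters), while the VARGRAPHIC column always holds up to 150 UCS-2 characters at two bytes each:

CREATE TABLE doc_utf8 (id INTEGER NOT NULL, body VARCHAR(300))       {stored in UTF-8}
CREATE TABLE doc_ucs2 (id INTEGER NOT NULL, body VARGRAPHIC(150))    {stored in UCS-2}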
SQL scalar functions that operate on character strings, such as LENGTH, SUBSTR, POSSTR, MAX, MIN, and the like, operate on the number of bytes rather than the number of characters in MBCS environments. The behaviour is the same in a UCS-2 database. For example, the LENGTH of a character string field is its length in bytes, not in characters. For a description of the behaviour of these functions, refer to the DB2 UDB SQL Reference.
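For example, in a UCS-2 database the following statement returns 5 rather than 4, assuming the accented character in the literal converts to a two-byte UTF-8 sequence:

VALUES LENGTH('café')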
SQL CHAR data types are supported by C language's char data type in user programs. SQL GRAPHIC data types are supported by sqldbchar in user C programs. Note that, for a UCS-2 database, sqldbchar data is always in big-endian (high byte first) format. When an application program is connected to a UCS-2 database, character string data is converted between the application code page and UTF-8 by DB2 UDB, but graphic string data is always in UCS-2.
In a UCS-2 database, all identifiers are in multi-byte UTF-8. Therefore, it is possible to use any UCS-2 character in identifiers where the use of a character in the extended character set (for example, an accented character, or a multi-byte character) is allowed by DB2 UDB. Please refer to the appendix "Naming Rules" in the Administration Guide for details of which identifiers allow use of extended characters.
Clients can enter any character which is supported by their SBCS/MBCS environment, and all the characters in the identifiers will be converted to UTF-8 by the database manager. Two points need to be taken into account when specifying National Language characters in identifiers in a UCS-2 database:
UCS-2 literals can be specified in two ways:
When using the Command Line Processor (CLP), the first method is easier if the UCS-2 character exists in the local application code page (for example, for entering any code page 850 character from a terminal that is using code page 850). The second method should be used for characters which are outside the application code page repertoire (for example, for specifying Japanese characters from a terminal that is using code page 850).
Pattern matching is one area where the behaviour of existing MBCS databases is slightly different from the behaviour of a UCS-2 database.
For MBCS databases in DB2 UDB, the current behaviour is as follows: If the match-expression contains MBCS data, the pattern can include both SBCS and MBCS characters. The special characters in the pattern are interpreted as follows:
If the match-expression contains graphic string DBCS data, the expressions contain only DBCS characters. The special characters in the pattern are interpreted as follows:
In a UCS-2 database, there is really no distinction between "single-byte" and "double-byte" characters; every UCS-2 character occupies two bytes. Although the UTF-8 format is a "mixed-byte" encoding of UCS-2 characters, there is no real distinction between SBCS and MBCS characters in UTF-8. Every character is a UCS-2 character, irrespective of its number of bytes in UTF-8 format. When specifying a character string or a graphic string expression, an underscore refers to one UCS-2 character and a percent refers to a string of zero or more UCS-2 characters.
On the client side, the character string expressions are in the code page of the client, and will be converted to UTF-8 by the database manager. SBCS client code pages, do not have any DBCS percent or DBCS underscore, but every supported code page contains a single-byte percent (corresponding to U+0025) and a single-byte underscore (corresponding to U+005F). The interpretation of special characters for a UCS-2 database is as follows:
DBCS code pages, additionally support a DBCS percent sign (corresponding to U+FF05) and a DBCS underscore (corresponding to U+FF3F). These characters have no special meaning for a UCS-2 database.
For the optional "escape-expression" which specifies a character to be used to modify the special meaning of the underscore and percent characters, only ASCII characters, or characters that expand into a two-byte UTF-8 sequence, are supported. If you specify an ESCAPE character which expands to a three-byte UTF-8 value, you will get an error message (SQL0130N error, SQLSTATE 22019).
The DEL, ASC, and PC/IXF file formats are supported for a UCS-2 database as described in this section. The WSF format is not supported. Also, autoloader is not supported for a UCS-2 database at the present time.
When exporting from a UCS-2 database to an ASCII delimited (DEL) file, all character data is converted to the application code page. Both character string and GRAPHIC string data are converted to the same SBCS or MBCS code page of the client. This is the existing behaviour for the export of any database, and cannot be changed because the entire ASCII delimited file can have only one code page. Therefore, if you export to an ASCII delimited file, only those UCS-2 characters that exist in your application code page would be saved. Other characters are replaced with the default substitution character for the application code page. For UTF-8 clients (code page 1208) there is no data loss because all UCS-2 characters are supported by UTF-8 clients.
When importing from an ASCII file (DEL or ASC) to a UCS-2 database, character string data is converted from the application code page to UTF-8, and GRAPHIC string data is converted from the application code page to UCS-2. There is no data loss. If you want to import ASCII data that has been saved under a different code page, you should switch the data file code page before issuing the import command. One way to accomplish this, is to set DB2CODEPAGE to the code page of the ASCII data file.
The range of valid ASCII delimiters, for SBCS and MBCS clients is identical to what is currently supported by DB2 UDB for these clients. The range of valid delimiters for UTF-8 clients is 0x01 to 0x7F, with the usual restrictions. Refer to the "IMPORT/EXPORT/LOAD Utility File Formats" appendix in the Command Reference for a complete list of these restrictions.
When exporting from a UCS-2 database to a PC/IXF file, character string data is converted to the SBCS/MBCS code page of the client. GRAPHIC string data is not converted and is stored in UCS-2 (code page 1200). There is no data loss.
When importing from a PC/IXF file to a UCS-2 database, character string data is assumed to be in the SBCS/MBCS code page stored in the PC/IXF header, and GRAPHIC string data is assumed to be in the DBCS code page stored in the PC/IXF header. Character string data is converted by the IMPORT utility from the code page specified in the PC/IXF header to the code page of the client, and then from the client code page to UTF-8 (by the INSERT statement). GRAPHIC string data is converted by the IMPORT utility from the DBCS code page specified in the PC/IXF header directly to UCS-2 (code page 1200).
LOAD directly places the data into the database and by default, assumes data in ASC or DEL files is in the code page of the database. Therefore, by default no code page conversion takes place for ASCII files. When the code page of the data file has been explicitly specified (using the MODIFIED BY codepage=x command parameter), LOAD uses this information to convert from the specified code page x into the database code page before loading the data. For PC/IXF files, LOAD always converts from the code pages specified in the IXF header to the database code page (1208 for CHAR, 1200 for GRAPHIC).
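For example, to load a DEL file that was created in code page 850 into a table in a UCS-2 database, the code page can be stated explicitly; the file and table names here are illustrative:

db2 load from data.del of del modified by codepage=850 insert into mytable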
The code page of DBCLOB files (as specified using the MODIFIED BY lobsinfile command parameter) is always 1200 for UCS-2. The code page of the CLOB files is the same as the code page of the data files being imported, loaded or exported. For example, for load or import using PC/IXF format, the CLOB file is assumed to be in the code page specified by the PC/IXF header. If the DBCLOB file is in ASC or DEL format, for LOAD the CLOB data is assumed to be in the code page of the database (unless explicitly specified otherwise using the MODIFIED BY codepage=x command parameter), and for IMPORT it is assumed to be in the client application code page.
The NOCHECKLENGTHS option is always set to TRUE for a UCS-2 database, because any SBCS client (for which there is no corresponding DBCS code page) can connect to the database, and because character strings in UTF-8 format usually have different lengths than the corresponding strings in client code pages.
For an application connected to a UCS-2 database, graphic string data is always in UCS-2 (code page 1200). For applications connected to non-UCS-2 databases, graphic string data is in the application's DBCS code page, or is not allowed if the application code page is SBCS. For example, when a 932 client is connected to a Japanese non-UCS-2 database, the graphic string data is in code page 301; but for 932 client applications connected to a UCS-2 database, the graphic string data is in UCS-2.
DB2 UDB now supports two implementations of the VI architecture: One from GigaNet and the other from ServerNet. Each is presented in this section.
Virtual Interface (VI) Architecture is the inter-node communication protocol alternative to TCP/IP in a Windows NT massively parallel processing (MPP) environment. VI is a new communication architecture that was developed jointly by Intel, Microsoft, and Compaq to improve performance over a System Area Network (SAN). For more information on the architecture, visit:
http://www.viarch.org
Figure 1. Network Interface Card and Protocol
There are some similarities between the Public Interconnect, which uses Ethernet and TCP/IP as an example, and the Private Interconnect, which uses a Network Interface Card and a protocol. This is shown in Figure 1. The Network Interface Card and protocol used in this instance are a GigaNet Network Interface Card and the VI protocol.
VI Architecture has low latency and high bandwidth. In a communication-intensive environment, using VI Architecture improves the overall system throughput. The greater the number of nodes in the cluster, and the greater the amount of data transferred, the greater the benefit from using VI Architecture.
DB2 UDB supports VI Architecture implementations that comply with the Virtual Interface Architecture Specification, Version 1.0; the Intel Virtual Interface (VI) Architecture Developers' Guide, Version 1.0; and pass the "Virtual Interface Architecture Conformance Suite". The specification is found at:
http://www.intel.com/design/servers/vi/the_spec/specification.htm
The Developer's Guide and information on the conformance suite are found at:
http://www.intel.com/design/servers/vi/developer/ia_imp_guide.htm
Detailed installation information is found in DB2 Enterprise - Extended Edition for Windows NT Quick Beginnings.
If you are not sure of the service level of the DB2 UDB product you have installed, you can determine it by examining the registry:
If you contact DB2 Service and Support about VI, the service level helps to identify your installed level of DB2 code, including any FixPaks.
This product must be installed in each of the partitions/nodes using the Virtual Interface Protocol. During the installation procedure, when prompted choose "This machine will be an instance owning node" on each of the partitions/nodes.
Update the hosts file with the IP address and host name for each of the partitions/nodes. The hosts file is found under the \winnt\system32\drivers\etc\ directory on the drive where the operating system was installed. The hosts file must be updated on each of the nodes.
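For illustration only (the host names and addresses below are hypothetical), the entries added to the hosts file on each node might look like this:

   9.21.11.101   node1
   9.21.11.102   node2
   9.21.11.103   node3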
Create the partitioned database (MPP) instance using the instance create utility. Choose one machine to act as the co-ordinator node. On this machine, open a DB2 Command Window and enter:
db2icrt <instance_name> /mpp /u:<username>,<password>
This machine is then known as the co-ordinator node or the instance-owning machine. Node 0 is automatically created on this machine.
On the other partitions/nodes in the database, open a DB2 Command Window and enter:
db2ncrt /n:<node_number> /u:<username>,<password> /i:<instance_name> /o:<instance_owner_name>
The node_number is used to uniquely identify the database partition server within the database environment. The number must be from 1 to 999. The instance_owner_name is the computer name of the instance-owning machine (co-ordinator node).
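As an illustration only (the instance name, account, and machine names below are hypothetical), the commands for a two-node instance might be:

   REM On the instance-owning machine (co-ordinator node, node 0):
   db2icrt MPPINST /mpp /u:db2admin,password

   REM On the second machine (node 1), where COORD01 is the computer
   REM name of the instance-owning machine:
   db2ncrt /n:1 /u:db2admin,password /i:MPPINST /o:COORD01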
Testing the installation and creating an index:
When problems occur in this environment, you can take action based on the type of problem as presented below:
Ensure that the c:\profiles directory is present on each machine and is shared with the share name "profiles". Ensure all partitions are "pingable" from the co-ordinator node.
Review the explanation for the returned error code by using the db2 ? sqlxxxx command, and follow the suggested action associated with the error.
A system error may be returned. If this is the case, issue db2stop and retry db2start. If the problem persists, reboot all partitions and then retry.
Ensure all partitions have the same date and time zone. The times do not need to be identical: agreement within one hour is sufficient.
Ensure all the partitions are in one domain and that the user name and password used belong to the following groups:
Review the contents of the Control panel-->Services to ensure that all the DB2:<instance_name> -X services have the correct DB2ADMIN account information.
Ensure you are running the command in a "DB2 command window". The title of this window is "DB2 CLP".
Run db2set -g DB2TEMPDIR=C:\TMP on all machines in the instance. Ensure the DB2 Remote Command Service is started with the correct DB2ADMIN account information. Finally, ensure c:\temp and c:\tmp are present.
IBM announced support for Virtual Interface (VI) Architecture with DB2 UDB EEE V5.2. The documentation provided no specifics regarding the VI products that have been tested with DB2. This document provides specific information for GigaNet Interconnect.
To find out about other products adhering to VI Architecture and supported by DB2 UDB EEE V5.2, please contact the DB2 UDB support organization at http://www.software.ibm.com/data or call 1-800-237-5511 (only in the U.S.A).
To find out about GigaNet products, or to contact GigaNet Service and Support, please use the following URL: http://www.giganet.com/
The hardware and software required to set up this environment include the following products:
The steps required to ensure that GigaNet Interconnect can work with DB2 UDB are shown below. Each step is only a summary of what is required; not all of the details associated with each step are presented here. You should also use the referenced documentation at each step, which provides the detailed instructions and direction needed.
Each GigaNet GNN1000 is packaged with a GigaNet cLAN Software CD-ROM. The CD-ROM contains all of the necessary software to set up the GigaNet Interconnect. In addition, the CD-ROM also contains the VI Architecture SDK and Adobe Acrobat Reader. These two items are only needed by those individuals who are developing VI-enabled applications.
Summary of steps:
Here are the steps:
GigaNet Adapter Driver software is also available on GigaNet's web site, http://www.giganet.com. Please refer to the download and installation instructions found on the support page of GigaNet's web site.
The installation of the GNN1000 Adapter Driver causes the node to re-boot.
Install the GigaNet cLAN Management Console. Please refer to the GigaNet GNN1000 User Guide for installation instructions and additional information about the cLAN Management Console. Here are additional details on the installation procedure:
GigaNet cLAN Management Console software is also available on GigaNet's web site, http://www.giganet.com. Please refer to the download and installation instructions found on the support page of GigaNet's web site.
The installation of the cLAN Management Console may cause the node to re-boot.
If the connection test does not appear to be working, refer to the troubleshooting sections of the GigaNet GNN1000 User Guide and the GigaNet GNX5000 User Guide.
On each database partition server in the instance, set the following DB2 registry variables and carry out the following tasks:
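The specific variables and values are listed in DB2 Enterprise - Extended Edition for Windows NT Quick Beginnings. Purely as an illustration (the variable names and values below are assumptions and should be verified against that book), the db2set commands on each database partition server might resemble:

   db2set DB2_VI_ENABLE=ON
   db2set DB2_VI_VIPL=vipl.dll
   db2set DB2_VI_DEVICE=nic0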
IBM announced support for Virtual Interface (VI) Architecture with DB2 UDB EEE V5.2. The documentation provided no specifics regarding the VI products that have been tested with DB2. This document provides specific information for ServerNet Interconnect.
To find out about other products adhering to VI Architecture and supported by DB2 UDB EEE V5.2, please contact the DB2 UDB support organization at http://www.software.ibm.com/data or call 1-800-237-5511 (only in the U.S.A).
To find out about ServerNet products, or to contact ServerNet Service and Support, please use the following URL: http://www.servernet.com/
The hardware and software required to set up this environment include the following products:
The following are the steps required to ensure that ServerNet Interconnect can work with DB2 UDB. Each step is only a summary of what is required; not all of the details associated with each step are presented here. You should also use the referenced documentation at each step, which provides the detailed instructions and direction needed.
The steps shown below also assume that you are only using up to six (6) nodes in the cluster. Contact ServerNet if you have a requirement to use more than six nodes.
Here are the steps:
If the hardware does not appear to be working, refer to the ServerNet-I Virtual Interface Software Release Document, (product ID N0031) for additional troubleshooting help.
On each database partition server in the instance, set the following DB2 registry variables and carry out the following tasks:
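As with the GigaNet configuration, the exact variable names and values should be taken from the Quick Beginnings book. Purely as an illustration (the values below are assumptions), the db2set commands for the ServerNet configuration might resemble:

   db2set DB2_VI_ENABLE=ON
   db2set DB2_VI_VIPL=vipl.dll
   db2set DB2_VI_DEVICE=VINIC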
To specify the IDENTITY collating sequence when creating a database, specify SQL_CS_NONE (which implements a binary collating sequence) in the database description block (SQLEDBDESC) structure.
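As a minimal C sketch only (the field name sqldbcss is assumed here; the other SQLEDBDESC fields must still be completed as described in the Create Database API documentation):

   #include <string.h>
   #include <sqlenv.h>

   /* Sketch: prepare a database description block that requests the
      IDENTITY (binary) collating sequence before calling the create
      database API. */
   void set_identity_collation(struct sqledbdesc *pDesc)
   {
       memset(pDesc, 0, sizeof(*pDesc));    /* complete the remaining fields   */
                                            /* as described in the API manual  */
       pDesc->sqldbcss = SQL_CS_NONE;       /* binary (IDENTITY) collating     */
                                            /* sequence                        */
   }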
In the description of the pMediaTargetList parameter (value SQLU_ADSM_MEDIA), the sentence "No additional input is required." should be replaced with:
If an SQLU_MEDIA_ENTRY structure is not being used to specify a path for the backup image, initialize the MEDIA pointer in the SQLU_MEDIA_LIST_TARGETS structure to NULL.
The following section is missing from the API description:
Required Connection: Database
There is no callerac parameter available in the sqluhgne API with which to retrieve DDL information recorded in the history file. To retrieve this information, a new API has been created:
SQL_API_RC SQL_API_FN sqluhget(
    unsigned short Handle,            // IN: Handle returned from sqluhops
    unsigned short CallerAction,      // IN: For special handling of command field
    unsigned long Reserved_1,         // Reserved for future use
    struct sqluhinfo * pHistoryInfo,  // IN/OUT: Pointer to entry data
    struct sqluhadm * pAdminInfo,     // IN/OUT: Pointer to enhanced administrative data
    void * pReserved,                 // Reserved for future use
    struct sqlca * pSqlca);           // SQLCA
The parameters of this API have the same meanings as the corresponding parameters of the sqluhgne API, with the exception of CallerAction, which is new. The reserved fields of this API, Reserved_1 and pReserved, must be initialized to 0 and NULL, respectively. The possible values of CallerAction are:
SQLUH_GET_NEXT_ENTRY
   Retrieves the next matching entry. No DDL information is returned to the caller, but the length field of the sqluhadm.command structure will be set to the length of any DDL field that exists for that entry.

SQLUH_GET_DDL
   If this action is passed to the API immediately after fetching an entry, the DDL data associated with that entry is returned. It is the responsibility of the caller to indicate - through the sqluhadm.command.length field - the number of bytes of memory which have been allocated for DDL data to be returned through the sqluhadm.command.data field.

SQLUH_GET_NEXT_ENTRY_DDL
   Retrieves the next matching entry. It also returns any DDL information which has been recorded for the entry. The caller must indicate - through the sqluhadm.command.length field - the number of bytes of memory which have been allocated for the sqluhadm.command.data field.
The second parameter of this API, which was "int callerac", is now "struct sqluhadm *pAdminInfo". The API definition is:
SQL_API_RC SQL_API_FN sqluhgne(
    unsigned short Handle,
    struct sqluhadm * pAdminInfo,
    struct sqluhinfo * pHistoryInfo,
    struct sqlca * pSqlca);
The description of the SQLUH_LIST_ADM_ALTER_TABLESPACE caller action should be changed to:
Select only the ALTER TABLESPACE records that pass the other filters. The DDL field associated with an entry will not be returned. To retrieve the DDL information for an entry, sqluhget must be called with a caller action of SQLUH_GET_DDL immediately after the entry is fetched.
This API affects only the partition to which an application is directly connected; the load utility operates on a single database partition only.
The Usage Notes section incorrectly states that "summary tables" that are dependent on the tables being loaded are placed in check pending state. It should state that "summary tables defined with REFRESH IMMEDIATE", that are dependent on the tables being loaded, are placed in check pending state.
The valid entries for the SQLE-CLIENT-INFO TYPE element should be changed to:
SQLE_CLIENT_INFO_USERID SQLE_CLIENT_INFO_WRKSTNNAME SQLE_CLIENT_INFO_APPLNAME SQLE_CLIENT_INFO_ACCTSTR
Table 46, "Updateable Database Manager Configuration Parameters" has an additional entry for a new parameter, catalog_noauth:
Parameter Name    Token                      Token Value    Data Type
--------------    -----------------------    -----------    ---------
catalog_noauth    SQLF_KTN_CATALOG_NOAUTH    314            Uint16
The definition of the SQLUHADM structure should be changed to:
SQL_STRUCTURE sqluhadm
{
    char end_time[SQLUH_TIMESTAMP_SZ+1];  // OUT: Completion time stamp of the event
    char id[SQLUH_ID_SZ+1];               // OUT: Unique identifier of a dropped table
    struct sqlca event_sqlca;             // OUT: SQLCA associated with the entry
    struct sqlchar command;               // IN/OUT: DDL command information
};
Notes:
Change "struct Init_output *," in the "C API Syntax" section to "struct Data *,".
When running the db2start command (or using the NET START command) to start the database manager on Windows NT, the command does not return any warnings if a communication subsystem fails to start. The user should always examine the NT Event Log or the DB2DIAG.LOG for any errors that may have occurred during the running of db2start. This occurs because the database manager on Windows NT is implemented as an NT service: as long as the service itself starts successfully, db2start reports success, even if a communication subsystem failed to start.
There is a problem with the "cob2 -dll" command of VisualAge COBOL for Windows NT Version 2.2. The original command in "x:\sqllib\samples\cobol\bldvacbs.bat" for creating a DLL is:
cob2 -dll %1.obj db2api.lib
This version of VisualAge COBOL returns the following error:
fatal error : Unable to open "E:\sqllib\samples\cobol\db2api.lib".
This will be corrected in the first FixPak of VisualAge COBOL for Windows NT. Until that FixPak is available, however, the problem can be solved by replacing the above command with:
ilib /nol /gi:%1 %1.obj
ilink /free /nol /dll db2api.lib %1.exp %1.obj iwzrwin3.obj
There is a new environment variable for specifying which Java Development Kit to use on the Windows operating system for DB2 SQLJ. The environment variable is called DB2JVIEW, and it applies to all DB2 SQLJ commands (db2profc, db2profp, profdb and profp), except sqlj. If DB2JVIEW=1, Microsoft SDK for Java will be used; that is, if you call "profp", it will run as "jview sqlj.runtime.profile.util.ProfilePrinter". If DB2JVIEW=0, or not set, Sun JDK will be used; that is, if you call "profp", it will run as "java sqlj.runtime.profile.util.ProfilePrinter". The command "sqlj" will ignore this variable, and always use the Sun JDK to run.
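For example (the profile file name below is hypothetical), to run the profile printer under the Microsoft SDK for Java:

   set DB2JVIEW=1
   profp MyApp_SJProfile0.ser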
There have been several changes to the Java sample programs documented in this chapter. The stored procedure and UDF programs have been split into separate client and server programs. This allows users on client machines to remotely access stored procedures and UDFs on server machines. The affected files are now as follows:
The script file embprep has been created to precompile and bind the SQLJ programs. It is called by the makefile and executes the db2profc command on the Java embedded SQL sample programs. It allows optional arguments for database, user ID and password, thereby facilitating remote binding to databases on DB2 servers from client machines.
In SCO UnixWare 7, functions defined in a client program and not explicitly exported are not available to a stored procedure library called by the client program. An alternative to recoding the client program in order to make the functions external is to use the compile option -Wl,-Bexport, which exports all functions defined in the client program. This option has been added to the makefiles for SCO UnixWare 7 C and C++ to compile the sample client program fillcli, which calls the stored procedure library fillsrv for these languages. The makefiles and sample programs are located in the sqllib/samples/c and sqllib/samples/cpp directories.
In the "Micro Focus COBOL" section, the link option -lthread in the bldmfcc script file is no longer needed as the thread library has been incorporated into the DB2 library, represented by the -ldb2 option.
To build and run Java programs on HP-UX now requires the HP-UX Developer's Kit for Java Release 1.1.3 (or higher).
DB2 does not support Java stored procedures and Java user-defined functions (UDFs) on HP-UX and SCO UnixWare servers. Because of this restriction, the following Java sample programs cannot be run against DB2 databases on HP-UX and SCO UnixWare:
DB2Stp.java DB2Udf.java CatUdf.sqlj DropUdf.sqlj Stp.sqlj Udfsrv.java
On Silicon Graphics IRIX, it is a known problem that the connection context close() method may cause a trap. To get around the problem, leave the connection context to be closed automatically during garbage collection.
There have been several changes to the Java sample programs documented in this chapter. The stored procedure and UDF programs have been split into separate client and server programs. This allows users on client machines to remotely access stored procedures and UDFs on server machines. The affected files are now as follows:
The script file embprep has been created to precompile and bind the SQLJ programs. It is called by the makefile and executes the db2profc command on the Java embedded SQL sample programs. It allows optional arguments for database, user ID and password, thereby facilitating remote binding to databases on DB2 servers from client machines.
The description of SQLPrepare() does not indicate that deferred prepare is now ON by default.
This means that the processing of the SQL statements passed to the database using SQLPrepare() does not take place when the SQLPrepare() function is called. Instead, it is processed when SQLExecute() is called.
Minimal error checking is performed by DB2 CLI when SQLPrepare() is called (that is, valid function arguments, and so on). The true SQLSTATEs are not returned from the prepare until SQLExecute() is called. Your application should therefore expect SQLSTATEs from SQLPrepare() after the call to SQLExecute(), followed by SQLSTATEs from the SQLExecute() call itself.
For more information see "Deferred Prepare now on by Default" in "Appendix B. Migrating Applications", in the section "Changes from Version 2.1.1 to 5.0.0". This describes how to turn deferred prepare off for Version 2 programs, as well as a further description of the deferred prepare process.
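A minimal C sketch of where the SQLSTATEs actually surface with deferred prepare (error handling is abbreviated; the statement text is illustrative, and hstmt is assumed to be an allocated statement handle on an open connection):

   SQLCHAR stmt[] = "SELECT name FROM staff WHERE id = 10";  /* hypothetical statement */
   SQLRETURN rc;

   rc = SQLPrepare(hstmt, stmt, SQL_NTS);
   /* With deferred prepare ON, only minimal checking happens here; a   */
   /* syntax error in the statement is normally not reported yet.       */

   rc = SQLExecute(hstmt);
   if (rc != SQL_SUCCESS)
   {
       /* Both the prepare-time and the execute-time SQLSTATEs are      */
       /* returned here, so check diagnostics after SQLExecute().       */
       SQLCHAR     sqlstate[6], msg[SQL_MAX_MESSAGE_LENGTH];
       SQLINTEGER  nativeErr;
       SQLSMALLINT msgLen;
       SQLError(SQL_NULL_HENV, SQL_NULL_HDBC, hstmt,
                sqlstate, &nativeErr, msg, sizeof(msg), &msgLen);
   }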
The description states that cursors are closed by a call to SQLDisconnect(). This is not the case, however, with stored procedures.
An application should not rely on SQLDisconnect() to close cursors, even if it is not a stored procedure. In both cases, the cursor should be closed using SQLCloseCursor(), then the statement handle freed with a call to SQLFreeHandle() with a HandleType of SQL_HANDLE_STMT.
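A short sketch of the recommended cleanup sequence (assuming hstmt is a statement handle with an open cursor):

   /* Close the cursor explicitly rather than relying on SQLDisconnect() */
   SQLCloseCursor(hstmt);

   /* Then free the statement handle */
   SQLFreeHandle(SQL_HANDLE_STMT, hstmt);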
The book indicates that the stored procedures must be run on a remote server. This is no longer the case. Stored procedures running on a local server can return result sets.
The book indicates that the stored procedure should call SQLFreeStmt() with either SQL_DROP or SQL_CLOSE; however, only SQL_CLOSE should be used.
All stored procedures should be registered in the new SYSCAT.PROCEDURES and SYSCAT.PROCPARMS catalog tables using the CREATE PROCEDURE statement. For more information about this SQL statement, see the SQL Reference.
If the stored procedures are not registered using the CREATE PROCEDURE statement, CLI or ODBC applications will not know that they exist. For more information, see the section "Replacement of the Pseudo Catalog Table for Stored Procedures" in the Migration Appendix.
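As an illustration only (the procedure name, library name, and clauses shown are assumptions; see the CREATE PROCEDURE statement in the SQL Reference for the exact syntax), a registration from the command line processor might look like:

   db2 "CREATE PROCEDURE MYSCHEMA.RESULT_SET_PROC (IN deptnum SMALLINT)
        RESULT SETS 1
        EXTERNAL NAME 'myproclib!result_set_proc'
        LANGUAGE C
        PARAMETER STYLE GENERAL
        FENCED"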
There is an error in the documentation for SQLFetchScroll() in the version 5.0 Call Level Interface Guide and Reference. The table for SQL_FETCH_RELATIVE rules should be replaced with the following table:
Table 3. SQL_FETCH_RELATIVE Rules

Condition                                              First row of new rowset
-----------------------------------------------------  -----------------------------
(Before start AND FetchOffset > 0) OR
(After end AND FetchOffset < 0)                         -- a

Before start AND FetchOffset <= 0                       Before start

CurrRowsetStart = 1 AND FetchOffset < 0                 Before start

CurrRowsetStart > 1 AND
CurrRowsetStart + FetchOffset < 1 AND
|FetchOffset| > RowsetSize                              Before start

CurrRowsetStart > 1 AND
CurrRowsetStart + FetchOffset < 1 AND
|FetchOffset| <= RowsetSize                             1 b

1 <= CurrRowsetStart + FetchOffset <= LastResultRow     CurrRowsetStart + FetchOffset

CurrRowsetStart + FetchOffset > LastResultRow           After end

After end AND FetchOffset >= 0                          After end
This command is available on Windows NT, Windows 95, and Windows 98.
The following switches are now supported:
/c  Invoke cmd.exe with the /c option (that is, execute the command, and then terminate). For example, "db2cmd /c dir" causes the "dir" command to be invoked in a command window, and then the command window closes.

/w  Wait until the cmd.exe process ends. For example, "db2cmd /c /w dir" invokes the "dir" command, and db2cmd.exe does not end until the command window closes.

/i  Run the command window, sharing the same console and inheriting file handles. For example, "db2cmd /c /w /i db2 get dbm cfg > myoutput" invokes cmd.exe to run the db2 command and to wait for its completion. A new console is not assigned, and stdout is piped to the file "myoutput".

/t  Instead of using "DB2 CLP" as the title of the command window, inherit the title from the invoking window. This is useful if you want to set up an icon with a different title that invokes "db2cmd /t".
Note: | All switches must appear before any commands to be executed. For example: db2cmd /t db2 |
If DB21061E ("Command line environment not initialized.") is returned when bringing up the CLP-enabled DB2 window, or running CLP commands on Windows 95 or Windows 98, the operating system may be running out of environment space. Check the config.sys file for the SHELL environment setup parameter, and increase its value accordingly. For example:
SHELL=C:\COMMAND.COM C:\ /P /E:32768
The "Command Parameters" section is missing the following description:
-evm evmon-name Specifies the name of the event monitor whose traces are to be analyzed.
The "-db" and "-evm" keywords are case sensitive.
Option "-a" generates statistics for all users on the database. If used with option "-e", it generates the DDL for all user tables in the database, but statistics are not generated.
Notes:
The "Command Parameters" section has been changed to:
variable=value
   Sets a specified variable to a specified value. To delete a variable, do not specify a value for the specified variable. Changes to settings take effect after the instance has been restarted.
-g
   Accesses the global profile variables.
-i instance
   Specifies the instance profile to use instead of the current, or default.
node-number
   Specifies a number listed in the db2nodes.cfg file.
-all
   Displays all occurrences of the local environment variables as defined in:
   o  The environment, denoted by [e]
   o  The node level registry, denoted by [n]
   o  The instance level registry, denoted by [i]
   o  The global level registry, denoted by [g].
-null
   Sets the value of the variable at the specified registry level to null. This avoids having to look up the value in the next registry level, as defined by the search order.
-r instance
   Resets the profile registry for the given instance.
-n DAS node
   Specifies the remote DB2 administration server node name.
-u user
   Specifies the user ID to use for the administration server attachment.
-p password
   Specifies the password to use for the administration server attachment.
-l
   Lists all instance profiles.
-lr
   Lists all supported registry variables.
-v
   Specifies verbose mode.
-h/-?
   Displays help information. When this option is specified, all other options are ignored, and only the help information is displayed.
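For example (using DB2TEMPDIR, the registry variable mentioned earlier in these notes, as the illustration), to set a variable for the current instance, set it globally, and then list all supported registry variables:

   db2set DB2TEMPDIR=C:\TMP
   db2set -g DB2TEMPDIR=C:\TMP
   db2set -lr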
Although this utility is installed at the server, it can be run at a client; to do so, copy the executable from the server to the client.
In an MPP environment, the utility must be run from the catalog node of the specified database.
To use tape devices, DB2 users on SCO UnixWare 7 need to set BUFFER to 16. The default value of BUFFER is 1024 pages. If BUFFER is set to zero, the database manager configuration parameter BACKBUFSZ must be set to 16.
Table 6 ("Valid Delimiters"), and all references to it, should be removed. The information contained in the note at the bottom of the table, and the restrictions that follow the note, are correct, however.
Note: | In a DBCS environment, the pipe (|) character delimiter is not supported. |
Replace the section "DB2 Data Links Manager Considerations" in the Usage Notes with the following:
To ensure that a consistent copy of the table and the corresponding files referenced by the DATALINK columns are copied for export, do the following:
EXPORT is executed as an SQL application. The rows and columns satisfying the SELECT statement conditions are extracted from the database. For the DATALINK columns, the SELECT statement should not specify any scalar function. The export utility uses APIs to extract parts of the DATALINK value, such as link type, file server name, file path name, and comments.
Successful execution of EXPORT results in generation of the following files:
Use the dlfm_export utility to export files from a file server as follows:
Usage: dlfm_export <control file name> [<tar file name>]

Description:
   <control file name>  This is the file name that was generated by running the export utility on the DB2 client.
   <tar file name>      This is the name of the archive file that will be generated (the default is export.tar in the current working directory).
A corresponding utility called dlfm_import is provided to retrieve and to restore files from the archive that dlfm_export generates. This utility must be used whether the archived files are being restored on the same or on a different file server. Use the dlfm_import utility to retrieve files from the archive as follows:
Usage: dlfm_import [<tar file name>]

Description:
   <tar file name>  This is the name of the archive file that will be used to recover the files (the default is export.tar in the current working directory).
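For example (the file names below are hypothetical), to archive the files listed in a control file on the source file server and later restore them on the target file server:

   dlfm_export export.ctl /backup/datalink_files.tar     (run as root on the source file server)
   dlfm_import /backup/datalink_files.tar                (run as root on the target file server)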
Notes:
Exporting Between Instances
The table below shows how to export DB2 data and the files that are referenced by the instance called InstanceA to the instance called InstanceB. InstanceA uses file servers DLFM1 and DLFM2. InstanceB uses file servers DLFMX and DLFMY. The files on DLFM1 will be exported to DLFMX, and the files on DLFM2 will be exported to DLFMY.
+------------------------------------------------------------------------------+
| InstanceA with File Servers DLFM1 and DLFM2  | Step                           |
+------------------------------------------------------------------------------+
| DB2 Data | File1 for DLFM1 | File2 for DLFM2 |                                |
| on File  |                 |                 |                                |
+----------+-----------------+-----------------+-------------------------------+
|          | Run the         | Run the         | 1. Run dlfm_export (as root)  |
|          | dlfm_export     | dlfm_export     |    on both file servers.      |
|          | command         | command         |    This will produce an       |
|          |                 |                 |    archive on both file       |
|          |                 |                 |    servers.                   |
+----------+-----------------+-----------------+-------------------------------+
+------------------------------------------------------------------------------+
| InstanceB with File Servers DLFMX and DLFMY  | Step                           |
+------------------------------------------------------------------------------+
|          | On DLFMX,       | On DLFMY,       | 2. Run dlfm_import (as root)  |
|          | restore from an | restore from an |    on both file servers.      |
|          | archive         | archive         |                               |
+----------+-----------------+-----------------+-------------------------------+
|          |                 |                 | 3. Run the IMPORT command on  |
|          |                 |                 |    InstanceB, with the        |
|          |                 |                 |    parameter                  |
|          |                 |                 |    DL_URL_REPLACE_PREFIX to   |
|          |                 |                 |    specify the appropriate    |
|          |                 |                 |    file server for each       |
|          |                 |                 |    exported file.             |
+----------+-----------------+-----------------+-------------------------------+
+------------------------------------------------------------------------------+
| After the import utility is run on InstanceB, InstanceA data and all          |
| files referenced by DATALINK columns are imported.                            |
+------------------------------------------------------------------------------+
A new parameter, catalog_noauth, specifies whether users are able to catalog and uncatalog databases and nodes, or DCS and ODBC directories, without SYSADM authority. The default value (0) for this parameter indicates that SYSADM authority is required. When this parameter is set to 1 (yes), SYSADM authority is not required.
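For example (assuming the CLP parameter keyword matches the configuration parameter name), to allow cataloging without SYSADM authority:

   db2 update dbm cfg using catalog_noauth 1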
TRUST_ALLCLNTS as well as TRUST_CLNTAUTH are used to determine where users are validated for the database environment. By accepting the default for TRUST_ALLCLNTS, all clients are treated as trusted clients. This means a level of security is available at the client, and that users can be validated at the client. Other options may be used to protect the server against certain clients based on their platform or database protocol.
In an MPP environment, if a database was bound using the INSERT BUF option, buffered insert will be disabled during any import operation to that database if the INSERT_UPDATE option is specified.
NOCHECKLENGTHS is supported for both the ASC and the IXF file formats. This option does not have to be used with the FORCEIN modifier. If NOCHECKLENGTHS is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows may be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. To protect existing customer applications that depend on the older priority, a new file type modifier, DELPRIORITYCHAR, has been created for the import utility. This modifier reverts the delimiter priorities to: character delimiter, record delimiter, column delimiter. The syntax is: db2 import ... modified by delprioritychar ...
There can be a maximum of eight open database directory scans per process. To overcome this restriction for a batch file that issues more than eight LIST DATABASE DIRECTORY commands within a single DB2 session, convert the batch file into a shell script; the "db2" prefix then generates a new DB2 session for each command.
This command affects only the partition to which a direct connection exists; the load utility operates on a single database partition only.
The COPY option is not supported for tables with DATALINK columns.
The REPLACE option is not supported for tables with DATALINK columns.
The NONRECOVERABLE clause should not be used when DATALINK columns with the FILE LINK CONTROL attribute are present in - or being added to - the table.
There is a limit on the combination of DATALINK columns and the number of DLFMs used. There can be at most 256 unique combinations of DLFMs and DATALINK columns in a table. For example, the table can have a maximum of 128 DATALINK columns if the number of DLFMs configured is two.
Load copy to tapes is not supported for DB2 servers running on SCO UnixWare 7.
The Usage Notes section incorrectly states that "summary tables" that are dependent on the tables being loaded are placed in check pending state. It should state that "summary tables defined with REFRESH IMMEDIATE", that are dependent on the tables being loaded, are placed in check pending state.
NOCHECKLENGTHS is supported for both the ASC and the IXF file formats. This option does not have to be used with the FORCEIN modifier. If NOCHECKLENGTHS is specified, an attempt is made to load each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows may be successfully loaded if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. To protect existing customer applications that depend on the older priority, a new file type modifier, DELPRIORITYCHAR, has been created for the load utility. This modifier reverts the delimiter priorities to: character delimiter, record delimiter, column delimiter. The syntax is: db2 load ... modified by delprioritychar ... The new option is not supported in an MPP environment, nor is it supported by the AutoLoader. Load SMP exploitation is disabled when this option is specified (CPU_PARALLELISM is forced to a value of 1). The new option is mutually exclusive with the "DUMPFILE" feature, and is supported only for the DEL file format.
There is an error in the description of "DLREPORT filename". There is no default value for this parameter.
The file name must be a fully qualified file name. For example:
/home/user/report.out
If the WITHOUT DATALINK option is not specified, and the DB2 File Manager containing the DATALINK data is unavailable, the restore operation will fail.
If this option is specified, and the DB2 File Manager containing the DATALINK data is unavailable, all table spaces which contain tables with DATALINK values on the unavailable server are placed in RESTORE PENDING state.
To use tape devices, DB2 users on SCO UnixWare 7 need to set BUFFER to 16. The default value of BUFFER is 1024 pages. If BUFFER is set to zero, the database manager configuration parameter BACKBUFSZ must be set to 16.
When running the db2start command (or using the NET START command) to start the database manager on Windows NT, the command does not return any warnings if a communication subsystem fails to start. The user should always examine the NT Event Log or the DB2DIAG.LOG for any errors that may have occurred during the running of db2start. This occurs because the database manager on Windows NT is implemented as an NT service: as long as the service itself starts successfully, db2start reports success, even if a communication subsystem failed to start.
There are two changes to the numbered list under the "dbinfo" argument:
17. Unique application identifier (appl_id) This field is a pointer to a C null-terminated string which uniquely identifies the application's connection to DB2. It is regenerated at each database connect. The string has a maximum length of 32 characters, and its exact format depends on the type of connection established between the client and DB2. Generally it takes the form <x>.<y>.<ts> where the <x> and <y> vary by connection type, but the <ts> is a 12 character time stamp of the form YYMMDDHHMMSS, which is potentially adjusted by DB2 to ensure uniqueness. Example: *LOCAL.db2inst.980707130144
18. Reserved field (resd2) This field is for future use. It is defined as 20 characters long.
BIGINTs can now be declared in one of three ways:
This declaration is specific to the Microsoft compiler.
This is useful when writing platform-independent applications. The DB2 UDB sqlsystm.h header file will type define sqlint64 as "__int64" on the Windows NT platform when using the Microsoft compiler; otherwise, sqlint64 will be type defined as "long long".
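The three declaration styles described above are presumably __int64, long long, and sqlint64; as a brief sketch (host variable names are hypothetical):

   EXEC SQL BEGIN DECLARE SECTION;
     __int64   bigval1;   /* Microsoft compiler specific                 */
     long long bigval2;   /* compilers that support the long long type   */
     sqlint64  bigval3;   /* portable: typedef supplied in sqlsystm.h    */
   EXEC SQL END DECLARE SECTION;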
When specifying the WCHARTYPE CONVERT option on a Windows platform, you should note that wchar_t on Windows platforms is Unicode. Therefore, if your C/C++ compiler's wchar_t is not Unicode, the wcstombs() function call may fail with SQLCODE -1421 (SQLSTATE=22504). If this happens, you can specify the WCHARTYPE NOCONVERT option, and explicitly call the wcstombs() and mbstowcs() functions from within your program.
The COBOL precompiler supports declarations of group data items in the host variable declare section. Among other things, this provides a shorthand for referring to a set of elementary data items in an SQL statement. For example, the following group data item can be used to access some of the columns in the STAFF table of the SAMPLE database:
01 staff-record.
   05 staff-id   pic s9(4) comp-5.
   05 staff-name.
      49 l pic s9(4) comp-5.
      49 d pic x(9).
   05 staff-info.
      10 staff-dept pic s9(4) comp-5.
      10 staff-job  pic x(5).
Group data items in the declare section can have any of the valid host variable types described above as subordinate data items. This includes all numeric and character types, as well as all large object types. The restriction of allowing only one level of group data items has been removed, and they can now be nested up to 10 levels. Note that you must declare VARCHAR character types with the subordinate items at level 49, as in the above example. If they are not at level 49, the VARCHAR is treated as a group data item with two subordinates, and is subject to the rules of declaring and using group data items. In the example above, staff-info is a group data item, whereas staff-name is a VARCHAR. The same principle applies to LONG VARCHAR, VARGRAPHIC and LONG VARGRAPHIC. You may declare group data items at any level between 02 and 49.
You can use group data items and their subordinates in four ways:
Method 1.
The entire group may be referenced as a single host variable in an SQL statement:
EXEC SQL SELECT id, name, dept, job INTO :staff-record FROM staff WHERE id = 10 END-EXEC.
The precompiler converts the reference to staff-record into a list, separated by commas, of all the subordinate items declared within staff-record. Each elementary item is qualified with the group names of all levels to prevent naming conflicts with other items. This is equivalent to the following method.
Method 2.
The second way of using group data items:
EXEC SQL SELECT id, name, dept, job
   INTO :staff-record.staff-id,
        :staff-record.staff-name,
        :staff-record.staff-info.staff-dept,
        :staff-record.staff-info.staff-job
   FROM staff WHERE id = 10 END-EXEC.
Note: | The reference to staff-id is qualified with its group name using the prefix staff-record., and not staff-id of staff-record as in pure COBOL. |
Method 3.
Here, subordinate items are referenced in a typical COBOL fashion, without being qualified to their particular group item:
EXEC SQL SELECT id, name, dept, job INTO :staff-id, :staff-name, :staff-dept, :staff-job FROM staff WHERE id = 10 END-EXEC.
As in pure COBOL, this method is acceptable to the precompiler as long as a given subordinate item can be uniquely identified. If, for example, staff-job occurs in more than one group, the precompiler issues an error indicating an ambiguous reference:
SQL0088N Host variable "staff-job" is ambiguous.
Method 4.
To resolve the ambiguous reference, partial qualification of the subordinate item can also be used.
EXEC SQL SELECT id, name, dept, job
   INTO :staff-id,
        :staff-name,
        :staff-info.staff-dept,
        :staff-info.staff-job
   FROM staff WHERE id = 10 END-EXEC.
Because a reference to a group item alone, as in method 1, is equivalent to a comma-separated list of its subordinates, there are instances where this type of reference leads to an error. For example:
EXEC SQL CONNECT TO :staff-record END-EXEC.
Here, the CONNECT statement expects a single character-based host variable. By giving the staff-record group data item instead, the host variable results in the following precompile-time error:
SQL0087N Host variable "staff-record" is a structure used where structure references are not permitted.
Other uses of group items which cause an SQL0087N to occur include PREPARE, EXECUTE IMMEDIATE, CALL, indicator variables, and SQLDA references. Groups with only one subordinate are permitted in such situations, as are references to individual subordinates, as in method 2, 3 and 4 above.
Indicator Tables
The COBOL precompiler supports the declaration of tables of indicator variables, which are convenient to use with group data items. They are declared as follows:
01 <indicator-table-name>.
   05 <indicator-name> pic s9(4) comp-5
      occurs <table-size> times.
For example:
01 staff-indicator-table.
   05 staff-indicator pic s9(4) comp-5
      occurs 7 times.
This indicator table can be used effectively with the first format of group item reference above:
EXEC SQL SELECT id, name, dept, job INTO :staff-record :staff-indicator FROM staff WHERE id = 10 END-EXEC.
Here, the precompiler detects that staff-indicator was declared as an indicator table, and expands it into individual indicator references when it processes the SQL statement. staff-indicator(1) is associated with staff-id of staff-record, staff-indicator(2) is associated with staff-name of staff-record, and so on.
Note: | If there are k more indicator entries in the indicator table than there are subordinates in the data item (for example, if staff-indicator has 10 entries, making k=6), the k extra entries at the end of the indicator table are ignored. Likewise, if there are k fewer indicator entries than subordinates, the last k subordinates in the group item do not have indicators associated with them. Note that you can refer to individual elements in an indicator table in an SQL statement. |
In the section "COM.ibm.db2.app.UDF", there is a list of functions that return specific DBINFO fields. To this list has been added:
public String getDBapplid() throws Exception
When running JDBC programs with the JDK 1.1.4 bundled with AIX 4.3.1 under a DBCS code page, DBCS data cannot be displayed properly. This is a known problem in this version of the JDK, and a fix for it (IX78932) is available.
Because a native method cannot make use of the thread library (libthread.so from UnixWare 7; see the UnixWare release notes), DB2 JDBC sample applications (DB2Appl, DB2Udf, and DB2Stp) do not work from a Version 5.2 client on UnixWare 7 connecting to a Version 5 UDB server, or a Version 5 client connecting to a FixPak 3 UDB server.
JDBC 2.0 is the latest version of JDBC from Sun. This version of JDBC has two defined parts: the core API, and the standard extension API. The core API has been released. For information on its specification, refer to:
http://www.software.ibm.com/data/db2/java
The standard extension API is currently under review. For information on the review version of its specification, you can also visit the web page above.
Features
The JDBC 2.0 driver we ship in this FixPak supports the JDBC 2.0 core API. However, due to the unavailability of certain features of the DB2 Engine, not all features defined in the specification are supported. Here is a list of main features that are supported:
Here is a list of features that are NOT supported:
Prerequisite
JDK 1.2
Platform
The JDBC 2.0 driver is currently available only on Windows 95, Windows 98, and Windows NT operating systems.
Compatibility
This version of the specification is backward compatible with the previous version (1.22). However, the DB2 JDBC 1.22 driver supports LOB types as an extension of the JDBC 1.22 specification, and this extension is not part of the new specification's backward compatibility. This means that existing JDBC applications that rely on the LOB support of the JDBC 1.22 driver may not work with the new driver. To fix the problem, we recommend that you modify the application to be compliant with the new specification. However, this solution may not be practical for every situation. As a workaround we provide a keyword, JDBCVERSION, that you can set to "122" to tell the JDBC driver you want the 1.22 version for DB2 LOB support. The default is "200" for 2.0 version behavior. You can set this keyword in db2cli.ini, or pass it in as a connection attribute in the getConnection property argument.
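For example (the database alias SAMPLE is hypothetical), to request the JDBC 1.22 LOB behaviour through db2cli.ini, the entry for the data source might look like this:

   [SAMPLE]
   JDBCVERSION=122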
Installation
In this FixPak, the JDBC 1.22 driver is still the default driver. The JDBC 2.0 driver is included in the sqllib\java12 directory. To use the JDBC 2.0 driver, run the usejdbc2 batch file. This will create an sqllib\java11 directory for the 1.22 driver files. It will also back up the JDBC 1.22 driver files into this directory, and will copy the JDBC 2.0 driver files from the sqllib\java12 directory into the appropriate directories. To switch back to the JDBC 1.22 driver, execute the usejdbc1 batch file.
Release 0.70 of the DB2 UDB driver (DBD::DB2) for the Perl Database Interface (Perl DBI) is now available.
The Perl DBI is an Application Programming Interface (API) that provides database access for the Perl language. It defines a set of functions, variables, and conventions that provide a consistent database interface independent of the actual database being used.
The DBD::DB2 driver works in conjunction with the DBI to access DB2 UDB.
For additional information, see the Web page:
http://www.software.ibm.com/data/db2/perl/
Embedded SQL using the DATALINK data type is not supported.
On AIX, Solaris, HP-UX, SCO UnixWare 7, and Silicon Graphics IRIX, changes have been made to the functions that are used for run time querying of the code page and country code to be used for a database connection. They are now thread safe but can create some lock contention (and resulting performance degradation) in a multithreaded application which uses a large number of concurrent database connections.
A new environment variable has been created (DB2_FORCE_NLS_CACHE) to eliminate the chance of lock contention in multithreaded applications. When DB2_FORCE_NLS_CACHE is set to TRUE the code page and country code information is saved the first time a thread accesses it. From that point on the cached information will be used for any other thread that requests this information. By saving this information, lock contention is eliminated and in certain situations a performance benefit will be realized.
DB2_FORCE_NLS_CACHE should not be set to true if the application changes locale settings between connections. If this is done then the original locale information will be returned even after the locale settings have been changed. In general, multithreaded applications will not change locale settings. This ensures that the application remains thread safe.
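For example, to enable the cache for an instance, set the registry variable with db2set:

   db2set DB2_FORCE_NLS_CACHE=TRUE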
Changes have been introduced (a) to give the user explicit control over scratchpad duration, and (b) to avoid inconsistencies in the duration of scratchpads, for table functions written in:
This difference has led to the possibility of getting inconsistent results, for example when performing a join.
The following changes have been introduced for User Defined Functions (UDFs) which are table functions:
Or, you can elect to have the scratchpad content be preserved across OPEN calls, by specifying FINAL CALL. If you do this, you need not release resources during the CLOSE call processing, because a FINAL call will be made to the table function at end-of-statement. This FINAL CALL is new with this change, as is the balancing FIRST call which takes place before the first OPEN call. These two new call types occur only on table functions which specify FINAL CALL.
 Point      |  NO FINAL CALL               |  FINAL CALL
 in         |  LANGUAGE JAVA               |  LANGUAGE JAVA
 scan       |  SCRATCHPAD                  |  SCRATCHPAD
 time
   |
   V
------------+------------------------------+------------------------------
Before the  | No calls.                    | Class constructor is called
first OPEN  |                              | (means new scratchpad).
for the     |                              | UDF Method is called with
table func. |                              | "FIRST" call.
            |                              |- - - - - - - - - - - - - - -
            |                              | Constructor initializes
            |                              | class and scratchpad
            |                              | variables.
            |                              | Method connects to the Web
            |                              | Server.
            |                              |
------------+------------------------------+------------------------------
At each     | Class constructor is called  | UDF Method is called with
OPEN of the | (means new scratchpad).      | "OPEN" call.
table func. | UDF Method is called with    |
            | "OPEN" call.                 |
            |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
            | Constructor initializes      | Method opens the scan for
            | class and scratchpad         | whatever Web data it wants.
            | variables.                   | (Might be able to avoid
            | Method connects to the Web   | reopen after a CLOSE repo-
            | Server, and opens the scan   | sition, depending on what
            | for Web data.                | is saved in scratchpad!)
            |                              |
------------+------------------------------+------------------------------
At each     | UDF method is called with    | UDF Method is called with
FETCH for   | "FETCH" call.                | "FETCH" call.
new row of  |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
table func  | Method fetches and returns   | Method fetches and returns
data.       | next row of data, or EOT.    | next row of data, or EOT.
            |                              |
------------+------------------------------+------------------------------
At each     | UDF method is called with    | UDF Method is called with
CLOSE of the| "CLOSE" call.                | "CLOSE" call.
table func. | "close" method is called if  |
            | it exists for class.         |
            |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
            | Method closes its Web scan   | Method might reposition to
            | and disconnects from the     | the top of the scan, or
            | Web Server.                  | close the scan. It can
            | "close" does not need to do  | save any state in the
            | anything.                    | scratchpad, which will
            |                              | persist.
            |                              |
------------+------------------------------+------------------------------
After the   | No calls.                    | UDF Method is called with
last CLOSE  |                              | "FINAL" call.
of the table|                              | "close" method is called if
func.       |                              | it exists for class.
            |                              |- - - - - - - - - - - - - - -
            |                              | Method disconnects from the
            |                              | Web Server.
            |                              | "close" does not need to do
            |                              | anything.
            |                              |
------------+------------------------------+------------------------------
Some notes on this table:
public int getCallType () throws Exception
Defines for the new call types are added to sqludf.h, the UDF include file.
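As a sketch only (the define names below are assumed to be among those provided in sqludf.h; verify them in your copy of the include file), a C table function body might dispatch on the call-type argument like this:

   #include <sqludf.h>

   /* callType is the call-type argument DB2 passes to the table function */
   switch (*callType)
   {
       case SQLUDF_TF_FIRST:  /* one-time set-up (FINAL CALL functions only)   */ break;
       case SQLUDF_TF_OPEN:   /* open the scan                                 */ break;
       case SQLUDF_TF_FETCH:  /* return the next row, or SQLSTATE 02000        */ break;
       case SQLUDF_TF_CLOSE:  /* close the scan                                */ break;
       case SQLUDF_TF_FINAL:  /* one-time clean-up (FINAL CALL functions only) */ break;
       default:               /* future call types: fail safely                */ break;
   }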
UDB customers who have existing table functions are advised to examine the above options and make appropriate changes:
Detailed information changes will appear in the "SQL Reference" and the "Embedded SQL Programming Guide" (ESPG) when they are next updated. Meanwhile, a "Technote" describing these changes in more detail will be made available in the DB2 Technical Library on the World Wide Web. Use the search argument "FINAL CALL".
The following sections describe the detailed changes to the "Embedded SQL Programming Guide".
In Chapter 6 of the "Embedded SQL Programming Guide", under the heading "User-Defined Functions (UDF)", in the Example "Table Function Returning Document IDs", "FINAL CALL" should now read "NO FINAL CALL", and in the following paragraph note that FINAL CALL need not be specified for each table function.
In Chapter 7 of the "Embedded SQL Programming Guide", under the heading "Interface between DB2 and a UDF", in the section "The Arguments Passed from DB2 to a UDF", in the description for the scratchpad argument, the text beginning "The actual..." should be replaced by:
For table functions, the scratchpad is initialized as above only for the very first call to the UDF if FINAL CALL is specified on the CREATE FUNCTION. This "very first call" has type FIRST. After this call, the scratchpad content is totally under control of the table function.
If NO FINAL CALL was specified or defaulted, then the scratchpad is initialized as above for each OPEN call, and the scratchpad content is completely under control of the table function between OPEN calls. (This can be very important for a table function used in a join or subquery. If it is necessary to maintain the content of the scratchpad across OPEN calls, then FINAL CALL must be specified in your CREATE FUNCTION statement. With FINAL CALL specified, in addition to the normal OPEN, FETCH and CLOSE calls, the table function will also receive FIRST and FINAL calls, for the purpose of scratchpad maintenance and resource release.)
Also, in the description of the call-type argument, the second sentence should be replaced by the following sentence:
For external scalar functions this argument is only present if FINAL CALL is specified in the CREATE FUNCTION statement, but for external table functions it is ALWAYS present.
After that first paragraph, add the following new paragraph:
Note that even though all the current possible values are listed below, your UDF should contain a switch or case statement which explicitly tests for all the expected values, rather than containing "if A do AA, else if B do BB, else it must be C so do CC" type logic. This is because it is possible that additional call types may be added in the future, and if you don't explicitly test for condition C you will have trouble when new possibilities are added.
In the same section, where the values for the call-type argument are explained for Table Functions, the text should be amended as follows:
Under the heading "Summary of Argument Use", for the argument scratchpad and call-type, add the text ", and optionally FIRST and FINAL" in both cases.
In Chapter 7 of the "Embedded SQL Programming Guide", under the heading "Table Function Considerations", under the first bullet, there are now five call types instead of three: add FIRST and FINAL to the list.
Also, add a new bullet in this section as well, which should read as follows:
The error processing model for table function calls is as follows:
In the "Weather Table Function" example in Chapter 7, the UDF definition should specify NO FINAL CALL, and note that FINAL CALL is no longer mandatory for a table function (that is, ignore the statement that says it is mandatory).
In Chapter 15 of the "Embedded SQL Programming Guide", entitled "Programming in Java", under the heading "Creating Java UDFs and Stored Procedures", the second sentence of the second paragraph should be changed to read as follows:
Once you create and register these UDFs and stored procedures, and place the Java classes in the correct file location, described in "Where to Put Java Classes" on page 514, you can then (for UDFs) reference them in your SQL, or (for stored procedures) call them from your application written in any supported language.
And the sentence following that one should be removed.
In Chapter 15 of the "Embedded SQL Programming Guide", under the heading "Creating and Using Java User-Defined Functions", the fourth sentence of the first paragraph should be changed to read:
You can then refer to it in the SQL of your application.
In the subheading "Changing How a Java UDF Runs" under that same heading, the first three paragraphs should be replaced with the following five:
Typically, DB2 calls a UDF many times, once for each row of an input or result set in a query. If SCRATCHPAD is specified in the CREATE FUNCTION statement of the UDF, DB2 recognizes that some "continuity" is needed between successive invocations of the UDF, and therefore the implementing Java class is not instantiated for each call, but generally speaking once per UDF reference per statement. It is usually instantiated before the first call and used thereafter, but for table functions it may be instantiated more often; see the NO FINAL CALL execution model in the subsection which follows this one.
If, however, NO SCRATCHPAD is specified for a UDF, either a scalar or table function, then a clean instance is instantiated for each call to the UDF, by means of calling the class constructor.
A scratchpad may be useful for saving information across calls to a UDF. Whereas for C, C++ and OLE UDFs the scratchpad is maintained by DB2, in Java the UDF can simply use instance variables. For table functions using a scratchpad, you can control when you get a new instance by use of the FINAL CALL / NO FINAL CALL option on the CREATE FUNCTION statement, as indicated by the execution models in the subsection which follows this one.
For scalar functions, you use the same instance for the entire statement. Another choice you have is to use the DB2 provided Java methods getScratchpad() and setScratchpad(), in which case you will be using the DB2-managed scratchpad. The ability to achieve continuity between calls to a UDF by means of a scratchpad, is controlled by the SCRATCHPAD / NO SCRATCHPAD option of CREATE FUNCTION, regardless of whether the DB2 scratchpad or instance variables are used.
Please note that every reference to a Java UDF in a query is treated independently, even if the same UDF is referenced multiple times. This is the same as what happens for OLE, C and C++ UDFs as well. At the end of a query, if you specify the FINAL CALL option for a scalar function then the object's close() method is called. For table functions the close() method will always be invoked as indicated in the subsection which follows this one. If you do not define a close() method for your UDF class, then a stub function takes over and the event is ignored.
And in the next paragraph, which begins "If you specify the ALLOW PARALLEL ...", the last sentence should be removed.
Following the subsection "Changing How a Java UDF Runs", add a new subsection at the same level, entitled "Table Function Execution Model for Java", which contains the following:
For table functions written in Java, it is important to understand what happens at each point in DB2's processing of a given statement which is significant to the table function. The table which follows details this information. The bottom part of each box hints at what the code might be written to do for a typical table function (one which pulls some information in from the Web). Covered are both the NO FINAL CALL and the FINAL CALL cases, assuming SCRATCHPAD in both cases.
 Point      |  NO FINAL CALL               |  FINAL CALL
 in         |  LANGUAGE JAVA               |  LANGUAGE JAVA
 scan       |  SCRATCHPAD                  |  SCRATCHPAD
 time
   |
   V
------------+------------------------------+------------------------------
Before the  | No calls.                    | Class constructor is called
first OPEN  |                              | (means new scratchpad).
for the     |                              | UDF Method is called with
table func. |                              | "FIRST" call.
            |                              |- - - - - - - - - - - - - - -
            |                              | Constructor initializes
            |                              | class and scratchpad
            |                              | variables.
            |                              | Method connects to the Web
            |                              | Server.
            |                              |
------------+------------------------------+------------------------------
At each     | Class constructor is called  | UDF Method is called with
OPEN of the | (means new scratchpad).      | "OPEN" call.
table func. | UDF Method is called with    |
            | "OPEN" call.                 |
            |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
            | Constructor initializes      | Method opens the scan for
            | class and scratchpad         | whatever Web data it wants.
            | variables.                   | (Might be able to avoid
            | Method connects to the Web   | reopen after a CLOSE repo-
            | Server, and opens the scan   | sition, depending on what
            | for Web data.                | is saved in scratchpad!)
            |                              |
------------+------------------------------+------------------------------
At each     | UDF method is called with    | UDF Method is called with
FETCH for   | "FETCH" call.                | "FETCH" call.
new row of  |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
table func  | Method fetches and returns   | Method fetches and returns
data.       | next row of data, or EOT.    | next row of data, or EOT.
            |                              |
------------+------------------------------+------------------------------
At each     | UDF method is called with    | UDF Method is called with
CLOSE of the| "CLOSE" call.                | "CLOSE" call.
table func. | "close" method is called if  |
            | it exists for class.         |
            |- - - - - - - - - - - - - - - |- - - - - - - - - - - - - - -
            | Method closes its Web scan   | Method might reposition to
            | and disconnects from the     | the top of the scan, or
            | Web Server.                  | close the scan. It can
            | "close" does not need to do  | save any state in the
            | anything.                    | scratchpad, which will
            |                              | persist.
            |                              |
------------+------------------------------+------------------------------
After the   | No calls.                    | UDF Method is called with
last CLOSE  |                              | "FINAL" call.
of the table|                              | "close" method is called if
func.       |                              | it exists for class.
            |                              |- - - - - - - - - - - - - - -
            |                              | Method disconnects from the
            |                              | Web Server.
            |                              | "close" does not need to do
            |                              | anything.
            |                              |
------------+------------------------------+------------------------------
Some notes on this table:
public int getCallType () throws Exception
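For reference, a table function that behaves like the FINAL CALL column of the table above might be registered with a CREATE FUNCTION statement along the following lines. This is only an illustrative sketch: the function, column, class, and method names are invented, and the exact set and ordering of clauses should be verified against the CREATE FUNCTION description in the SQL Reference.

   CREATE FUNCTION weblinks (VARCHAR(200))
     RETURNS TABLE (title VARCHAR(100), url VARCHAR(200))
     EXTERNAL NAME 'WebScan!fetchRow'
     LANGUAGE JAVA
     PARAMETER STYLE DB2GENERAL
     NOT DETERMINISTIC
     FENCED
     NO EXTERNAL ACTION
     SCRATCHPAD
     FINAL CALL
     DISALLOW PARALLEL

With SCRATCHPAD and FINAL CALL specified, the class constructor and a FIRST call precede the first OPEN, and a FINAL call (followed by close(), if defined) follows the last CLOSE, as shown in the table above.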
In Chapter 15 of the "Embedded SQL Programming Guide", under the heading "Classes for Java Stored Procedures and UDFs", and further under the subheading "COM.ibm.db2.app.UDF", the paragraph describing the "public void close()" method should be changed to read as follows:
This method is called by the database for scalar functions at the end of a statement evaluation, if the UDF was created with the FINAL CALL option, and is analogous to the final call for a C function. For table functions, it is called after the CLOSE call to the UDF method (if NO FINAL CALL is coded or defaulted), or after the FINAL call (if FINAL CALL is coded). If a Java UDF class does not implement this function, a no-op stub function will handle and effectively ignore this event.
Immediately following the description for "public void close()", add material for the new method "public int getCallType() throws Exception", which reads as follows:
This function is used by table function UDF methods to find out the call type for a particular call. It returns a value as follows (symbolic defines are provided for these values in the COM.ibm.db2.app.UDF class definition):
In chapter 7, in the section entitled "Other Coding Considerations", and the subsection entitled "Hints and Tips", there should be a new item 15 added to the list as follows:
The error processing model for scalar UDFs which are defined with the FINAL CALL specification is as follows:
This means that if an error is returned on a FIRST call, the UDF must clean up before returning, because no FINAL call will be made.
If FINAL CALL is not specified for the scalar UDF, then only NORMAL calls are made, and they cease if an error is returned.
There should also be a new item 16 added to the list as follows:
The error processing model for table functions is defined in the "Table Functions Considerations" section of this chapter.
DB2 Connect does not support SELECT statements that select LOB columns within a compound SQL block.
The following changes related to the Embedded SQL Programming Guide have been documented in the Release Notes for FixPaks prior to this release.
In chapter 1 of the ESPG, in the first paragraph under the heading "DB2 Bind File Dump Tool-db2bfd", there is a reference to "Version 3"; this should be changed to Version 5.
In chapter 4, the section "Join Strategies in a Partitioned Database" has several subsections showing examples of the different join strategies. In the pieces of artwork, all occurrences of "LINEITEMS" should be changed to "LINEITEM". Also, the subsection "Broadcast Inner-Table Joins", has incorrect text in the example shown. The text of the example should be:
The LINEITEM table is sent to all database partitions that have the ORDERS table. Table queue q3 is broadcast to all database partitions of the outer table.
The following changes pertain to chapter 7:
short indicator1,    // input indicator
short indicator2,    // output indicator
should be changed to
short *indicator1,   // input indicator
short *indicator2,   // output indicator
02000 Only valid for the FETCH call to table functions, it means there are no more rows in the table.
If this is a table function, this field is a pointer to an array of short integers which is dynamically allocated by DB2. If this is a scalar function, this pointer is null.
other Invalid locator or other error (for example, memory error). The value that is returned for these cases is the SQLCODE corresponding to the error condition. For example, -423 means invalid locator. Please note that before returning to the UDF with one of these "other" codes, DB2 makes a judgement as to the severity of the error. For severe errors, DB2 remembers that the error occurred, and when the UDF returns to DB2, regardless of whether the UDF returns an error SQLSTATE to DB2, DB2 takes action appropriate for the error condition. For non-severe errors, DB2 forgets that the error has occurred, and leaves it up to the UDF to decide whether it can take corrective action, or return an error SQLSTATE to DB2.
-3    Bad input value is provided to the API. This includes:
      - udfloc_p (address of locator) is zero
      - return_len_p (address of where to put length) is zero
-3    Bad input value is provided to the API. This includes:
      - udfloc_p (address of locator) is zero
      - start is less than 1
      - length is negative
      - buffer_p (buffer address) is zero
      - return_len_p (address of where to put length) is zero
-3    Bad input value is provided to the API. This includes:
      - udfloc_p (address of locator) is zero
      - length is negative
      - buffer_p (buffer address) is zero
-3    Bad input value is provided to the API. This includes:
      - udfloc_p (address of locator) is zero
      - loc_type is not one of the three valid values
      - loc_p (address of where to put locator) is zero
-3    Bad input value is provided to the API. This includes:
      - udfloc_p (address of locator) is zero
This API can be used to build very large LOB values in a piecemeal manner. In cases where a large number of appends is used to build a result, the performance of this task can be improved by:
- allocating a large application control heap (APP_CTL_HEAP_SZ is the database manager configuration parameter)
- doing fewer appends of larger buffers; for example, instead of doing 20 appends of 50 bytes each, doing a single 1000-byte append.
SQL applications which build many large LOB values via the sqludf_append() API may encounter errors caused by limitations on the amount of disk space available. The chance of these errors happening can be reduced by:
- using larger buffers for the individual appends
- doing frequent COMMITs between statements
- in cases where each row of a SELECT statement is building a LOB value via this API, using a CURSOR WITH HOLD and doing COMMITs between rows (see the sketch that follows).
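As a rough, hedged sketch of the last point (the doc_keys table and build_doc UDF are invented for illustration), an embedded SQL application might declare a held cursor over the LOB-building UDF and commit after each fetched row:

   DECLARE c1 CURSOR WITH HOLD FOR
     SELECT build_doc(doc_key) FROM doc_keys
   OPEN c1
   -- FETCH from c1 one row at a time, issuing a COMMIT after each row;
   -- WITH HOLD keeps the cursor position across those COMMITs.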
Use this API to free any locators that were created with the sqludf_create_locator() API, and which were used only for internal manipulation. It is NOT NECESSARY to free locators passed into the UDF. It is NOT NECESSARY to free any locator created by the UDF via sqludf_create_locator() if that locator is passed out of the UDF as an output.
The following notes apply to the use of these APIs:
1. A UDF which is defined to return a LOB locator has several possibilities available to it. It can return:
   a. an input locator passed to it
   b. an input locator passed to it which has been appended to via sqludf_append()
   c. a locator created via sqludf_create_locator(), and appended to via sqludf_append().
2. A table function can be defined as returning one or more LOB locators. Each of them can be any of the possibilities discussed in the preceding item. It is also valid for such a table function to return the same locator as an output for several table function columns.
3. A LOB locator passed to a table function as an input argument remains alive for the entire duration of the row generation process. In fact, the table function can append to a LOB using such a LOB locator while generating one row, and see the appended bytes on a subsequent row.
4. The internal control mechanisms used to represent a LOB which originated in DB2 as a LOB locator output from a UDF (table or scalar function) take 1950 bytes. For this reason, and because there are limitations on the size of a row which is input to a sort, a query which attempts to sort multiple such LOBs which originated as UDF LOB locators will be limited to (at most) two such values per row, depending on the sizes of the other columns involved. The same limitation applies to rows being inserted into a table.
In chapter 9, in the section "Accessing DRDA Servers", the second paragraph in point 1 should read:
For IBM products, consult the IBM SQL Reference, Version 5.2 (S10J-8165), before you start coding.
The following changes pertain to chapter 15:
class ... implements COM.ibm.db2.app.UDF { ... }
should be changed to
class ... extends COM.ibm.db2.app.UDF { ... }
The same is true for "COM.ibm.db2.app.StoredProc".
COM.ibm.db2.app.Blob   public static Blob new() throws Exception;
COM.ibm.db2.app.Clob   public static Clob new() throws Exception;
These were renamed and moved to the new class COM.ibm.db2.app.Lob as the following functions:
COM.ibm.db2.app.Lob    public static Blob newBlob() throws Exception;
COM.ibm.db2.app.Lob    public static Clob newClob() throws Exception;
SQL Type           Java Type (UDF)    Java Type (Stored Procedure)
LONG VARCHAR       Clob               Clob
LONG VARGRAPHIC    Clob               Clob
These should be changed to:
LONG VARCHAR       String             String
LONG VARGRAPHIC    String             String
The following information should be added after the second paragraph in the section "Java UDFs and Stored Procedures":
When creating a stored procedure in the Java language, you must use the CREATE PROCEDURE statement to register the procedure to the system catalog table SYSCAT.PROCEDURES. For more information, refer to the section on the CREATE PROCEDURE statement in the SQL Reference.
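A hedged sketch of such a registration follows; the procedure name, parameter list, and the class and method named in the EXTERNAL NAME clause are invented for illustration, and the clause set should be checked against the CREATE PROCEDURE description in the SQL Reference:

   CREATE PROCEDURE order_summary (IN cust_id INTEGER, OUT total DECIMAL(9,2))
     EXTERNAL NAME 'OrderProcs!summarize'
     LANGUAGE JAVA
     PARAMETER STYLE DB2GENERAL
     FENCED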
The following is a limitation of the Java UDF/stored procedure support: on a mixed code page database server, Java user-defined functions and stored procedures cannot use CLOB type arguments, because random access on character boundaries on large mixed code page strings has not yet been implemented. Full support for all LOB types is intended for SBCS databases. For mixed databases, support is intended for the BLOB and the DBCLOB types. As a workaround, applications running on a mixed database system should convert CLOB arguments to DBCLOB, LONG VARGRAPHIC, or LONG VARCHAR types. For UDFs, this can be done with the CAST operator.
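For example, a UDF that cannot accept a CLOB argument on a mixed code page database might instead be invoked on a cast of the column; the function and table names below are illustrative only:

   SELECT summarize_text(CAST(description AS LONG VARCHAR))
     FROM product_docs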
In appendix B, which describes the sample programs, references (in table 24) to the loblocud, db2uext2, and autoloader samples should be removed. For additional information, also refer to 8.13.6 , Revisions to Sample Programs and 8.13.7 , Other File Changes.
In the index, java_heap_sz is incorrectly referred to as java_heap_size.
Revisions to sample programs were made for various languages on the following platforms: AIX, HP-UX, OS/2, Solaris, Windows NT, and Windows 95. The revisions correct some code errors and make better use of API calls within the programs. Some programs that were previously available only on specific platforms (for example, 'makeapi.sqc') have been made generic for all platforms. Also, the interface has been improved for several programs to better guide the user on program input, and to give more informative output. Many redundant files have had their contents deleted and replaced by the comment "This file is not needed". Where a new file replaces the redundant file's functionality, a reference to the new file is also provided.
The following is a list of the updated files. Each entry contains the name of the file, its language, and the platforms for which it has been changed. Where the same file has been revised in more than one language, one file name is given with multiple extensions, as in "dbconf.c/.cbl/.f/.for". "COBOL" refers to both IBM and Micro Focus COBOL, unless otherwise indicated. "All platforms" refers to the following platforms only: AIX, HP-UX, OS/2, Solaris, Windows NT, and Windows 95. In some situations, a file may not be available on a specific platform; if you are not sure, refer to the README file in the appropriate samples directory. The file entry also contains a comment describing the nature of the change. The changes are as follows:
Fixed memory leak and improved user interface.
The correct file version is now available for OS/2.
Fixed 'version' parameter error in API call. Improved the user interface. Recoded Micro Focus COBOL version to make it generic for all platforms.
Recoded to make generic for all platforms.
Recoded to make generic for all platforms.
Redundant files: contents deleted.
Fixed 'version' parameter error in API call.
Recoded Micro Focus COBOL version to make it generic for all platforms. The correct REXX version is now available for OS/2.
Improved user interface.
Fixed 'version' parameter error in API call.
Fixed 'version' parameter error in API call.
This program is now available for OS/2 FORTRAN.
This program is now available for OS/2 FORTRAN.
This is a client program; the server program is fillsrv.sqc. These programs have been extensively rewritten, combining the functionality of the supporting programs da_manip, system, and tabinfo (which are now redundant). The use of APIs, demonstration of SQLDA structures, and user interface have all been improved.
This is a server program; the client program is fillcli.sqc (see that entry for more information).
Redundant file: contents deleted, as compiler does not need definition file for stored procedures.
Redundant file: contents deleted.
Redundant file: contents deleted.
This program is AIX-specific: file contents were deleted on the HP-UX, OS/2, Solaris, Windows NT, and Windows 95 platforms.
The correct version is now available for OS/2.
This replaces the specific program files for AIX and OS/2 with a generic program file that is now available for all platforms.
Fixed 'version' parameter error in API call.
Fixed 'version' parameter error in API call.
Redundant file: contents deleted, as compiler does not need definition file for stored procedures.
Redundant file: contents deleted.
Redundant file: contents deleted.
Improved interface provides more information and eliminates extraneous 'load file' error.
The non-embedded SQL program rechist.c has been replaced by the embedded SQL program rechist.sqc. The new file corrects a database connect error that occurred with rechist.c.
Fixed 'version' parameter error in API call.
Redundant files: contents deleted.
Redundant files: contents deleted.
Improved the use of APIs; the program now demonstrates multiple table spaces.
Corrected error in API call to shared library.
The makefiles, where needed, have been updated to reflect the changes described in the previous section. For example, the C makefiles no longer create and link in the object files da_manip.obj, system.obj and tabinfo.obj for the stored procedure fillsrv.
The OS/2 command files for IBM COBOL and FORTRAN, and their respective makefiles, now use a stack size of 64000: the stack size link option for COBOL is now "/ST:64000", and for FORTRAN it is now "stack=64000". It was found that some of the stored procedure programs needed a larger stack size to run correctly.
The following command and batch file changes have been made:
Redundant file: contents deleted.
The correct version of this file is now available.
A remote system may not be detected if the client is on a different token ring than the server. Both the remote system and the system that initiates the discovery request must be on the same token ring in order for search discovery to work.
The TCP/IP host name returned to a client by SEARCH discovery is the same host name that is returned by the DB2 server system when a TCP/IP host name command is issued. On the client, the IP address that this host name maps to is determined by either the TCP/IP domain name server (DNS) configured on the client machine or, if a DNS is not configured, a mapping entry in the client hosts file. If multiple adapter cards are configured on the DB2 server system, ensure that TCP/IP is configured on the server to return the correct host name when a host name command is issued, and that the DNS or local client hosts file on the client maps the host name to the desired IP address.
Under the "Action" section, references to "sqllib/profile" should be changed to "sqllib/db2profile".
The following reason code for message SQL0270N has been changed:
Under "Cause": 23 REPLICATED can only be specified for a summary table. Under "Action": 23 Either remove the REPLICATED specification or ensure that the CREATE TABLE is specified in a summary table definition.
The following two reason codes have been added to message SQL0270N:
Under "Cause": 30 Typed tables and typed views cannot be defined in a CREATE SCHEMA statement. 31 A partitioning key cannot be defined with more than 500 columns. Under "Action": 30 Issue the CREATE statement for the typed view or typed table outside of the CREATE SCHEMA statement. 31 Reduce the number of columns in the partitioning key.
Message: LOB or DATALINK data in position "<position-number>" of the select-list cannot be sent using DRDA protocol.
Cause: LOB or DATALINK data, or distinct types based on LOB or DATALINK data, cannot be flowed using the DRDA protocol. "<position-number>" is the position of the element with a LOB or DATALINK data type in the select-list.
The statement cannot be processed.
Action: Ensure that LOB or DATALINK data is not being retrieved when using DRDA, by excluding it from the select-list. For LOB data, try selecting smaller pieces of the value by using CHAR(SUBSTR(CLOB)). For DATALINK data, try using a function such as DLURLCOMPLETE.
sqlcode: -351 sqlstate: 56084
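As a hedged illustration of these workarounds (the table and column names are invented), a DRDA client could retrieve a character piece of a CLOB, or the URL string of a DATALINK, instead of the LOB or DATALINK value itself:

   SELECT empno, CHAR(SUBSTR(resume, 1, 250)) FROM emp_resume
   SELECT photo_id, DLURLCOMPLETE(photo_link) FROM photo_catalog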
Message: LOB or DATALINK data in position "<position-number>" of the input-list cannot be sent using DRDA protocol.
Cause: LOB or DATALINK data cannot be flowed using the DRDA protocol. "<position-number>" is the position of the element with a LOB or DATALINK data type in the input-list.
The statement cannot be processed.
Action: Ensure that LOB or DATALINK data is not being sent when using DRDA, by excluding it from the input-list. An alternative is to try building a value indirectly using the DLVALUE function.
sqlcode: -352 sqlstate: 56084
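As a hedged example of the DLVALUE alternative (the table, column, and URL are invented), the DATALINK value can be constructed on the server side rather than flowed as input data:

   INSERT INTO photo_catalog (photo_id, photo_link)
     VALUES ('P001', DLVALUE('http://dlfmhost.mycompany.com/images/p001.gif'))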
Message: The DB2 File Manager "<server-name>" is not currently available. Reason code = "<reason-code>".
Cause: A reference to a DATALINK value required access to the DB2 File Manager "<server-name>". The DB2 File Manager is not currently available as indicated by the reason code.
01   The DB2 File Manager in a DATALINK value is not available.
02   The database server, instance, or database from which the operation was attempted is not registered with the DB2 File Manager in a DATALINK value.
03   Restart recovery is pending or in progress on the DB2 File Manager.
04   The DB2 File Manager in a DATALINK value is registered with the database but is an unknown server.
05   An error occurred while communicating to the DB2 File Manager in a DATALINK value.
The statement cannot be processed.
Action: The action depends on the reason code as follows.
01   The DB2 File Manager or a communication link may be down. Wait a while and try again, or check with a system administrator for the DB2 File Manager. If the problem persists, disconnect the application from the database, and retry after connecting again.
02   Register the database server, instance, or database with the DB2 File Manager.
03   Wait for the restart recovery to complete on the DB2 File Manager. If the problem persists, check if the registration mentioned in the text for reason code 02 above has been done. This might be one of the reasons for the restart recovery not completing successfully.
04   Check that the DB2 File Manager configured to the database is available on the network.
05   Check that the DB2 File Manager and the communication link are up. If the problem persists, disconnect the application from the database, and retry after connecting again.

sqlcode: -357 sqlstate: 57050
The description for SQL0432N has been changed to include the same error for reference types:
Message: A parameter marker cannot have the user-defined type name or reference target type name "<udt-name>".
Cause: A parameter marker in the statement has been determined to have the user-defined type "<udt-name>", or a reference type with the target type "<udt-name>", based on the context in which it is used. A parameter marker cannot have a user-defined type or reference type as its data type, unless it is part of an assignment (VALUES clause of INSERT, or SET clause of UPDATE), or it is being explicitly cast to a user-defined distinct data type, or reference data type, using the CAST specification. The statement cannot be processed.
Action: Use an explicit cast to the user-defined distinct data type, or reference data type for the parameter marker. An alternative is to cast the columns that are user-defined distinct data types to their corresponding source data type, or columns that are reference data types to their corresponding representation type.
sqlcode: -432 sqlstate: 42841
The text for SQLSTATE 42841 has been changed to:
42841 A parameter marker cannot be a user-defined type or a reference type.
Modified Cause: The maximum number of columns permitted for each table is based on the page size of the table space. The limits for tables are:
The number of columns may have exceeded the maximum allowed in a temporary table with any of the temporary table spaces available to process the statement.
New Action: If the error occurs processing a query, create a temporary table space of the largest page size supported if one does not already exist. If such a table space already exists, eliminate one or more columns from the temporary table.
Reason code 3 for message SQL0903N has been changed to:
3. One of the DB2 File Managers participating in the unit of work was unable to prepare to commit.
There is a new reason code for message SQL0911N:
72. Transaction has been rolled back because of an error concerning a DB2 File Manager involved in the transaction.
The following information has been added to the "Cause" section:
The error tokens will not be returned for CLI/ODBC applications. CLI/ODBC applications can use SQLGetDiagRec, SQLGetDiagField, or SQLError APIs to get more information about each error.
If message SQL5048N or SQL0901N is encountered when using DB2 Version 5.2 Change Password support, the most likely cause of the error is that the release level of the database client is not supported by the release level of the database server.
These messages may be returned from a CONNECT or ATTACH request for which a new password was specified and AUTHENTICATION=CLIENT is being used. If so, verify that the local database catalog entry for the named database alias does not specify AUTHENTICATION=SERVER.
If the above errors are returned from a CONNECT or ATTACH request with a new password for a DB2 Common Server Version 2 or DDCS Version 2 multi-user gateway, the local database catalog entry for the database alias must specify AUTHENTICATION=CLIENT.
SQL5156N The value of the database manager configuration parameter TRUST_ALLCLNTS must be one of NO, YES, or DRDAONLY.
Cause: The allowed values for the configuration parameter TRUST_ALLCLNTS are:
The requested change is not made.
Action: Resubmit the command with a valid value for TRUST_ALLCLNTS.
The following reason code has been added to message SQL20053N:
Under "Cause": 6. The table or view in the FROM clause of every subview in a typed view hierarchy must use OUTER if any subview uses OUTER. Under "Action": 6. If this is the first subview to use OUTER, change the FROM clause so that OUTER is not used. If the superview uses OUTER, include OUTER in the FROM clause of the subview.
Following is the updated message text for SQL20056N:
Processing on DB2 File Manager "<server-name>" encountered an error. Reason code = "<reason-code>". Additional diagnostic information "<diag-token>".
Note: | Currently "<diag-token>" will always be set to "UNKNOWN". |
The following reason code for change password support, when it fails because of an invalid user name or password, has been added to message SQL30083N:
Under "Cause": 24. Username and/or password invalid. This is returned when a change password fails due to username or original password errors.
When you install DB2 with documentation on an NEC PC98 running Windows 95, the installation stops at 96%. To complete the installation:
If you have installed DB2 with documentation on an NEC PC98 running Windows 95, the uninstall operation stops before it has completed. To complete the uninstall operation:
Since HP is discontinuing support for NETWARE products, IPX/SPX is not supported on HP-UX Version 11.0.
PTF (Program Temporary Fix) 3 for ADSTAR Distributed Storage Manager (ADSM) Version 3 is required for an OS/2 client to work with DB2.
If you are running DB2 UDB for OS/2 on OS/2 Warp Server, Version 4 in a Double Byte Character Set (DBCS) environment, double-clicking on the Control Center icon returns system error SYS3175. However, after the error occurs once, if you double-click on the icon again, the Control Center opens.
To stop the SYS3175 message from coming up in the first place, install OS/2 FixPak WX03005 or a later version.
Two phase commit using SNA is not supported when connecting from DB2 for AS/400 to DB2 Universal Database Version 5.2 for AIX. It is supported when connecting from DB2 for AS/400 to DB2 Universal Database Version 5.2 for OS/2 and Windows NT.
The TP Name must be unique among DB2 instances residing on the same machine. If more than one DB2 instance is listening on the same TP Name, both APPC and TCP/IP will assign the first connection to the correct DB2 instance and will assign the second connection to the incorrect DB2 instance.
When accessing a DB2 Universal Database server from a DB2 for OS/390 host the following DB2 for OS/390 PTFs are required:
The ability to change a password at a DB2 server from a DB2 client is limited to the following DB2 servers:
Chapter 15 of the Quick Beginnings for UNIX** book refers to the Nodelock Administration Tool (NAT), which is part of the License Use Management Administration Runtime Kit (LUM ARK). To count concurrent users on AIX, the LUM ARK must be installed on your system with the NAT. On some versions of LUM ARK, the NAT is in an optional file set. Please refer to your LUM documentation for more information. You can visit the IBM License Use Management Web site at:
http://www.software.ibm.com/is/lum
to download the latest LUM ARK and publications. For your convenience, a copy of the LUM ARK is included on the DB2 CD in the /cdrom/LUM directory.
The Quick Beginnings for DB2 Extended Enterprise Edition for Windows NT book states that a domain user name is required to create a database partition server, and is used to log on when DB2 is started as a Windows NT service. Note, however, that if raw devices are used, the domain user name must also have local administrative authority, or the user will not have access to the raw devices.
When running the db2start command (or using the NET START command) to start the database manager in the Windows NT environment, the command will not return any warnings if a communication subsystem fails to start. The user should always examine the NT Event Log or the DB2DIAG.LOG for any errors that may have occurred during the running of db2start. This occurs because the database manager on Windows NT is implemented as an NT service, and hence returns no error as long as the service itself starts successfully.
As a result of optimizer enhancements, the plan for any query could change, if rebound. Although every precaution is taken to ensure that the changed plan is at least as good as the plan chosen by the previous version, there is no guarantee of that, since certain changes could cause a different plan to be chosen, and it is not possible to test the virtually unlimited number of scenarios that a user could create.
It is recommended that an EXPLAIN be run on all critical queries before migrating versions, in case those queries get rebound to a new plan. If this is done, both the plan and the configuration parameters upon which it is based (under the old release) will be known. This, in turn, will help the user identify the source of any differences under the new version, and restore the conditions for the old plan, if desired.
The following information on the db2upd52 command needs to be added to the Quick Beginnings books that were not refreshed for DB2 Version 5.2. The books are:
DB2 Extended Enterprise Edition for UNIX Quick Beginnings DB2 Personal Edition Quick Beginnings Quick Beginnings for OS/2 Quick Beginnings for Windows NT
Updating Catalog Table Data
Note: | You only need to run the db2upd52 command if you are upgrading from Version 5 to Version 5.2 of DB2. If you upgrade from Version 2 to Version 5.2 of DB2, database migration will update the data types and the SYSFUN schema. |
For DB2 Version 5.2, the SYSIBM.SYSDATATYPES catalog table includes a new data type, BIGINT, and associated functions have been added to the SYSFUN schema. To support these new features, we recommend that you run the db2upd52 command for each database owned by a DB2 instance being upgraded to Version 5.2. The command adds the new SYSFUN function signatures that support the BIGINT data type, as follows:
create function sysfun.abs(bigint) returns bigint ...
create function sysfun.absval(bigint) returns bigint ...
create function sysfun.ceil(bigint) returns bigint ...
create function sysfun.ceiling(bigint) returns bigint ...
create function sysfun.floor(bigint) returns bigint ...
create function sysfun.mod(bigint,bigint) returns bigint ...
create function sysfun.power(bigint,bigint) returns bigint ...
create function sysfun.round(bigint,integer) returns bigint ...
create function sysfun.sign(bigint) returns bigint ...
create function sysfun.trunc(bigint,integer) returns bigint ...
create function sysfun.truncate(bigint,integer) returns bigint ...
The db2upd52 command also adds the REAL data type to the SYSIBM.SYSDATATYPES catalog table and the function signature for UCASE (SYSFUN.UCASE) to the SYSIBM.SYSFUNCTIONS table, if it is missing. For information on the syntax of the db2upd52 command, refer to the Command Reference.
For more information about the db2upd52 command, refer to Quick Beginnings for DB2 Universal Database for UNIX Version 5.2, Chapter 6, "Upgrading from DB2 Release 5.0 to Release 5.2".
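One way to spot-check that the new signatures are present after running db2upd52 is to query the catalog; this is only a suggested check, assuming the SYSCAT.FUNCTIONS catalog view:

   db2 "select funcschema, funcname from syscat.functions where funcschema = 'SYSFUN' and funcname in ('UCASE', 'ABS', 'POWER')"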
If you are using Lotus Domino Go webserver and get the following error:
java.sql.SQLException: DB2 Driver error allocating environment handle, rc=-1
Restart the web server and specify the db2instance, as follows:
startsrc -e "DB2INSTANCE=yourInstanceName" -s httpd
Also ensure that CLASSPATH and LD_LIBRARY_PATH are set correctly in the httpd.conf file (using the directives JavaClassPath and JavaLibPath respectively). Ensure that they point to the UDB Version 5.2 files that are required.
DB2 does not support Java stored procedures and Java user-defined functions (UDFs) accessing DB2 databases on HP-UX and SCO UnixWare servers.
The abbreviated name of the Brazilian Portuguese locales for DB2 for Solaris and DB2 for SCO UnixWare 7 has been changed from "pt" to "pt_BR".
Client Configuration Export Profiles created at clients with FixPak 7 installed will only be importable at clients which also have FixPak 7 installed.
DB2 Data Links Manager (formerly known as "DB2 File Manager") can only be used with DB2 for AIX Enterprise Edition. The Data Links Manager cannot be used with databases on operating systems other than AIX, and cannot be used with DB2 Enterprise - Extended Edition (EEE) for AIX. DATALINK values cannot point to a DB2 EEE system; moreover, the DATALINK data type cannot be used on EEE systems to reference files on a DB2 Data Links Manager.
The DLFM administration user ID must be "dlfm" for this release.
An SQL SELECT on a DATALINK column which was defined with the "READ PERMISSION DATABASE" option will return the URL or file name with an access token embedded within the name. This access token has an expiration time, as defined by the DL_EXPINT database configuration parameter. When using an NFS client to access such files, you may find that the access token does not expire appropriately. This can be due to caching done by the NFS client. It has been observed that the use of long file names (greater than 12 characters) may mitigate this problem.
Whenever you remotely mount a DLFS file system via NFS, you may also want to include the mount option "noac". For example:
mount -o noac birdcage:/dlink /dlink
Datalinks clients cannot be:
The command for DLFM setup has been changed to accept a dlfm_db configuration file. If the configuration file contains the setup options, the configuration options in the file are used instead of the defaults. The file is optional. If not specified, the defaults are used. The command syntax is now as follows:
dlfm setup [<dlfm_db_config_file>]
Following is a sample configuration file named dlfm_db.cfg:
# Sample dlfm_db.cfg file for DLFM setup
# Please note: the equal sign is needed between the option name and the value.
# Any of the updateable DB2 database configuration parameters can be included in this file.
# There are also three special DLFM-specific options which the user can specify:
#   DLFM_DB_DIR        : directory where the dlfm_db database is to be created
#   DLFM_DB_CATTBL_DIR : directory where the catalog table space is to be located
#   DLFM_DB_TMPTBL_DIR : directory where the temporary table space is to be located
# If there are no values for these options, or the options are not present,
# default values are used.
####################
DLFM_DB_DIR=
DLFM_DB_CATTBL_DIR=
DLFM_DB_TEMPTBL_DIR=
LOCKTIMEOUT=60    # Default and recommended value for dlfm_db is 60
LOGFILSIZ=2000    # Default and recommended value for dlfm_db is 2000
LOGPRIMARY=6      # Default and recommended value for dlfm_db is 6
MAXAPPLS=128      # Default and recommended value for dlfm_db is 128
MAXLOCKS=100      # Default and recommended value for dlfm_db is 100
This section describes how to perform recovery after a disk crash on the File Manager Server, and the backup strategies that the administrator must undertake to support such disaster recovery scenarios.
Note: | In the case of a machine crash on a DB2 File Manager, DB2 applications interacting with the DB2 File Manager can get hung. Use the FORCE APPLICATION command to force such applications off the system. In the case of a machine crash on a DB2 server, the affected DB2 File Managers should also be shut down, using the "dlfm shutdown" command. They can then be brought up again. |
If the disk containing user data crashes, all user files, along with the directory hierarchy of the file system, are destroyed. To recover from such a scenario, the administrator must make periodic backups of the file system containing the user data and directory hierarchy, so that it can be restored. The restored file system must preserve directory and file ownerships, and time stamps. After restoring the file system, the directory structure must be brought up to the point-in-time of the crash by applying the directory changes that occurred after the file system backup was taken. After this step, the RECONCILE command must be run on all tables containing files on the damaged disk. A utility, "db2_reconcile_aid" (located in sqllib\misc), is provided to simplify this task, and is described below.
Notes:
Backing up and Restoring the File System
The following instructions are specific to AIX Version 4.1.5, and may not apply to other operating systems. Users must be logged in as root to perform backup or restore operations.
Backup
The backup should be made in Version 3 inode format:
System Storage Management --> File Systems --> Unmount a File System
System Storage Management --> File Systems --> Backup a File System
An incremental backup strategy could be used to reduce recovery time. Level 0 refers to a full backup, and levels 1 through 9 refer to incremental backups. A level n backup backs up only those files that have changed since a level (n-1) backup. After a level n backup, the next backup to be taken will be a level (n+1) backup.
A backup can also be invoked from the command line:
% backup -<level> -uf <backup device> <filesystem>

Examples:
For a full backup use:
   % backup -0 -u -f/dev/rmt0 /dlfs1
For the next incremental backup use:
   % backup -1 -u -f/dev/rmt1 /dlfs1
The incremental backup after this will be:
   % backup -2 -u -f/dev/rmt2 /dlfs1
System Storage Management --> File Systems --> Mount a File System
Restore
A restore should be performed using one of the backups taken previously:
System Storage Management --> File Systems --> Mount a File System
System Storage Management --> File Systems --> Restore a File System
A restore can also be invoked from the command line. From the directory where the restore is to be done (that is, the mount point for the file system device), issue the following command:
% restore -r -f <backup device>
If an incremental backup strategy is being used, restore the newest level 0 backup, then restore the level 1 backup taken after the level 0 backup. Continue in this way, restoring each level n backup taken after the level (n-1) backup. Ensure that the file "restoresymtable" in the root of the file system remains intact between restorations of incremental backups.
Attention: Do not use the "-M" option during restoration. The "-M" option changes the time stamp of the files to the time that the file system was restored.
If the backup or restore operation is being done through ADSM, a full backup by inode needs to be made after a restore operation to overwrite any previous backups.
Dedicated media should be used when making backups.
Restoring Only the Directories
The user can interactively restore files using the "-i" option on the restore command:
% restore -i -f <backup device>
This provides the user with a list of directories or files in the backup, and takes the user into an interactive shell. Specific directories can be restored using the "-h" option.
Bringing the File System Directory Hierarchy to the Current Point in Time
The directory changes are logged in the file (err_log)/fsysadm.log, where (err_log) is the error log directory as specified in the /etc/syslog.conf file. This file is appended to, and will not be truncated. There is one entry for each event. Setting the attributes of a file is also logged. The format of the entry is:
Time = <timestamp> EUID = <integer> UID = <integer> GID = <integer> Mode = <octal>
Action = <CREATE/REMOVE/SETATTR> Object type = <DIR/FILE> Path = <fully qualified name>

where:
   Time          is the time of the activity in local time.
   EUID          is the effective user ID of the user performing the action.
   UID and GID   are the user ID and group ID attributes of the file/directory that was created, or whose attributes were modified.
   Mode          is the octal representation of the mode of the file/directory.
   Action        can be CREATE (the file/directory was created), REMOVE (the file/directory was removed), or SETATTR (the mode of the file/directory was modified by the user).
   Object type   can be DIR (directory), or FILE (file).
   Path          is the fully qualified path of the file/directory.
Running RECONCILE After Restoring A File System
The db2_reconcile_aid utility provides a mechanism for checking and running RECONCILE on tables that are potentially inconsistent with the DATALINK file data on the file server after a disk failure on the file server. The syntax is as follows:
db2_reconcile_aid -check -db <db_name> [-server_name <fileserver_name>] [-reportdir <report_directory>]

where:
   -check              List the tables that may need reconciliation. No reconcile operation will be performed.
   db_name             Name of the database for which the reconcile operation needs to be performed.
   fileserver_name     Name of the DLFM server for which the reconcile operation is to be performed. If no name is provided, all the file servers will be reconciled.
   report_directory    Directory containing a report for each of the reconcile operations. For each table on which reconcile was performed, files of the following format will be created: <tbschema>.<tbname>.<ext>, where <tbschema> is the schema of the table, <tbname> is the table name, and <ext> is .ulk or .exp. The .ulk file contains a list of files that were unlinked on the file server, and the .exp file contains a list of files that were in exception on the file server.

For more information, see the description of the RECONCILE command in the Command Reference.
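For example, against a hypothetical database named crown, the utility might first be run in check mode, and then without -check to perform the reconcile operations as described above; the host name and report directory below are illustrative only:

   db2_reconcile_aid -check -db crown
   db2_reconcile_aid -db crown -server_name dlfmhost.mycompany.com -reportdir /tmp/recon_reports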
This parameter specifies the number of the most recent DB2 backups (both database and table space) for which a File Manager keeps backup information. Files that are unlinked from the database are garbage collected based on this value. A value of one (1) means that the unlinked files are garbage collected at the completion of the next DB2 backup.
The default value for this parameter is one (1) backup.
This parameter applies to the DATALINK columns which specify "Recovery=Yes".
As with any system, it is possible for a failure to occur that requires a certain level of recovery. DB2's File Manager introduces a new level of complexity to several of the recovery scenarios. This section lists several possible failure scenarios and the steps required to recover from them.
Definitions
DLFS file system        - Registered prefix (/dlink)
DLFM backup directory   - Directory where files are backed up (/home/dlfm/dlfm_backup)
DLFM home directory     - Home directory of DLFM user ID (/home/dlfm)
DLFM DB2 database       - DB2 database that contains all meta-data (DLFM_DB)
DB2 database            - Registered database that contains DATALINK data type (CROWN)
Recommended Things to Do Prior to a Crash
Place the DLFM database (DLFM_DB), the DLFS file systems, the DLFM backup directory, and the DLFM home directory on different file systems. Ensure they do not share disks.
Back up the DLFS file system and the DLFM backup directory on a storage manager, such as IBM's ADSM. This will afford added protection from disk failure, in the event one occurs.
Have one database associated with one or more DB2 File Managers. Avoid having two databases associated with one DB2 File Manager; otherwise, certain recovery scenarios will become more complex than is necessary.
Perform full database backups of the DB2 databases (as well as the DLFM_DB database on the DB2 File Manager) at regular intervals.
What Can Fail?
Following are four possible failure components which require recovery:
DB2 File Manager recovery scenarios will be discussed for each of these components.
The following recovery scenarios may require operations to be performed on both the DB2 node and the DB2 File Manager node. Each step will be preceded by either "DB2:" or "DLFM:" to indicate the node on which the operation must be completed.
The DB2 database will be referred to as "crown" throughout all examples, and the DB2 table containing the DATALINK column will be referred to as "datalinktable".
Scenario #1a:
Failure:
DB2 database is lost or dropped; DB2 backup and log files are available.
Recovery Steps:
NODE   OPERATION
DB2:   db2 "restore database crown"
DB2:   db2 "rollforward database crown to end of logs"
       (End of logs is always recommended over point-in-time.)
DB2:   db2 "reconcile table datalinktable"
Scenario #1b:
Failure:
DB2 database was explicitly dropped; DB2 backup and log files are available.
Recovery Steps: Ensure that the drop database operation is complete, and that all files associated with that database have been unlinked.
NODE   OPERATION
DLFM:  db2 "connect to dlfm_db"
DLFM:  db2 "select dbid, dbname, dbinst, hostname from dfm_dbid"
DLFM:  db2 "update dfm_dbid set action=5 where dbid=x'35B3D7BE0006BF7B'"
       (Set dbid equal to value found in previous step.)
DB2:   db2 "restore database crown"
DB2:   db2 "rollforward database crown to end of logs"
       (End of logs is always recommended over point-in-time.)
DB2:   db2 "reconcile table datalinktable" (Mandatory)
Scenario #2:
Failure:
DLFM_DB is lost; backup, and all log files for the DLFM_DB database are available.
Recovery Steps:
NODE   OPERATION
DLFM:  db2 "restore database dlfm_db"
DLFM:  db2 "rollforward database dlfm_db to end of logs"
Scenario #3:
Failure:
DLFM_DB is lost; backup of the DLFM_DB database is available, but not all log files are available.
Recovery Steps:
NODE   OPERATION
DLFM:  db2 "restore database dlfm_db"
DB2:   db2 "reconcile table datalinktable"
Scenario #4:
Failure:
DLFS file system is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  Restore the DLFS file system from your storage manager.
DB2:   db2_recon_aid crown
Scenario #5:
Failure:
DLFM backup directory is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  Restore the DLFM backup directory from your storage manager.
Scenario #6:
Failure:
DLFS file system is lost; DLFM backup directory is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  Restore the DLFM backup directory from your storage manager.
DLFM:  Restore the DLFS file system from your storage manager.
DB2:   db2_recon_aid crown
Scenario #7:
Failure:
DLFM_DB is lost; backup, and all log files for the DLFM_DB database are available. DLFS file system is lost; DLFM backup directory is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  db2 "restore database dlfm_db"
DLFM:  db2 "rollforward database dlfm_db to end of logs"
DLFM:  Restore the DLFM backup directory from your storage manager.
DLFM:  Restore the DLFS file system from your storage manager.
       (You need the directory structure.)
DB2:   db2_recon_aid crown
Scenario #8:
Failure:
DLFM_DB is lost; backup of the DLFM_DB database is available, but not all log files are available. DLFS file system is lost; DLFM backup directory is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  db2 "restore database dlfm_db"
DLFM:  Restore the DLFM backup directory from your storage manager.
DLFM:  Restore the DLFS file system from your storage manager.
       (You need the directory structure.)
DB2:   db2_recon_aid crown
Scenario #9:
Failure:
DB2 database is lost; DLFM_DB is lost; backup, and all log files for the DLFM_DB database are available. DLFS file system is lost; DLFM backup directory is lost.
Recovery Steps:
NODE   OPERATION
DLFM:  db2 "restore database dlfm_db"
DLFM:  db2 "rollforward database dlfm_db to end of logs"
DLFM:  Restore the DLFM backup directory from your storage manager.
DLFM:  Restore the DLFS file system from your storage manager.
DB2:   db2 "restore database crown"
DB2:   db2 "rollforward database crown to end of logs"
       (End of logs is always recommended over point-in-time.)
DB2:   db2 "reconcile table datalinktable"
Note: | Rollforward to a point-in-time (PIT) may not put tables that have all DATALINK columns defined as "recovery = no" into DataLink_Reconcile_Pending (DRP) state. For all such tables, insert a row with the DATALINK value after the rollforward to a PIT, issue a SET CONSTRAINTS statement to put the table in DRP, and then run the reconcile utility. |
References to the command:
db2 add file manager <hostname> <portnum>
should be replaced by references to a new utility called db2dlcfg. The syntax of this utility is as follows:
db2dlcfg -d <database name> -a <action> [-s <server host name>] [-p <port number>] -h

where:
   database name      Represents the name of the database (mandatory).
   action             Represents the action that is to be performed (mandatory):
                      LIST - List all DB2 File Managers defined for this database.
                      ADD  - Add a single DB2 File Manager for access by this database.
   server host name   Represents the DLFM File Manager's fully qualified host name.
   port number        Represents the DLFM File Manager's port number.
   -h                 Displays help information. When this option is specified, all other options are ignored and only the help is displayed.
For example:
db2dlcfg -d SAMPLE -a LIST
db2dlcfg -d SAMPLE -a ADD -s myhost.mydomain.mycompany.com -p 10001
If moving from a previous version of DB2 File Manager, do the following:
For example:
cat datalink.cfg
   1 myfs1.mydomain.mycompany.com 10001
   2 myfs2.mydomain.mycompany.com 10001
   4 myfs2.mydomain.mycompany.com 10001
(Note that the third one is missing.)

mv datalink.cfg datalink.old
db2dlcfg -d SAMPLE -a ADD -s myfs1.mydomain.mycompany.com -p 10001
db2dlcfg -d SAMPLE -a ADD -s myfs2.mydomain.mycompany.com -p 10001
db2dlcfg -d SAMPLE -a ADD -s nullfs.mydomain.mycompany.com -p 10001
db2dlcfg -d SAMPLE -a ADD -s myfs4.mydomain.mycompany.com -p 10001
(Note that the third one is null.)
The additional information in this section relates to the following DB2 Version 5.0 Quick Beginnings manuals:
Reference is made in the DB2 manuals to the DB2 Application Developer's Kit. This product is now available in two editions, known as the IBM DB2 Personal Developer's Edition Version 5, and the IBM DB2 Universal Developer's Edition Version 5 products. Both editions are available on OS/2, Windows NT, Windows 95, and Windows 98 only.
The updates in this section relate to software prerequisites and fixes.
In the "Server Product Requirements" section, the following operating system information should be changed for the DB2UDB Enterprise and Workgroup Edition:
From:                  To:
--------------------   ---------------------------------------------
OS/2 Warp Server SMP   OS/2 Warp Server Advanced V4 with SMP Feature
A similar change must be made in the "Client Product Requirements" section for the DB2 Client Application Enabler for OS/2, the SDK and Net.Data:
From:                  To:
--------------------   ---------------------------------------------
OS/2 Warp Server SMP   OS/2 Warp Server Advanced V4 with SMP Feature
Using IPX/SPX to communicate between Windows 95, Windows 98, or OS/2 clients and servers is not recommended. This connection is unreliable. Working with Novell, we have determined the cause to be at the IPX/SPX protocol stack level. The problem lies with the level of SPX I supported by Windows 95 and Windows 98, and the level of SPX I supported by OS/2. When Windows 95 or Windows 98 negotiates a connection, it specifies an SPX I level connection, but on Windows 95 or Windows 98, this includes SPX II level full-media size support. However, when OS/2 negotiates an SPX I connection, full-media size is not supported.
Novell is currently developing full-media size support for OS/2 SPX I connections, for a consistent representation of SPX I connections. This will prevent connection terminations from occurring when an unexpected packet size is sent. If you would like the status of this development, you can call Novell and ask for the status of SPD147611.
The"Netfinity Support Program" service will affect the installation of DB2 UDB EEE for Windows NT when upgrading DB2 UDB.
In order to upgrade DB2 UDB EE to DB2 UDB EEE, the "Netfinity Support Program" service should be stopped before the installation of DB2 UDB EEE is begun.
When using DB2 Connect Enterprise Edition for Windows NT and IBM Communications Server for Windows NT (CS/NT), in order to enable the canceling of host queries (SQL CANCEL), you will require DB2 UDB Version 5.2 (or DB2 UDB Extended Enterprise Edition for Windows NT), and the APAR fixes JR11529 and JR11170 for CS/NT.
When using a DB2 UDB V5.0 Client Application Enabler, problems can arise when using the Control Center to attach to an Extended Enterprise Edition (EEE) partitioned database instance.
The Control Center originally provided with DB2 Connect and UDB Version 5 did not include the administrative functions required to administer partitioned database systems.
As a result, it is necessary to upgrade DB2 clients in order to perform UDB EEE administrative functions. Possible indications that this is necessary include the following:
If you have multiple logical nodes per single physical node, then the backup and restore commands are not supported from DB2 Version 2 clients.
In general, if you want to use the Control Center to administer a DB2 UDB EEE server, then you should upgrade the Client Application Enabler. The fix for this problem was first supplied in the first FixPak for DB2 UDB V5.0:
AIX           U452196
Solaris       U452197
HP-UX         U452198
OS/2          WR09003
Windows NT    WR09004
Windows 3.1   WR09006
These fixes are obtainable from: http://www.software.ibm.com/data/db2/db2tech. The correct client level can also be obtained from the client pack that is included with DB2 UDB and DB2 Connect Enterprise Edition Version 5.2.
DB2 UDB FixPaks are cumulative. They include fixes in FixPaks 1, 2, 3, 4, 5, and 6, as well as in the Client Application Enabler included with DB2 UDB EEE for Windows NT, and for Solaris, Version 5.
When using ODBC applications from Windows NT and Windows 95 clients, we recommend that you upgrade the DB2 Client Application Enabler to fix pack level WR09014 or later.
This fix is obtainable from: http://www.software.ibm.com/data/db2/db2tech. The correct client level can also be obtained from the client pack that is included with DB2 UDB and DB2 Connect Enterprise Edition Version 5.2.
In an AIX partitioned database environment, after an online table space backup has been taken using ADSM, DB2 may fail if the database is restored.
The symptom is that the online restore fails, and the DB2 instance is brought down (traps) with the diagnostic message SIG11sqloEDUCodeTrap. The solution to this problem is to install the ADSM Client fix pack 2.1.0.6 or higher.
Two-phase commit using SNA is now supported on Windows NT when using the DB2 Syncpoint Manager (SPM) function together with IBM Communications Server V5.01 for Windows NT (CS/NT).
The SNA-based version of the SPM is part of the following products:
The SPM permits DB2 UDB client applications to access host data through DB2 Connect EE with SNA two-phase commit data protection provided by the SPM. Similarly, host database applications can access data on DB2 UDB servers.
After installing IBM Communications Server for Windows NT (CS/NT) you should register it as a Windows NT Service. This will automatically start CS/NT when the machine is booted.
To register CS/NT as a Windows NT service, execute the following command. Either:
csstart -a
To autostart CS/NT with the default configuration file, or:
csstart -a c:\ibmcs\private\your.acg
Where c:\ibmcs\private\your.acg is the name of the non-default CS/NT configuration file that you wish to use.
The next time your machine is rebooted CS/NT will be started automatically with the requested configuration file.
DB2 Universal Database and DB2 Connect Version 5.2 provide support for:
Two new DLLs have been provided:
By default, db2dce.grd has been copied to db2dce.dll.
Note: | DB2 will only load db2dce.dll. You must copy dcedce.ibm to db2dce.dll if you want to use IBM DCE CDS. |
In addition to the existing support using APPC, IPX/SPX, NETBIOS, and TCP/IP, support for DCE using the Named Pipes (NPIPE) networking protocol is now provided. A control statement similar to the following example may now also be specified for a DCE object having the DB_Communication_Protocol attribute, with value:
NPIPE;Computer_name;Instance_name
For more information, see Appendix G of the Administration Guide.
For logging to raw devices with DB2 on Windows NT, Microsoft Service Pack 3 for Windows NT 4.0 is required.
A problem where Windows NT committed memory would rise to equal real memory has been fixed in Microsoft Service Pack 4.
In the Quick Beginnings manuals for UNIX, when discussing security issues, it is assumed that the user has not configured the SYSADM_GROUP parameter in the database manager configuration file. If the SYSADM_GROUP parameter is not configured, then the SYSADM group defaults to the primary group of the instance owner.
The Quick Beginnings manuals always refer to the primary group of the instance owner as having SYSADM authority. However, if you have configured the SYSADM_GROUP parameter, you should interpret those references as referring to the SYSADM group that you have defined in the database manager configuration file.
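As a hedged example, the SYSADM group can be set and verified with standard CLP commands such as the following (the group name dbadmgrp is invented for illustration); the instance typically must be stopped and restarted for the change to take effect:

   db2 update dbm cfg using SYSADM_GROUP dbadmgrp
   db2 get dbm cfg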
The updates in this section relate to installation and configuration tasks.
IBM Communications Server for Windows NT Version 5.01 is required if using the SPM over SNA. The SPM is also supported over TCP/IP in DB2 Version 5.2. Chapter 22 of the "DB2 Connect Enterprise Edition Quick Beginnings" book describes SPM setup.
The following additional instructions assume that you have already configured CS/NT for DB2 Connect. Refer to Chapter 14 of the DB2 Connect Enterprise Edition Quick Beginnings book.
LOCAL_LU=(
   LU_NAME=SPMNAME
   LU_ALIAS=SPMNAME
   LU_SESSION_LIMIT=0
   NAU_ADDRESS=0
   ROUTE_TO_CLIENT=0
   SYNCPOINT_SUPPORT=0
)
To verify that syncpoint support has been enabled for this LU check the "Local LU6.2" information on the CS/NT SNA Node Operations. The LU should have "Yes" listed in the "Syncpoint Support" column.
update dbm cfg using diaglevel 4
You need to set this in order to be able to view possible SPM-related diagnostic messages.
Note also that when the SPM is started for the first time a directory \sqllib\spmlog is created. If this directory does not exist, or if the diagnostics indicate that the SPM has failed to start, then you must recheck your DB2 and CS/NT configurations.
The Quick Beginnings manual for DB2 Connect Enterprise Edition will be updated to include this additional information.
Note: | This section is updated from the Version 5.2 Release Notes with the inclusion of the AIX Version 4.1.4 restriction. The AIX Version 4.1.5 restriction was previously documented. |
On AIX Versions 4.1.4 and 4.1.5, when a user is running Network Information System (NIS) for password authentication, the message DB21015E may be received when the command line processor times out.
The command line processor back-end program cannot recognize the user ID of the DB2 instance from the NIS master server in this case. To correct this problem:
Extended SNA Security Codes were not implemented by some SNA subsystem providers. This may affect customers using one or more of the following:
A new DLL, called db2cpic2.dll, has been provided in order to support Extended SNA Security Codes in these environments. The new DLL was originally packaged with FixPak 1 for DB2 Universal Database Version 5. It is identical to the original DLL db2cpic.dll, except that it includes a call to extract secondary information (cmesi()).
Table 4. Which version of db2cpic.dll to use?
If you have this SNA subsystem | On Windows NT use | On Windows 95 use |
---|---|---|
IBM Communications Server for Windows NT | db2cpic2.dll (new) | db2cpic.dll (old) |
IBM Personal Communications for Windows (see note below) | db2cpic2.dll | db2cpic.dll |
Microsoft SNA Server | db2cpic.dll | db2cpic.dll |
Wall Data RUMBA (also Integrated SNA Support provided with DB2 Connect Personal Edition) | db2cpic2.dll | db2cpic2.dll |
If DB2 fails to load db2cpic2.dll, you can do the following in the directory where it has been installed:
copy db2cpic2.dll db2cpic2.bak
copy db2cpic.dll db2cpic2.dll
This will make db2cpic2.dll the same as db2cpic.dll.
Note: | If you wish to use Extended Security Support with DB2 UDB Version 5.0 on Windows 95, we recommend using IBM Personal Communications (PComm) for Windows 95 Version 4.2 (not Version 4.1). Copy db2cpic.dll to db2cpic.bak and copy db2cpic2.dll to db2cpic.dll. This will make db2cpic.dll the same as db2cpic2.dll. |
In "Controlling Your DB2 Environment" in the Quick Beginnings manual for DB2 UDB Extended Enterprise Edition for Windows NT, it is incorrectly stated that "DB2NTMEMSIZE is used to override the upper limit of the DB2 shared memory segment".
Windows NT requires that all shared memory segments be reserved at DLL initialization time in order to guarantee matching addresses across processes. A new profile variable, DB2NTMEMSIZE, has been introduced to permit the user to override the DB2 defaults on Windows NT if necessary. In most situations the default values should suffice.
The following is the list of memory segments and their default sizes:
Memory Segment        Default Size        Override Option
------------------    ----------------    ----------------------
Database Kernel       16777216 (16MB)     DBMS:[number of bytes]
MPP FCM Buffers       33554432 (32MB)     FCM:[number of bytes]
Database Admin GUI    33554432 (32MB)     DBAT:[number of bytes]
Fenced DARIs          16777216 (16MB)     APLD:[number of bytes]
More than one segment may be overridden by separating the override options with ':'.
For example, to limit DBMS to (approximately) 256K, and FCM to (approximately) 64MB, use:
db2set DB2NTMEMSIZE=DBMS:256000:FCM:64000000
On OS/2, when you resize the Performance Graph View of the Snapshot Monitor to a smaller size, the Control Center may shut down without any error message.
You cannot perform an action that alters, changes, or deletes a database object (such as an instance or database) while you are performing snapshot monitoring on either that object or on any of its child objects. (In addition, if you are monitoring a partitioned database system, you cannot refresh a view of partitioned database objects.) For example, you cannot monitor database A if you want to remove its instance. If, however, you are monitoring the instance only, you can alter database A.
To stop all monitoring for an instance (including any of its child objects), select "Stop all monitoring" from the pop-up menu for the instance. You should always stop monitoring from the instance, as this ensures that all locks that are held by the performance monitor are released.
Information on using Java with DB2 (JDBC for instance) is located on the internet at:
http://www.software.ibm.com/data/db2/java/
The following information should be added to the tip that is at the end of Step 1, "Prepare for Installation," in Chapter 6, "Installing DB2 Universal Database Extended Enterprise Edition," of the "IBM DB2 Universal Database Extended Enterprise Edition for AIX Quick Beginnings" book. The tip describes how to ensure that network-related parameters are set on all SP nodes.
After the note that specifies "You can use the no -a command to view the current setting for network parameters," add the following paragraph:
The above values are minimum values for the parameters. If you are already setting any of the network-related parameters to a larger value, do not set the parameter to the lower value listed above.
After the sentence that begins "To view the configuration for css0," add the following paragraph:
You should only use the following method to set the network-related parameters if you are not using the /tftpboot/tuning/cst file to tune the system.
The following information should be added to Step 8, "Setting up the Client Configuration Assistant and the Control Center" in Chapter 6, "Installing DB2 Universal Database Extended Enterprise Edition," of the "IBM DB2 Universal Database Extended Enterprise Edition for AIX Quick Beginnings" manual.
For performance reasons, it is not recommended that you set discover=SEARCH on the administration server that resides on a key catalog node or Control Center node.
There are additional ways to distribute client requests amongst different coordinator nodes. You can use the Client Configuration Assistant IMPORT/EXPORT or manual catalog functions to distribute the workload. Another alternative is to use the LoadLeveler product to distribute client requests transparently.
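For example (a sketch with hypothetical host, node, port, and database names), a given client could be pointed at a different coordinator node simply by cataloging that node and then cataloging the database through it:

db2 catalog tcpip node coord2 remote sphost2 server 50000
db2 catalog database sample at node coord2
db2 terminate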
You cannot use the Server Communications Setup function in the Control Center to perform remote setup of nodes (database partition servers) in the partitioned database environment, if there is no administration server running on the node.
Discovery is supported using the TCP/IP protocol.
IPX/SPX file server addressing is not supported in the partitioned database environment.
In the "Verify that Databases Can Be Migrated" section, it is stated that DB2 must be stopped to prepare the instance for migration. This is incorrect. In order to successfully run the "db2ckmig" tool, the database manager must be running. In the same section, change the following:
DB2DIR/instance/db2ckmig -e -a 0 -1 INSTHOME/migration.log
to:
DB2DIR/bin/db2ckmig -e -a 0 -l INSTHOME/migration.log
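In other words (a sketch, using the same DB2DIR and INSTHOME placeholders as above), the instance should be started before the check is run:

db2start
DB2DIR/bin/db2ckmig -e -a 0 -l INSTHOME/migration.log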
In the "Migrate the DB2 Instance" section, the "db2imigr" command syntax is shown with "-u fencedID" as an optional parameter. The -u option is not required only when the DB2 Client Application Enabler product is installed; however, for all other DB2 products, the -u option is required.
Simple Network Management Protocol (SNMP) products include IBM NetView for AIX and IBM NetView for OS/2. DB2 provides support for SNMP management products via the DB2 SNMP Subagent, included as a component of the DB2 Server. SNMP management products such as NetView allow centralized management of both hardware and software components of a system. The DB2 SNMP Subagent allows DB2 servers to be managed just like any other SNMP-managed resource.
The software requirements for using the SNMP Subagent are as follows:
In order to connect properly using the IBM Communications Server SNA API Client for Windows NT or Windows 95, you must start the APING application prior to starting any GUI application that will connect via APPC. Such GUI applications include the Client Configuration Assistant, Control Center, and any ODBC GUI applications.
There is a change to a note in Table 7 within the section "User Profile Management Passwords" that refers to translated values for user IDs and passwords. The note to Table 7, "Default User IDs and Passwords" states the following:
If your country does not appear in the list, assume USERID and PASSWORD are used. If this fails, see the README file.
This note is no longer valid and should be disregarded.
When setting up ADSM Client for DB2 UDB Extended Enterprise Edition for Solaris, the DSMI_DIR, DSMI_CONFIG, and DSMI_LOG ADSM environment variables should be set up in db2profile. If these variables are merely exported using .profile or .kshrc, they will not be available to each node in the partitioned database environment. The ADSM client on Solaris must be at Version 2.1.0.8 or higher.
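For example (a sketch only; the directory and file paths are assumptions that depend on where your ADSM client is installed), lines similar to the following could be added to db2profile so that every node picks them up:

# location of the ADSM API library and message files (path is an assumption)
DSMI_DIR=/opt/adsm/api/bin
# ADSM API options file (path is an assumption)
DSMI_CONFIG=/opt/adsm/api/bin/dsm.opt
# directory for the ADSM API error log (path is an assumption)
DSMI_LOG=/home/db2inst1/adsm
export DSMI_DIR DSMI_CONFIG DSMI_LOG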
The following information should be added to the chapter that describes how to use the DB2 workstation as a server for host applications, in the section that lists the PTFs required for DRDA application servers:
Please apply the fix for DB2 for OS/390 APAR PQ07537 if you will be connecting from DB2 for OS/390 Version 5 to DB2 Universal Database Version 5.
NetBIOS on Windows NT does not properly receive data that is larger than the receive buffer. However, you can use the DB2 registry value db2nbrecvbuffsize to overcome this problem.
You need to set the receive buffer on the server to be larger than the largest amount of data that will be sent by a client. The default RQRIOBLK size is 32767 bytes, but the default server "receive buffer size" is 4096 bytes. Before you start DB2, issue the following command:
db2set db2nbrecvbuffsize=32768 -g
This command sets the Server's receive buffer size globally.
For Windows NT and Windows 95, in the "Managing Instances" section of the Quick Beginnings manual, the following revised information applies.
The default database manager shared memory address is x'20000000' for Windows NT, and x'90000000' for Windows 95. If a shared memory address collision occurs, this value can be modified to force the database manager instance to allocate its shared memory at a different address. Use the DB2DBMSADDR registry profile variable to assign an address value in the range of x'20000000' to x'B0000000' on Windows NT, and x'80000000' to x'B0000000' on Windows 95, using increments of x'4000000'. For example, on Windows 95, try the values x'84000000', x'88000000', x'8C000000', and so on.
The value of DB2DBMSADDR is in hexadecimal format without the preceding 0x and may be set using the db2set profile registry command.
For example, to set DB2DBMSADDR to x'84000000', use the following command:
db2set DB2DBMSADDR=84000000
Under the heading "Running Java Applications" - "Java Applets", the Quick Beginnings and Administration Getting Started manuals say:
To run your applets, do the following:

1. Start the DB2 JDBC applet server on your Web server by entering:

   db2jstrt portno

   where portno is the number of the unused TCP/IP port that you specified in the DB2Applt.java file.
This information is out of date, as follows:
The DB2 JDBC Applet Server can now be started as a Windows NT service. It must first be registered as a service before it will be available in the Services section of the Control Panel.
In order to register the service, run the following command while logged on as an administrator:
db2regjdbc i
To register the service to run with a specific userid, run:
db2regjdbc i [userid] [password]
To remove the service, run:
db2regjdbc u
Once the service is registered, it can be started and stopped from the Services section of the Control Panel.
The service can also be set to start and stop automatically.
The topic "SQL Help", which explains the syntax of SQL statements, is not available on UNIX based operating systems.
The list of language identifiers in the first note at the bottom of the table containing information about the DB2 Books should include the following entries:
Language                Identifier
________                __________
Greek                   A
Japanese                J
Korean                  K
Simplified Chinese      C
Traditional Chinese     T
If you prefer to have printed copies of the manuals, you can decompress and print PostScript versions.
Note: | Specify the full path name for the file you intend to print. |
On OS/2 and Windows platforms:
print filename.psz
On UNIX-based platforms:
zcat filename | qprt -P PSPrinter_queue
zcat filename | lp -d PSPrinter_queue
zcat < filename | lp -d PSPrinter_queue
where filename is the full path name and extension of the compressed PostScript file, and PSPrinter_queue is the name of the PostScript printer queue.
For example, to print the English version of Quick Beginnings for UNIX on AIX, you can use the following command:
zcat /cdrom/doc/en/ps/db2ixe50.ps.Z | qprt -P ps1
The manuals included with this product are in Hypertext Markup Language (HTML) softcopy format. Softcopy format enables you to search or browse the information, and provides hypertext links to related information. It also makes it easier to share the library across your site.
You can use any HTML Version 3.2-compliant browser to view the online books.
To view online books:
file:/INSTHOME/sqllib/doc/%L/html/index.htm
where %L is the locale name.
sqllib\doc\html\index.htm
The path is located on the drive where DB2 is installed.
You can also open the page by double-clicking on the DB2 Online Books icon. Depending on the system you are using, the icon is in the main product folder or the Windows Start menu.
Note: | The DB2 Online Books icon is only available if you do not install the Information Center. |
By default the DB2 information is installed on your local system. This means that each person who needs access to the DB2 information must install the same files. To have the DB2 information stored in a single location, use the following instructions:
http://www.software.ibm.com/data/pubs/papers/db2html.html
For information about setting up a search, see the What's New book.
To search for information in the HTML books, you can do the following:
Currently, the default settings on Internet Explorer cause a conflict with the DB2 search server. As a workaround, disable the HTTP 1.1 settings check box from Internet Explorer's advanced options (on each client that will access the documentation). Note that due to a limitation in Internet Explorer this requires a reboot of your system to take effect.
After you install DB2 UDB EEE for Windows NT on multiple nodes, the install process will only configure the "instance owning node" to be the coordinator node for communications from client machines. In order to remotely administer the cluster using the Control Center you will need to enable communications on each of the nodes. One way to do this is as follows:
DB2 update dbm cfg using svcename 4000
rah "db2set db2comm=TCPIP"
Each node will accept inbound TCP/IP connections when it is restarted.
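For example (the service name or port value 4000 above is only an example), once the database manager configuration and the DB2COMM registry value have been updated on every node, restarting the instance makes each node begin listening for TCP/IP connections:

db2stop
db2start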
IBM Replication is a set of easy-to-use, automated copy tools that replicate data from sources to targets. Data can be copied:
Two messages for the Apply program are missing from Chapter 21, "IBM Replication Messages":
ASN1054S  The Apply program could not find the registration information for source owner "<src_ownr>", source table "<src_tbl>", and source view qualifier "<src_view_qual>".

Explanation:  The source table registration is incorrect or incomplete.

User Response:  Drop the registration and redo it. Also make sure that the registration information is in both the register table and the pruning control table.

ASN1055S  The Apply program could not find the prune control information for source owner "<src_ownr>", source table "<src_tbl>", source view qualifier "<src_view_qual>", target owner "<tgt_ownr>", and target table "<tgt_tbl>".

Explanation:  The source table registration is incorrect.

User Response:  Drop the subscription and redo it.
The capture program for DB2 UDB Version 5.2 now has the capability to support LONG VARCHAR data types. New Long Field Manager long records, which contain the information needed to capture changes made to the LONG VARCHAR columns, are propagated.
For DB2 UDB Version 5 users who need the new capture capability, there is the option to add an additional clause to the ALTER TABLE statement. There are no changes to the CREATE TABLE syntax.
To take advantage of the new LONG VARCHAR support, you must specify the DATA CAPTURE CHANGES INCLUDE LONGVAR COLUMNS clause on the ALTER TABLE statement. When this DDL statement is executed, the DATA CAPTURE CHANGES column in SYSCAT.TABLES is set to "L". Whenever rows in base tables with the INCLUDE LONGVAR COLUMNS option are updated, Long Field Manager long records are written and read by the capture program. These records include both before and after images of columns.
If you want to disable the new LONG VARCHAR support, you must specify the appropriate DATA CAPTURE CHANGES value in an ALTER TABLE statement. For example, you could specify "YES" or "NONE" without the additional clause.
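For example (dept.notes is a hypothetical table name), the clause described above would be enabled and later disabled with statements like the following:

-- enable capture of changes, including LONG VARCHAR columns
ALTER TABLE dept.notes DATA CAPTURE CHANGES INCLUDE LONGVAR COLUMNS
-- revert to capturing changes without LONG VARCHAR support
ALTER TABLE dept.notes DATA CAPTURE CHANGES
-- or turn off change capture entirely
ALTER TABLE dept.notes DATA CAPTURE NONE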
To register LONG VARCHAR columns, do the following from the Control Center:
The following restriction applies: For a DELETE operation (IBMSNAP_OPERATION="D"), the LONG VARCHAR column values will be null.
Welcome to the DB2 DataJoiner Replication Administration Tool, Version 2.1.1 (DJRA).
This section contains information you need for installing DJRA:
ABOUT DJRA
DJRA is available for the first time as a sample application in DB2 UDB FixPak 7 for Windows 95, Windows 98 and Windows NT. It is only available for these operating systems. DJRA provides administration functions for replicating data. DJRA is recommended for customers requiring administrative functions for DataPropagator Relational for MVS Version 5 and DataPropagator Relational Version 5 for AS/400. Several of the administration functions needed for these two platforms are not currently available in the UDB Control Center.
INSTALLING DJRA
DJRA is located in the directory \BONUS\REPLICATION. Conceptual, installation, and configuration information is available in the "DataJoiner Planning, Installation, and Configuration Guide" at:
http://www.software.ibm.com/data/datajoiner/booksv2/djxn2m79
The publication can also be ordered separately (SC26-915-01). To print a copy, download the file v2ntpic.pdf from the DataJoiner FTP site:
ftp://ftp.software.ibm.com/ps/products/datajoiner/info/
Replication Administration must be installed on a Windows 95, Windows 98, or Windows NT operating system. To install Replication Administration Version 2.1.1:
During installation setup, you can indicate the directory to install DJRA. You can also browse through the README.TXT file to see the latest information about the application.
VisualAge for Basic and Lotus Approach are no longer bundled in the product boxes.
The ACTIVATE NOT LOGGED INITIALLY clause should not be used when DATALINK columns with the FILE LINK CONTROL attribute are being added to the table.
Replace the description of table-name with the following:
table-name
Identifies the table to be changed. It must be a table described in the catalog and must not be a view or a catalog table. If it is a summary table, alterations are limited to activating NOT LOGGED INITIALLY and changing PCTFREE, LOCKSIZE, or APPEND.
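For example (sales_summary is a hypothetical summary table name), the permitted alterations correspond to statements such as:

ALTER TABLE sales_summary ACTIVATE NOT LOGGED INITIALLY
ALTER TABLE sales_summary PCTFREE 20
ALTER TABLE sales_summary LOCKSIZE TABLE
ALTER TABLE sales_summary APPEND ON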
The following should be added to the "Rules" section:
DB2 Connect does not support SELECT statements that select LOB columns in a compound SQL block.
Under both CREATE FUNCTION (External Scalar) and CREATE FUNCTION (External Table), there is a list of the information contained in the DBINFO structure which is passed to the UDF if the definer specifies DBINFO. The following bullet has been added to each of these lists:
Application ID - the unique application ID that is established for each connection to the database.
In the "CREATE FUNCTION (External Scalar)" section, where the "NO FINAL CALL or FINAL CALL" specification is discussed, this sentence should be added to the bottom:
A description of the scalar UDF processing of these calls when errors occur is included in the Embedded SQL Programming Guide, Chapter 7, "Other Coding Considerations" section, under "Hints and Tips".
This updated description is in the section 8.11.2 , "External UDF Error Processing Model for Scalar Functions".
In the "CREATE FUNCTION (External Table)" section, where the "NO FINAL CALL or FINAL CALL" specification is discussed, this sentence should be added to the bottom:
A description of the table UDF processing of these calls when errors occur is included in the Embedded SQL Programming Guide, Chapter 7, "Table Function Considerations" section.
This updated description is in the section 8.11.1 , "Scratchpad and Final Call Type Arguments for EXTERNAL TABLE Functions".
Add the following to the description of NOT LOGGED INITIALLY:
The NOT LOGGED INITIALLY clause should not be used when DATALINK columns with the FILE LINK CONTROL attribute are present in the table.
Replace the description of "FULLSELECT" under "summary-table-definition" with the following:
FULLSELECT
Defines the query on which the table is based. The summary-table-options specified define attributes of the summary table. The option chosen also defines the contents of the fullselect as follows:
When DEFINITION ONLY is specified, any valid fullselect can be specified. The data type and nullability attribute are determined from the result columns of the select list. Every select list element must have a name.
When REFRESH DEFERRED or REFRESH IMMEDIATE is specified, the fullselect cannot include:
Furthermore, when REFRESH IMMEDIATE is specified:
Replace the description of "REPLICATED" with the following:
REPLICATED
Specifies that the data stored in the table is physically replicated on each database partition of the nodegroup of the table space in which the table is defined. This means that a copy of all the data in the table exists on each of these database partitions. This option can only be specified for a summary table defined with REFRESH DEFERRED (SQLSTATE 42997).
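As a sketch (the table, column, and table space names are hypothetical, and the exact clause order should be checked against the CREATE TABLE syntax diagram), a replicated summary table might be defined with REFRESH DEFERRED and then populated as follows:

CREATE TABLE r_dept AS (SELECT deptno, deptname FROM department)
  DATA INITIALLY DEFERRED REFRESH DEFERRED IN tbsp1 REPLICATED

REFRESH TABLE r_dept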
Replace the seventh major bullet under "Notes" with the following:
If the query specified when creating a REFRESH DEFERRED summary table does not conform to these rules, a warning is returned (SQLSTATE 01633).
Add the following new bullet under "Notes":
The functionality described in the sub-section "DROPPED TABLE RECOVERY" has not been implemented. This sub-section should be deleted from Version 5.2 of the SQL Reference.
The limit for the number of tables in a DMS table space has been increased from 6648 to 51971. This new limit will automatically apply to any table spaces with a page size other than 4KB, and to all newly created 4KB DMS table spaces.
For existing 4KB DMS table spaces, the new /ets option of the db2dart utility is used to apply the new limit:
Notes:
The syntax diagram for the CREATE EVENT MONITOR statement shows that by default, event monitors are created with global scope. This is incorrect. Event monitors are created with local scope by default.
The note at the end of this section can be ignored. The REFRESH IMMEDIATE option is now available.
To build and run embedded SQL for Java (SQLJ) programs now requires the Java Development Kit (JDK) Version 1.1.4 (or higher).
There have been several changes to the Java sample programs documented in this appendix. The stored procedure and UDF programs have been split into separate client and server programs. This allows users on client machines to remotely access stored procedures and UDFs on server machines. The programs affected are now as follows:
The script file embprep on UNIX platforms, embprep.bat on Windows platforms, and embprep.cmd on OS/2, has been created to precompile and bind the SQLJ programs. It is called by the makefile and executes the db2profc command on the Java embedded SQL sample programs. It allows optional arguments for database, user ID and password, thereby facilitating remote binding to databases on DB2 servers from client machines.
Following are additional limitations on SQLJ support for DB2 UDB Version 5.2:
profile name              default package name
---------------------     --------------------
App_SJProfile1            App_SJP1
App_SJProfile123          App_S123
App_SJProfile1234567      A1234567
App_SJProfile12345678     A2345678
Some browsers do not yet have support for loading a serialized object from a resource file associated with the applet. You will get the following error message when trying to load the applet Applt in those browsers:
java.lang.ClassNotFoundException: Applt_SJProfile0
As a work-around, there is a utility which converts a serialized profile into a profile stored in Java class format. The utility is a Java class called sqlj.runtime.profile.util.SerProfileToClass. It takes a serialized profile resource file as input and produces a Java class containing the profile as output. Your profile can be converted using the following command:
profconv Applt_SJProfile0.ser

or

java sqlj.runtime.profile.util.SerProfileToClass Applt_SJProfile0.ser
The class Applt_SJProfile0.class is created as a result. Replace all profiles in .ser format used by the applet with profiles in .class format.
For an SQLJ applet, you need both the db2java.zip and runtime.zip files. If you choose not to package all your applet classes and the classes in db2java.zip and runtime.zip into a single Jar file, put both db2java.zip and runtime.zip (separated by a comma) into the archive parameter in the "applet" tag. For those browsers that do not support multiple zip files in the archive tag, specify db2java.zip in the archive tag, and unzip runtime.zip with your applet classes in a working directory that is accessible to your web browser.
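For example (the applet name Applt is taken from the sample discussed above; the width and height values are arbitrary), the "applet" tag could look like this:

<applet code="Applt.class" archive="db2java.zip,runtime.zip" width=300 height=200>
</applet>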
The instructions for enabling remote documentation searches on UNIX operating systems are incorrect. Correct instructions can be found on the web at:
http://www.software.ibm.com/data/pubs/papers/db2html.html
The published example of the "nqmap" command for OS/2 and the Windows operating system should be changed to:
nqmap -u DB2ADMEN "http://yourserver/doc/html/" DB2ADMEN
The instructions for editing your Start HTML Search Server icon's properties on the OS/2 operating system should read:
1. Right-click on the Start HTML Search Server icon in your DB2 for OS/2 folder. Select Settings and click on the Program tab. In the Optional Parameters field, add CONFIG DB2WEBCC.CNF to the end of the string of parameters. Click OK to close.

2. Edit your db2init.cmd file. This file is located in x:\sqllib\bin\, where x: is the drive on which you installed DB2. Find the line in this file containing both "DB2SS.EXE" and "DB2NETQD.EXE". At the end of this line add CONFIG "DB2WEBCC.CNF". (Note that you need double quotation marks around the file name). Save and exit. Continue with the remainder of the instructions for enabling remote documentation searches.
If you want to search all the DB2 online information, you must install both the DB2 documentation (referred to as the "Product Library" on UNIX systems) and the Web Control Center with its online help. If you install the Web Control Center and its help, but do not install the rest of the DB2 online documentation, then attempting to select the search categories for administration, programming, and DB2 Connect may produce an error code of 24, indicating that those search indexes are not installed. In the same way, if you install the DB2 online documentation but not the Web Control Center, attempting to search the Web Control Center online help may produce this same error code.
The following are restrictions of the Control Center:
On page 45 of the What's New book, there is an incorrect statement in the Web Control Center section. Please disregard the Java Development Kit level mentioned in the following sentence:
"It runs from any Java-enabled Web browser that supports the Java Development Kit 1.1.5."
For the most recent list of the web browsers supported and the Java Development Kit level required, please see the following web page:
http://www.software.ibm.com/data/db2/udb/webcc/
IBM DB2 Universal Database (DB2 UDB) online documentation comes with an HTML search server to help you find information. These release notes describe the setup and use of the search system on OS/2.
You must have TCP/IP Version 3 or higher installed on your machine. OS/2 Warp V4 is recommended because it has the appropriate level of TCP/IP. OS/2 Warp V3 Internet Access Kit (IAK) will also work if you set it up for local loopback and if you install the latest FixPak.
You need a browser such as Netscape 2.02 for OS/2. If a Netscape browser is not available for your language, use Web Explorer 1.1 or higher. Ensure you turn off proxy handling for localhost in the browser you use.
If you are installing DB2 UDB on a system that has VisualAge for C++ for OS/2, you need to have CSD6 or above installed for the VisualAge for C++ product.
The search server will function with or without a network adapter installed as long as TCP/IP local loopback and localhost are enabled on your system.
To enable local loopback:
To enable localhost on your system:
ifconfig lo 127.0.0.1
Note: | If you have OS/2 Warp with Internet Access Kit, add the line to the \STARTUP.CMD instead of the MPTN\BIN\SETUP.CMD file. If the file doesn't exist, you will have to create it. |
Note: | If you have a hostname for your machine on the "Configure LAN Name Resolution Services" page, you must add this name as an alias when you set the IP Address 127.0.0.1 to localhost. |
Note: | This step tells OS/2 that when it is looking for a host, such as localhost, it should use the host address found on your machine rather than checking the nameserver. If the host is not defined on your machine, OS/2 continues looking for the host by using the nameserver you configured. |
Click the "Start HTML Search Server" icon, which is located in the DB2 for OS/2 folder. If an error message appears, TCP/IP is not configured properly. Follow the instructions in 19.1.2 , Configuring TCP/IP on OS/2 to make sure that TCP/IP is correctly configured. Reboot OS/2 if you change any settings.
The search system is stored in its own directory because it may be used by other products. For example, if you installed DB2 UDB along with the search system on drive G, and later you installed VisualAge for Java on drive H, only one search system is installed -- the one that came first.
Some instructions in this document require you to specify the location of this directory. To locate this directory enter this command on an OS/2 command line:
echo %IMNNLPSSRV%
The search server is assigned to port 49213, a number beyond the public ports assigned for TCP/IP. If you have another product that uses this port, you can change the search server port with the following steps:
<form action="http://localhost:49213/cgi-bin/db2sr*xx.exe" method="POST">

where:
*      is D for DBCS and S for SBCS
xx     is a two-character identifier for the language in which the documentation is written
49213  is the new port for localhost
If you received error messages while installing the search server, enter the following command from an OS/2 command line and follow the instructions that it returns:
SNIFFLE /P
Make sure you reboot your system after you're done.
After you install the search system, it will start automatically after you reboot.
If the search server starts without any errors but you encounter the following error message in your browser, click the "Stop HTML Search Server" icon, then click the "Start HTML Search Server" icon.
A network error occurred: unable to connect to server. The server may be down or unreachable. Try connecting again later.
After you have finished searching, stop the search server to reclaim the memory it used. To stop it, double-click the "Stop HTML Search Server" icon in the DB2 for OS/2 folder.
If you are not on a network (for example, if you are using a laptop computer and are temporarily away from a LAN connection), you have to enable localhost to search the documentation. The instructions for doing this task are described in step 3 of 19.1.2 , Configuring TCP/IP on OS/2.
The search system searches predefined indexes. All of these indexes are installed with the search system, regardless of the DB2 UDB products you install. Therefore, when you conduct a search, some of the results might return a "File not found" error because they pertain to a product that is not installed on your system.
If you installed the search server but you get an error when you try to search the documentation, check the following items:
NQMAP -A
The documentation for DB2 UDB is called DB2ADMxx, DB2APDxx, or DB2CONxx, where xx is a two-character identifier of the language in which the documents are written. One or more of these names should appear in the list of names that NQMAP returns.
If the files are not there, either TCP/IP is incorrectly installed or the indexes are corrupt. Use SNIFFLE /P to fix the former, and reinstall the documentation to fix the latter.
If you encounter this error:
EHS0410: An error was detected when starting the search service.
Stop the service and start it again.
Press Enter to continue...

(Sometimes the error is EHS0411 but the text is the same. This message comes up on a full black screen.)
your hostname might have changed since doing the install and you have to record it again. Go into the NetQuestion directory and enter:
type netq.cfg
to see what hostname NetQuestion is using. If you have changed the hostname, for example while configuring DHCP and DDNS, enter:
netqinit <data-path>

Example: netqinit d:\netqos2\data
After you uninstall DB2 for OS/2, the search system is left on your hard drive. Run UNINSTNQ.CMD to remove it.
If you ran UNINSTNQ.CMD and the search system does not uninstall, the cause may be:
NQMAP -A
If this command returns no indexes, skip to step 6.
If the list contains indexes that do not belong to DB2 UDB (that is, their names begin with something other than "DB2"), you cannot remove the search server.
If it contains any of the DB2 index file names (DB2ADMxx, DB2APDxx, DB2CONxx), DB2 UDB could not unregister the indexes, which caused the search system uninstall to fail. This occurs if DB2 UDB was incorrectly uninstalled (for example, the SQLLIB folder was deleted). In this case, you have to manually unregister the indexes and remove the search system directory using the following steps:
NETQ START SERVER            //for SBCS
IMQSS.EXE -START DBCSHELP    //for DBCS
NQMAP -D <index_name>
NQDELET <index_name>    //for SBCS
TMDELET <index_name>    //for DBCS
NETQ STOP SERVER             //for SBCS
IMQSS.EXE -STOP DBCSHELP     //for DBCS
NQCOUNTI <search system directory>    //for SBCS
TMCOUNTI <search system directory>    //for DBCS
Note: | See 19.1.4 , Locating the Search System Directory if you do not know where it is installed. |
Note: | The install creates numbered backups of the form config.xyz, where xyz is the first available number from 000 to 100. The backups will have the time and date stamp of the time of the install. Two backups were created during install: one for the DB2 UDB install and one for the search system install. |
IBM DB2 Universal Database (DB2 UDB) online documentation comes with an HTML search system to help you find information. These release notes describe the setup and use of the search system on Windows NT, Windows 95, and Windows 98.
The search system consists of a search engine and a search server. The search system is stored in its own directory because it may be used by other products.
This search system does not support Windows 3.1.
This search system is not enabled for Thin Clients. If you attempt to search the documentation, you will receive a network error.
The following prerequisites apply:
For Windows 95, TCP/IP must be enabled as follows:
Note: | These configuration options will apply to all TCP/IP adapters even though they have only been changed for this one. You will not be able to use both LAN and Dial-Up without reconfiguring. |
Dial-Up Networking TCP/IP properties for your Internet service provider (ISP) must be configured as documented by the ISP. The Dial-Up Networking TCP/IP properties override the Dial-Up Adapter TCP/IP properties configured via the "Network" icon in the Windows 95 Control Panel, but only as long as the Dial-Up Adapter TCP/IP properties are configured as described above. Enabling DNS or setting an IP address in the Dial-Up Adapter TCP/IP properties will interfere with the Dial-Up Networking configuration for the ISP and must be avoided.
For Windows NT 4.0, either of the TCP/IP configurations detailed above will work. If you are running standalone, you can also enable the MS Loopback Adapter without the other two adapters.
If the search system was previously installed by another product (for example, VisualAge for Java), the search server must be stopped. To stop the search server, click on "Stop HTML Search Server" in the DB2 menu item of the Start Menu.
The search system is stored in its own directory because it may be used by other products. For example, if you installed DB2 UDB along with the search system on drive G, and later you installed VisualAge for Java on drive H, only one search system is installed -- the one that came first.
Some instructions in this document require you to specify the location of this directory. To locate this directory use this command:
echo %IMNINSTSRV%
The search server is assigned to port 49213, a number beyond the public ports assigned for TCP/IP. If you have another product that uses this port, you can change the search server port with the following steps:
DB2SET DB2PATH
<form action="http://localhost:49213/cgi-bin/db2sr*xx.exe" method="POST">

where:
*      is D for DBCS and S for SBCS
xx     is a two-character identifier for the language in which the documentation is written
49213  is the new port for localhost
If you install DB2 UDB on a LAN-connected drive on Windows 95 and the drive does not get reconnected before AUTOEXEC.BAT is run on Windows 95, the search system will not be able to have its environment variables set. The search system sets environment variables through a batch file (IMNENV.BAT) that resides in the search system directory (for example, F:\IMNNQ_95). To bypass this problem, copy IMNENV.BAT from the search system directory to another drive/directory that is connected before AUTOEXEC.BAT is executed. Then modify AUTOEXEC.BAT to call this batch file on startup. For example, if you copy IMNENV.BAT to C:\WINDOWS\IMNNQ, you can add the following line to AUTOEXEC.BAT:
IF EXIST C:\WINDOWS\IMNNQ\IMNENV.BAT CALL IMNENV.BAT
If the search server installation or initialization fails during the DB2 UDB install, the DB2 UDB install will still proceed to completion. The following information will help you diagnose what happened and what to do next.
Look in the <temp>\imnnq\install directory, where <temp> is your system's %TEMP% directory, to find the IMNNQ.ERR file. If it does not exist, reboot and try installing the product again. If IMNNQ.ERR does exist, here are the possible contents:
1 - This indicates that the current PATH is too long and adding the search server into the PATH will cause the entire PATH to be erased.
    Note: The limit on Windows NT 4.0 is 512; on Windows 95 it is 255.
    It is recommended that you perform the following steps:
    a) Rename the PATH variable in the AUTOEXEC.BAT file (PATHGOOD), save changes and reboot.
    b) Remove the IMNNQ.ERR file from <temp>\imnnq\install.
    c) Run the product install again to install the search system properly.
    d) Merge the PATHGOOD variable with the PATH variable that was created by the latest installation.

2 - Miscellaneous error, please contact IBM service.

3 - Out of disk space error. Please ensure that there is at least 4.5MB of disk space for the search system plus enough space for AUTOEXEC.BAT to be changed for Windows 95.
If IMNNQ.ERR contains a message saying XXX.EXE DOES NOT EXIST, the search system executables could not be found. Try rerunning the search system install/initialization program again. See 20.3.4 , If Search Does Not Work (ERROR 500 when trying to search).
After you install the search system, it will start automatically after you reboot.
If the search server starts without any errors but you encounter the following error message in your browser, click "Stop HTML Search Server", then click on "Start HTML Search Server" in the DB2 menu item of the Start Menu.
A network error occurred: unable to connect to server. The server may be down or unreachable. Try connecting again later.
After you have finished searching, stop the search server to reclaim the memory it used. To stop it, click on "Stop HTML Search Server" in the DB2 menu item of the Start Menu.
If you use Netscape or Internet Explorer with proxies enabled manually, you can speed up search significantly by modifying your proxy information.
In Netscape 3:
--------------
1. Select Options - Network Preferences.
2. Click the Proxies tab.
3. Click View at the Manual Proxy Configuration selection.
4. In the "No proxies for" box, type: localhost:49213
   If you have other entries here, separate them with commas.
5. Click OK to close the Manual Proxy Configuration window.
6. Click OK to exit the Preferences Window.

In Netscape 4 (Communicator):
-----------------------------
1. Select Edit - Preferences.
2. Double-click Advanced in the Category tree.
3. Click Proxies in the Advanced subtree.
4. Click View at the Manual Proxy Configuration selection.
5. In the "Exceptions... Do not use proxy servers for domains beginning with" box, type: localhost:49213
   If you have other entries here, separate them with commas.
6. Click OK to close the Manual Proxy Configuration window.
7. Click OK to exit the Preferences Window.

In Internet Explorer 3:
-----------------------
1. Select View - Options.
2. Select Connection.
3. In the "Exceptions... Do not use proxy servers for domains beginning with" box, type: localhost:49213
   If you have other entries here, separate them with commas.
4. Select the "Do not use proxy server for local (intranet) addresses" box.
5. Click OK to exit the Options Window.

In Internet Explorer 4:
-----------------------
1. Select View... Internet Options.
2. Select the Connection tab.
3. Select the "Bypass proxy server for local (Intranet) access" check box.
   NOTE: this check box is only available if you are using a proxy or socks connection and have selected the "Access the Internet Using a Proxy Server" check box.
4. Select the "Advanced" Button.
5. Type: localhost:49213 in the "Exceptions... Do not use proxy servers for addresses beginning with" box.
   If you have other entries here, separate the new entry with a semi-colon.
6. Select OK, then OK to exit the Options Tab.
Nameservers:
If you use a laptop that is normally connected to a LAN as a disconnected development platform, you may run into difficulty searching. To search successfully, you need to disable your nameserver and IP address in your TCP/IP configuration. In other words, you need two separate TCP/IP configurations--one for connected operations and another for disconnected ones.
Best fix for multiple TCP/IP configurations in Windows 95:
Windows 95 only allows you to have a single TCP/IP configuration. However, there are shareware utilities available on the internet that let you specify more than one setting, and then change them according to your connection status (connected or disconnected from a LAN). TCPSwitch is one of these programs.
Socks servers and proxies:
If you are using Netscape 3, and you dial-in to a secure site to access the internet with a proxy or socks server, you will need to delete these settings in Netscape before you can search. This is a bug with Netscape 3--Netscape Communicator's "direct connect" setting should fix this.
If the product installation worked, but searching does not work, try the following steps:
NQMAP -A    //for SBCS
TMMAP -A    //for DBCS
The documentation for DB2 UDB is called DB2ADMxx, DB2APDxx, or DB2CONxx, where xx is a two-character identifier of the language in which the documents are written. One or more of these names should appear in the list of names that NQMAP (or TMMAP) returns.
With the list of indexes the NQMAP (or TMMAP) command provides, you can get additional details on each index with the following command:
IMNIXSTA <index_name>    //for SBCS
IMQIXSTA <index_name>    //for DBCS
This command provides details such as the status of the index and the number of documents in the index.
If any of the above conditions are not true, you can rerun the DB2 UDB product installation program. If the only condition that is not true is that the files DB2SRCH.EXE, DB2HEAD.HTM and DB2FOOT.HTM are missing, then you can just copy them over from the directory <db2path>\misc into the search system's directory (for example, E:\IMNNQ_NT). The product installation program will rerun the search server's installation and initialization.
The search system searches predefined indexes. All of these indexes are installed with the search system, regardless of the DB2 UDB products you install. Therefore, when you conduct a search, some of the results might return a "File not found" error because they pertain to either a product that is not installed on your system, or, a document that you chose not to install during installation.
If you encounter this error:
EHS0410: An error was detected when starting the search service.
Stop the service and start it again.
Press Enter to continue...

(Sometimes the error is EHS0411 but the text is the same. This message comes up on a full black screen.)
your hostname might have changed since doing the install, and you have to record it again. Go into the NetQuestion directory and enter:
type netq.cfg
to see what hostname NetQuestion is using. If you have changed the hostname, for example while configuring DHCP and DDNS, enter:
netqinit <data-path>

Example: netqinit c:\imnnq_nt\data
Note: | Before uninstalling the DB2 UDB product, the search system must be stopped. To stop the search system, click on "Stop HTML Search Server" in the DB2 menu item of the Start Menu. |
The search system is uninstalled with the DB2 UDB product by clicking on the "Uninstall" item in the DB2 menu item of the Start Menu.
If the search system does not get uninstalled, the cause may be:
To determine which products are still registered with the search system, issue the following command:
NQMAP -A    //for SBCS
TMMAP -A    //for DBCS
If this command returns no indexes, skip to step 6.
If the list contains indexes that do not belong to DB2 UDB (that is, their names begin with something other than "DB2"), you cannot remove the search server.
If it contains any of the DB2 index file names (DB2ADMxx, DB2APDxx, DB2CONxx), DB2 UDB could not unregister the indexes, which caused the search system uninstall to fail. This occurs if DB2 UDB was incorrectly uninstalled (for example, the SQLLIB folder was deleted). In this case, you have to manually unregister the indexes and remove the search system directory using the following steps:
IMNSS START SERVER           //for SBCS
IMQSS.EXE -START DBCSHELP    //for DBCS
NQMAP -D <index_name>    //for SBCS
TMMAP -D <index_name>    //for DBCS
NQDELET <index_name>    //for SBCS
TMDELET <index_name>    //for DBCS
IMNSS STOP SERVER            //for SBCS
IMQSS.EXE -STOP DBCSHELP     //for DBCS
NQCOUNTI <search system directory>    //for SBCS
TMCOUNTI <search system directory>    //for DBCS
Note: | See 20.2.3 , Locating the Search System Directory if you do not know where it is installed. |
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\NetQuestion

Remove this entry, including all of its subtrees.
After a DB2 UDB uninstall, it is very important to reboot before doing another install. The reason is that some search system DLLs can be held by the operating system and are not removed until the next reboot. If a search system install happens before the reboot, the newly installed search system DLLs will be deleted on the next reboot, rendering the search system unusable.
In some environments, previous installs or uninstalls of the IMNSearch (NetQuestion) component make it impossible to install the component again. In short, the component requires that special userID and groupID names exist to install correctly. Occasionally (usually when IMNSearch has been incompletely removed) one or the other of these IDs does not exist. In other cases, users sometimes do not have the right access needed to create these group IDs and user IDs successfully at installation. Both of these conditions result in the IMNSearch component failing to install properly.
Problems installing IMNSearch for the first time on the machine
If you experience a problem installing the IMNSearch component, and you do not have a previous version of the product already installed, then please completely remove the component and then re-install. Note that this includes removing the imnadm userID and groupID before reinstalling.
Problems installing IMNSearch when another IMNSearch component is already installed, or was previously installed
If you experience a problem installing the IMNSearch component, and you have another version of the IMNSearch component already installed (or have had it installed in the past), then do not remove the existing IMNSearch component. Instead, check to see if there is a group ID called imnadm. If there is, then you can either:
Problems installing IMNSearch in an AFS/DCE or NIS/NIS+ environment
In environments where user ID and group ID management is centralized (that is, IDs are created on a server by an authorized administrative user, who then distributes the ID information to client machines), the imnadm user ID and group ID have to be created before users install the IMNSearch component. Since these user IDs and group IDs are shared among all products that make use of the IMNSearch component, this step needs to be performed once, before the first product that uses the component is installed. After the last product that uses the IMNSearch component is uninstalled, the user ID and group ID should be removed.
When installing DB2 Universal Database Version 5.2 for HP-UX 10, you may find that NetQuestion cannot be installed from the product CD-ROM. You may discover that when you attempt to search the DB2 product documentation, you get a message stating that the "connection is refused by server". Check if NetQuestion is installed with the following command:
swlist -l product | grep -i imn
If nothing is returned, then NetQuestion has not been installed.
In this case, you must install NetQuestion manually. Note that you must be root to install NetQuestion.
Copy the entire /NetQ directory from the installation CD-ROM to your local hard drive. After copying is finished, you should have the following files in your local /NetQ/hpux10 directory: IMNSearch, IMNSearchB, IMNpkgadd, IMNpkgaddB, imnbinstall and imninstall. Enter:
imninstall
to start the NetQuestion installation script. Follow the on-screen directions to install either the SBCS search engine alone or the SBCS and DBCS search engines together.
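A sketch of the whole sequence (the CD-ROM mount point /cdrom is an assumption; adjust it for your system):

# copy the NetQ directory from the CD-ROM to the local hard drive
cp -R /cdrom/NetQ /NetQ
# run the installation script from the local copy
cd /NetQ/hpux10
./imninstall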
Troubleshooting
If imninstall returns an error stating "swcopy failed, see manual pages for swcopy", it is possible that some other file systems completely unrelated to DB2 have failed to mount on the system. In this case, you must (as root) edit your local copy of /NetQ/hpux10/IMNpkgadd by adding the option:
-x mount_all_filesystems=false
to each occurrence of swcopy, swremove or swinstall.
Neither online documentation nor NetQuestion is shipped on UNIX product CDs containing just the Client Application Enabler. However, if you install the CAE from any EE or EEE product CD you can also select to install online documentation; NetQuestion is installed automatically whenever documentation is installed.
DB2 Universal Database Version 5.2 for AIX uses a level of the NetQuestion double-byte character set (DBCS) search engine that will not work properly if certain other levels of NetQuestion are already present on your AIX operating system.
For example, if DB2 UDB V5.2 for Japanese, Korean, Chinese or Taiwanese is installed on any AIX operating system where DBCS documentation for VisualAge C++ Professional Version 4.0 for AIX is already installed, or if it is installed on an AIX Version 4.3 operating system where DBCS native help is already installed, you will not be able to use NetQuestion to search any DB2 DBCS documentation you may have installed. This problem is due to the fact that the search CGI used by other NetQuestion-exploiting products is incompatible with the search CGI shipped with DB2 UDB Version 5.2 for AIX.
To correct this problem, you must replace the search CGI in /var/docsearch/cgi-bin/ and /usr/docsearch/cgi-bin/ with an updated search CGI by doing the following:
ftp.software.ibm.com/ps/products/db2/info
to /var/docsearch/cgi-bin and /usr/docsearch/cgi-bin. This will overwrite the existing db2srdbcs files in these directories.
/usr/IMNSearch/bin/imqss -stop dbcshelp
/usr/IMNSearch/bin/imqss -start dbcshelp
The following SCOhelp information is available:
You can access SCOhelp in one of the following ways:
The "UnixWare 7 Update 1 (7.0.1)" CD-ROM provides, among other things, fixes for the Spanish Language Extensions (esle). The following DB2 procedures will not work when LANG is set to "es", unless the UnixWare 7 Update 1 is installed:
db:234:once:/etc/rc.db2 > /dev/console 2>&1 # Autostart DB2 Services
If the default value of LANG is "es", this line should be commented out from the /etc/inittab file. Once the reboot has completed, log on as root, set LANG to "en" or "C", then run /etc/rc.db2. Set LANG back to "es" once DB2 has started.
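A sketch of that sequence in the shell, run as root after the reboot:

LANG=en
export LANG
/etc/rc.db2        # start DB2 services manually
LANG=es
export LANG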
Note that once UnixWare 7 Update 1 has been installed to fix the problems with the Spanish Language Extensions, these problems will disappear. You should then uncomment the line in the /etc/inittab file, so that DB2 services will autostart on the next reboot:
db:234:once:/etc/rc.db2 > /dev/console 2>&1 # Autostart DB2 Services
If you want to change the language SCOhelp returns by default on the server, edit /usr/ns-home/httpd-scohelphttp/config/lang.conf and set DocumentRoot to the desired default language. For instance:
DocumentRoot /usr/lib/scohelp/fr
Once this change is made, you must restart the web server, with the command:
/usr/ns-home/httpd-scohelphttp/restart
If this does not work, try stopping the web server, then restarting it, with the commands:
/usr/ns-home/httpd-scohelphttp/stop /usr/ns-home/httpd-scohelphttp/start
Different clients can have SCOhelp return information in different languages.
The SCOhelp server uses the *httpAcceptLanguage resource to determine which language to use when a request is received. You can set *httpAcceptLanguage to the required language as follows:
*httpAcceptLanguage: fr, en
If English is still displayed by SCOhelp, even after *httpAcceptLanguage is set, try removing English from the list altogether.
Note that there is no search in Simplified Chinese and Brazilian Portuguese.
You can select multiple languages to search by clicking on the "Options" button on the SCOhelp search panel. The list of installed languages is displayed; select the set of languages to search.
There is no direct link from the Web Control Center help to the DB2 books (displayed using SCOhelp). Use one of the methods described in 22.1 , How to Access SCOhelp to run SCOhelp and access the DB2 books.
There is no search in the SCO DB2 Web Control Center help.
The "Support Level Supplement (SLS) PTF7004A" is available from the SCO ftp site at: ftp://ftp.sco.com/SLS/ Download the files ptf7004a.Z and ptf7004a.txt
PTF7004A "corrects problems with the configuration of the Verity search engine within SCOhelp and with the localization of various utilities and SCOadmin managers." You are required to install this SLS if you will be viewing the DB2 documentation. See the ptf7004a.txt file for more details.
Once the PTF7004A is installed, and before you use SCOhelp in Simplified Chinese or Brazilian Portuguese, you must edit the file /usr/ns-home/httpd-scohelphttp/config/lang.conf and make the following changes:
LocalizedDocRoot pt_PT.ISO8859-1 /usr/lib/scohelp/pt_BR
LocalizedDocRoot zh_ZH.ISO8859-1 /usr/lib/scohelp/zh_CN
LanguageAlias zh zh_ZH.ISO8859-1
This value then matches the line:
LocalizedDocRoot zh_ZH.ISO8859-1 /usr/lib/scohelp/zh_CN
Once these changes are made restart (or stop and start) the web server, as described in 22.2.2 , Changing Default SCOhelp Language on the Server.
SCOhelp stores all of its search index files in the /usr/lib/scohelp/<lang>/_SearchIndex/parts directory.
These files will continue to grow in size as you install products, but will not get smaller as you remove products (using the current UnixWare 7 utilities). This is the case for DB2 as well as other products.
If you delete these files, your SCOhelp search will no longer work properly.