Administration Guide
To assist in planning for the installation of HACMP ES with DB2 UDB, a
step-by-step overview of the installation and migration processes is presented
here.
When planning for and implementing HACMP ES in an environment where you
have not installed HACMP before, consider the following tasks:
- Install the AIX operating system on each of the SP nodes according to the
SP Installation and Administration Guides. Ensure proper paging space
is available on both the control workstation and each of the SP nodes.
Also ensure that the switch configuration has been considered and implemented,
along with any other modifiable configuration parameters. In addition, any SP
monitoring (Perspectives) you intend to use should be put in place.
Ensure that the SP dsh, pcp, and pexec commands work.
- Design your database layout. At a minimum, this should include the
number of nodes to be used, the mapping of DB2 database partitions to physical
nodes, the disk requirements per node and partition, and table space
considerations. You should also decide who the main DB2 instance
owner will be and what access authorization this and other users will
require.
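The partition-to-node mapping from this design can be recorded in the instance's db2nodes.cfg file. A minimal sketch for a four-partition layout follows; the host names are hypothetical:

```
0 sp3n03 0
1 sp3n04 0
2 sp3n05 0
3 sp3n06 0
```

Each line gives the database partition number, the host name, and the logical port on that host.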
- Plan your external SSA disk configuration including redundant adapters,
mirrored disks, and the twin-tailing of disks.
- Using your database layout and SSA configuration, complete the HACMP
worksheets found in the HACMP Planning, Installation, and Administration
Guides. Using these worksheets, you should be able to complete the
worksheets later in this document.
- Implement your external SSA disk configuration. Make sure microcode
levels are consistent across all drives, and use the Maymap utility to
validate and fill in any gaps in your worksheets.
- Install DB2 UDB EEE on each SP node.
- Install HACMP ES on each SP node.
- Install the DB2 UDB EEE HACMP ES on SP Package using the
db2_inst_ha command.
- Create the DB2 main instance user and validate that it can access all
nodes. This user is not highly available at this point; it can
temporarily be an SP user on the SP control workstation.
- Create your DB2 instance and database. Ensure the instance is operating by
using the db2start command, then stop it with db2stop before proceeding
to the next step.
- If you wish to populate or load the database before adding HACMP, do
this now.
- Configure the HACMP ES topology and resource groups on the SP nodes according
to the HACMP worksheets and the information in this document.
- Beginning with your NFS server node for the DB2 main instance user, change
this user (by modifying /etc/security/user and
/etc/passwd on all nodes) in accordance with what is specified in
this document. This user will become a highly available NFS user, and
this node and its backup will update /etc/exports. All nodes
will be able to mount this directory using NFS (with an entry in
/etc/filesystems on each node) through the switch alias IP
addresses.
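On each client node, the NFS mount described above corresponds to a stanza in /etc/filesystems. A sketch, assuming the server alias nfs_server and the /dbmnt and /dbi paths used in the sample worksheets later in this document:

```
/dbi:
        dev       = /dbmnt
        vfs       = nfs
        nodename  = nfs_server
        mount     = true
        options   = hard,bg,intr,rw
```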
- "Tar" the home directory of the main instance user and "un-tar"
the home directory in the new location.
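As a sketch of the tar and untar step, using temporary stand-in directories rather than the real home and NFS paths:

```shell
# Sketch: relocate the instance user's home directory by archiving it and
# extracting it at the new location. Paths here are temporary stand-ins for
# the real home (e.g. /home/pwq) and the NFS-backed home (e.g. /dbi/pwq).
OLD_HOME=$(mktemp -d)/pwq
NEW_HOME=$(mktemp -d)/dbi/pwq

# Populate a fake home directory for the demonstration.
mkdir -p "$OLD_HOME/sqllib"
echo "profile" > "$OLD_HOME/.profile"

# Archive the directory contents and extract them, preserving permissions.
mkdir -p "$NEW_HOME"
( cd "$OLD_HOME" && tar -cf - . ) | ( cd "$NEW_HOME" && tar -xpf - )

ls "$NEW_HOME"
```

Piping tar to tar this way also carries hidden files such as .profile, which a plain cp of `*` would miss.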
- Create an NFS filesystem on each of the SP nodes to mount the new main
instance home directory.
- Start HACMP on the NFS server node. Verify that it comes up
successfully by investigating /tmp/hacmp.out. The
ha_mon command can be used to monitor this file as it is
written.
- Bring up the other nodes one at a time, verifying each successful
completion by examining /tmp/hacmp.out. The
ha_mon command can be used to monitor this file as it is
written.
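The checks against /tmp/hacmp.out can be scripted. The sketch below uses simplified, illustrative log lines; the real file's format is richer, and ha_mon is the tool for watching it live:

```shell
# Sketch: count EVENT START vs. EVENT COMPLETED lines in a (simplified,
# illustrative) copy of hacmp.out to spot events that never finished.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
EVENT START: node_up sp3n03
EVENT COMPLETED: node_up sp3n03
EVENT START: node_up_complete sp3n03
EVENT COMPLETED: node_up_complete sp3n03
EOF

started=$(grep -c '^EVENT START' "$LOG")
completed=$(grep -c '^EVENT COMPLETED' "$LOG")
if [ "$started" -eq "$completed" ]; then
  echo "all events completed"
else
  echo "incomplete events: $((started - completed))"
fi
```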
- Set up the optional monitoring through Perspectives and Problem
Management.
- Validate failover functionality by simulating a concurrent
maintenance action on each node. The ha_cmd nodenum TAKE command can
be used to stop HACMP gracefully with takeover. Verify that the takeovers
and reintegrations succeed by examining
/tmp/hacmp.out and your monitoring tools.
If you are migrating from a non-HACMP installation to one with HACMP, review
the step-by-step overview that follows:
- Convert your existing external disks to a highly available, twin-tailed,
mirrored configuration. Add any extra hardware and disks needed to achieve
this configuration, remembering that the names of logical volumes on
different nodes must be unique when the disks are twin-tailed.
This applies to volume groups, logical volumes, and filesystems.
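A quick way to check the uniqueness requirement is to compare the name lists from your worksheets. The names below are illustrative:

```shell
# Sketch: twin-tailed volume groups are imported on both nodes of a pair,
# so logical volume (and volume group, filesystem) names must not collide
# across the two nodes. These name lists are illustrative.
node5_lvs="hlv500 hlog501 hlv501 hlv502"
node6_lvs="hlv600 hlog601 hlv601 hlv602"

# Print one name per line, sort, and report any name that appears twice.
dups=$(printf '%s\n' $node5_lvs $node6_lvs | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "no duplicate logical volume names"
else
  echo "duplicate names found: $dups"
fi
```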
- Complete the HACMP planning and the related worksheets. Also,
complete the worksheets in this document.
- Implement your external SSA disk configuration changes. Ensure
microcode levels are consistent across all drives and use the Maymap utility
to validate and eliminate any gaps in the worksheets.
Note: SSA disks in a RAID5 configuration are supported. Two SSA adapters in
the same RAID loop is the only configuration permitted. For an HACMP
configuration with the RAID disks twin-tailed, only one adapter per node is
supported. In this configuration, the adapter is a single point of
failure for access to the disks, and extra configuration is recommended to
detect an adapter outage and promote it to an HACMP failover event.
AIX error notification is the simplest way to configure a node for failover
should the SSA adapter fail. Refer to HACMP for
AIX, V4.2.2, Enhanced Scalability Installation and
Administration Guide for more information on AIX error notification.
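One way to implement this (a sketch only; the resource name and notify method are assumptions for illustration, not values from this document) is an errnotify object added to the ODM with odmadd:

```
errnotify:
        en_name = "ssa_adapt_fail"
        en_persistenceflg = 1
        en_class = "H"
        en_type = "PERM"
        en_resource = "ssa0"
        en_method = "/usr/sbin/cluster/utilities/clstop -y -N -gr"
```

With a stanza like this, a permanent hardware error logged against the ssa0 adapter would trigger the method, stopping HACMP gracefully with takeover and so promoting the adapter outage to a failover event.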
- Install HACMP ES on each SP node.
- Install the "DB2 UDB EEE HACMP ES on SP" Package using the
db2_inst_ha command.
- Configure the HACMP ES topology and resource groups on the SP nodes according
to the HACMP worksheets and the information in this document.
- Beginning with your NFS server node for the DB2 main instance user, change
this user (by modifying /etc/security/user and
/etc/passwd on all nodes) in accordance with what is specified in
this document. This user will become a highly available NFS user, and
this node and its backup will update /etc/exports. All nodes
will be able to mount this directory using NFS (with an entry in
/etc/filesystems on each node) through the switch alias IP
addresses.
- "Tar" the home directory of the main instance user and "un-tar"
the home directory in the new location.
- Create an NFS filesystem on each of the SP nodes to mount the new main
instance home directory.
- Start HACMP on the NFS server node. Verify that it comes up
successfully by investigating /tmp/hacmp.out. The
ha_mon command can be used to monitor this file as it is
written.
- Bring up the other nodes one at a time, verifying each successful
completion by examining /tmp/hacmp.out. The
ha_mon command can be used to monitor this file as it is
written.
- Set up the optional monitoring through Perspectives and Problem
Management.
- Validate failover functionality by simulating a concurrent
maintenance action on each node. The ha_cmd nodenum TAKE command can
be used to stop HACMP gracefully with takeover. Verify that the takeovers
and reintegrations succeed by examining
/tmp/hacmp.out and your monitoring tools.
The worksheets below are designed to be used with the HACMP worksheets that
were filled out in preparation for your configuration.
In each case, a completed sample worksheet is shown first to give you an idea
of how to plan your configuration; a blank worksheet is then
provided for your use.
The database configuration on external disks documented in the first sample
worksheet is shown in the following figure. The statement used
to create the database was:
db2 create database pwq on /newdata
Both the external SSA adapters and the external SSA disks are mirrored, and
the logical volumes are twin-tailed, so there is no single point of failure.
The diagram is quite similar to the output of the maymap
command. Maymap is a utility, available through AIXTOOLS, that shows the
external SSA disk configuration. Use of this utility is recommended as
part of planning your setup.
Figure 62. Sample DB2 4-node Database External Disks Setup
Before you review the following table, you should have thoroughly
read the HACMP documentation regarding quorum settings on volume groups
and mirror write consistency settings on logical volumes. The
settings used for both will directly affect your availability and
performance, so ensure you review them and understand their
implications. The typical setting for both "quorum" and
"mirror write consistency" is "off".
Table 34. HACMP Volume Groups, Logical Volumes, and Filesystems

SP Node | Volume Group Name | PP Size (MB) | Logical Volume Name | # of PPs | Copies | hdisk list | Filesystem Mount Point (MB) | Filesystem Log logical volume | Node Description and backup | user owner of /dev logical device
3 | havg3 | 8 | hlv300 | 10 | 2 | hdisk1 hdisk5 | /newdata/pwq/NODE0003 | hlog301 | Catalog node mount point; node 4 | root *
3 | havg3 | 8 | hlog301 | 1 | 2 | hdisk1 hdisk5 | N/A | N/A | Catalog node jfslog; node 4 | root *
3 | havg3 | 8 | hlv301 | 10 | 2 | hdisk2 hdisk6 | N/A | N/A | Catalog node raw temp space; node 4 | pwq **
4 | havg4 | 8 | hlv400 | 10 | 2 | hdisk3 hdisk7 | /dbmnt | hlog401 | NFS server pwq home; node 3 | root *
4 | havg4 | 8 | hlog401 | 1 | 2 | hdisk3 hdisk7 | N/A | N/A | NFS server jfslog; node 3 | root *
5 | havg5 | 8 | hlv500 | 10 | 2 | hdisk1 hdisk9 | /newdata/pwq/NODE0005 | hlog501 | Dbnode5 mount point; node 6 | root *
5 | havg5 | 8 | hlog501 | 1 | 2 | hdisk1 hdisk9 | N/A | N/A | Dbnode5 jfslog; node 6 | root *
5 | havg5 | 8 | hlv501 | 10 | 2 | hdisk2 hdisk10 | N/A | N/A | Dbnode5 raw temp space; node 6 | pwq **
5 | havg5 | 8 | hlv502 | 100 | 2 | hdisk2 hdisk10 | N/A | N/A | Dbnode5 raw table space; node 6 | pwq **
5 | havg5 | 8 | halv503 | 100 | 2 | hdisk3 hdisk11 | N/A | N/A | Dbnode5 raw table space; node 6 | pwq **
5 | havg5 | 8 | halv504 | 100 | 2 | hdisk3 hdisk11 | N/A | N/A | Dbnode5 raw table space; node 6 | pwq **
5 | havg5 | 8 | halv505 | 100 | 2 | hdisk4 hdisk12 | /dbdata5 | hlog501 | Dbnode5 system table space; node 6 | root *
6 | havg6 | 8 | hlv600 | 10 | 2 | hdisk5 hdisk13 | /newdata/pwq/NODE0006 | hlog601 | Dbnode6 mount point; node 5 | root *
6 | havg6 | 8 | hlog601 | 1 | 2 | hdisk5 hdisk13 | N/A | N/A | Dbnode6 jfslog; node 5 | root *
6 | havg6 | 8 | hlv601 | 10 | 2 | hdisk6 hdisk14 | N/A | N/A | Dbnode6 raw temp space; node 5 | pwq **
6 | havg6 | 8 | hlv602 | 100 | 2 | hdisk6 hdisk14 | N/A | N/A | Dbnode6 raw table space; node 5 | pwq **
6 | havg6 | 8 | hlv603 | 100 | 2 | hdisk7 hdisk15 | N/A | N/A | Dbnode6 raw table space; node 5 | pwq **
6 | havg6 | 8 | hlv604 | 100 | 2 | hdisk7 hdisk15 | N/A | N/A | Dbnode6 raw table space; node 5 | pwq **
6 | havg6 | 8 | hlv605 | 100 | 2 | hdisk8 hdisk16 | /dbdata6 | hlog601 | Dbnode6 system table space; node 5 | root *

Notes:
- * jfs filesystem logical volumes and logs keep root permissions.
- ** raw database spaces get database user permissions on /dev raw file
entries (/dev/rxxxx).
Table 35. HACMP Volume Groups, Logical Volumes, and Filesystems (blank)

SP Node | Volume Group Name | PP Size (MB) | Logical Volume Name | # of PPs | Copies | hdisk list | Filesystem Mount Point (MB) | Filesystem Log logical volume | Node Description and backup | user owner of /dev logical device
(blank rows for your use)
Table 36. Planning HACMP NFS Server

SP Node | External Filesystem | Backup node | SP switch boot and service IP alias pairs | filesystem to mount (/etc/filesystems) | filesystem to specify as database home directory | addresses to export filesystem to (/etc/exports)
3 | /dbmnt | 4 | nfs_boot_3 nfs_client_3 | nfs_server:/dbmnt as /dbi | /dbi/pwq | nfs_boot_3 nfs_client_3 nfs_server_boot nfs_server nfs_boot_5 nfs_client_5 nfs_boot_6 nfs_client_6
4 | /dbmnt | 3 | nfs_server_boot nfs_server | nfs_server:/dbmnt as /dbi | /dbi/pwq | nfs_boot_3 nfs_client_3 nfs_server_boot nfs_server nfs_boot_5 nfs_client_5 nfs_boot_6 nfs_client_6
5 | N/A | N/A | nfs_boot_5 nfs_client_5 | nfs_server:/dbmnt as /dbi | /dbi/pwq | N/A
6 | N/A | N/A | nfs_boot_6 nfs_client_6 | nfs_server:/dbmnt as /dbi | /dbi/pwq | N/A

Notes:
- /etc/passwd must be the same on all nodes. It can be
synchronized from the control workstation.
- Ensure the external filesystem has the permissions of the database instance
owner.
- /etc/filesystems must specify the mount parameters
hard, bg, intr, and rw.
- /etc/exports will have
-root=ip1:ip2:ip3
only on the server and its backup.
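On the NFS server node and its backup, the resulting /etc/exports entry would look something like the following sketch, using the switch alias names from the sample worksheet in place of literal IP addresses:

```
/dbmnt -root=nfs_boot_3:nfs_client_3:nfs_boot_5:nfs_client_5:nfs_boot_6:nfs_client_6
```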
Table 37. Planning HACMP NFS Server (blank)

SP Node | External Filesystem | Backup node | SP switch boot and service IP alias pairs | filesystem to mount (/etc/filesystems) | filesystem to specify as database home directory | addresses to export filesystem to (/etc/exports)
(blank rows for your use)