
Hardware Planning and Control Guide


CSM hardware control

IBM Cluster Systems Management (CSM) for Linux hardware control software provides remote power and remote console functions for CSM cluster nodes from a single point of control. CSM allows the system administrator to control cluster nodes remotely through access to the cluster management server. CSM for Linux is currently only available as preinstalled software on IBM xSeries(TM) 330 and 342 hardware.

CSM hardware control includes the remote power and remote console functions. The remote power rpower command is used to power on, power off, reboot, and query power status of cluster nodes. The remote console rconsole command is used to open a console to a cluster node from the management server. The rpower and rconsole commands must be run from the management server, and either command can be run to affect multiple nodes simultaneously. See the command man pages or the IBM CSM for Linux: Administration Guide for detailed command usage information.
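
For example, the following commands illustrate typical usage from the management server. The node names are hypothetical and the exact flags can vary by CSM release, so see the rpower and rconsole man pages for the syntax at your level:

    # Query the power status of every node in the cluster
    rpower -a query

    # Power on two specific nodes, then reboot another
    rpower -n clsn01,clsn02 on
    rpower -n clsn05 reboot

    # Open a remote console window to a node
    rconsole -n clsn01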

CSM hardware control functions depend on the specific hardware, software, network, and configuration requirements described in this book. The requirements for remote power are separate and distinct from those for remote console. See CSM hardware and network requirements for a description of the remote power and remote console hardware and network requirements. See Remote power software and configuration and Remote console software and configuration for descriptions of the remote power and remote console software and configuration requirements.

Linux clusters without the hardware, software, network, or configuration required to use remote power and remote console functions can still have CSM installed on some or all cluster nodes. However, on such clusters the rpower and rconsole commands may be inoperable or provide only limited function.


CSM hardware and network requirements

CSM hardware control depends on the specific hardware and network requirements described in this book. A management server is the single point of control for a CSM cluster, and a system administrator runs most CSM commands from the management server using the command line. The management server can be connected to the cluster nodes and external networks using various configurations of IBM and non-IBM hardware and software that meet the CSM architecture requirements described in this book.

The rpower command communicates with a node's hardware control point to request node power status, reboot, and power on and off functions. Hardware control points should be on the management virtual LAN (VLAN) and connected to the hardware that ultimately controls the power functions. The IBM Remote Supervisor Adapter (RSA) is currently the only type of management processor adapter (MPA) hardware control point that can be used with CSM.

The rconsole command communicates with console server hardware to open a console window for a node on the CSM management server. Console servers must be on the management VLAN and connected to node serial ports. This out-of-band network configuration allows a remote console to be opened from the management server even when the cluster network is inaccessible. For example, if the cluster VLAN is offline, remote console can still reach the target node and open a console window. Console server types that can currently be used with CSM are the Equinox ESP-8 and ESP-16 serial hubs, Equinox ELS-16 II terminal server, Computone IntelliServer RCM8, and Avocent CPS1600.

Figure 1 shows a network partitioned into three virtual LANs (VLANs): public, cluster, and management. The public VLAN connects the cluster nodes and management server to the site network. Applications are accessed and run on cluster nodes over the public VLAN. The public VLAN can be connected to nodes through a second Ethernet adapter in each node, or by routing to each node through the Ethernet switch. The cluster VLAN connects nodes to each other and to the management server. Installation and CSM administration tasks such as running dsh are done on the cluster VLAN. Host names and attribute values for nodes on the cluster VLAN are stored in the CSM database. The rpower and rconsole commands are run on the management VLAN (Mgt VLAN), which connects the management server to the cluster hardware.
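
For example, routine administration over the cluster VLAN might use dsh to run a command on every node defined in the CSM database. The invocations below are a sketch; the node names are hypothetical and the exact flags can vary by CSM release, so see the dsh man page for the syntax at your level:

    # Run a command on all nodes in the cluster over the cluster VLAN
    dsh -a uptime

    # Run a command on selected nodes only (hypothetical node names)
    dsh -n clsn01,clsn02 date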

For optimal security, the management VLAN must be restricted to the remote hardware being controlled (hardware control points and remote terminal servers) and the management server. User access to the management server should be restricted to root and admin users only. Routing between the management VLAN and cluster or public VLANs could compromise security on the management VLAN.

The management server in Figure 1 connects to the management and cluster VLANs through Ethernet adapters. The terminal server, an Equinox Serial Provider (ESP) in this example, connects to the management VLAN through its Ethernet adapter, and to the cluster nodes through their serial (COM) ports. An ESP-16 can connect up to 16 nodes; other terminal servers may have different capacities. The nodes must be connected to the cluster VLAN through their first Ethernet adapters (eth0), and directly or indirectly to an IBM Remote Supervisor Adapter (RSA). The management VLAN connects to the RSA in select nodes; one RSA is required for every 10 nodes. An RSA connects to its node's internal service processor (ISP) port, and up to nine node ISP ports can be daisy-chained from the RSA ISP port. Configuration for a public VLAN is flexible and can be defined by the system administrator.

Note:
For remote console to work correctly, CSM currently requires that COM port B be used for the serial connection on both the x330 and x342 cluster nodes. On x330 nodes, the cable must be switched from the default COM port A to COM port B before installing the node. On x342 nodes, the external serial cable must be connected to COM port B.

The following diagram (Figure 1) shows the hardware and networking configuration required for using CSM hardware control with IBM xSeries 330 nodes. Figure 3 shows the required configuration for IBM xSeries 342 nodes. See CSM hardware node attributes for example node attribute definitions corresponding to Figure 1.

Notes:

  1. There are two MP ports on x330 nodes. The daisy-chain connection is set up so that MP Port A (on clsn01) is connected to the MPA PCI (on mgtn03) through an external dongle, MP Port B (on clsn01) is connected to MP Port A (on clsn02), and so on. This leaves MP Port B on the last node (clsn10) open.

  2. ESP console servers are physically numbered 1-16, but the corresponding ttys created by ESP software are logically numbered 0-f.

Figure 1. CSM hardware control hardware and networking configuration for IBM xSeries 330 nodes


The following diagram (Figure 2) shows the relationship between the CSM node database attributes and the internal hardware names used in Figure 1. For remote power and remote console to work as expected, this matching of database attribute names to the internal hardware values must be correct for all management processors (MPs), management processor adapters (MPAs), and console serial providers in the CSM cluster.
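
One way to check this matching is to list the hardware control attributes stored in the CSM database for each node and compare them with the labels shown in Figure 2. The example below is a sketch; the node name is hypothetical and the output format varies by CSM release:

    # List all CSM database attributes for node clsn01, including
    # HWControlPoint, HWControlNodeId, ConsoleServerName,
    # ConsoleServerNumber, ConsoleMethod, and ConsolePortNum
    lsnode -l clsn01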

Figure 2. CSM hardware control database attributes for IBM xSeries 330 nodes


The following diagram (Figure 3) shows the hardware and networking configuration required for using CSM hardware control with IBM xSeries 342 nodes. An xSeries 342 node can be connected to the management VLAN using an MP or an MPA. Each MPA can have its own connection to the management VLAN (mgtn03 - mgtn12), or up to nine MPAs can be daisy-chained from one MPA that is connected to the management VLAN (mgtn13). See Figure 4 for a more detailed view of three nodes from Figure 3.

Note:
ESP console servers are physically numbered 1-16, but the corresponding ttys created by ESP software are logically numbered 0-f.

Figure 3. CSM hardware control hardware and networking configuration for IBM xSeries 342 nodes


The following diagram (Figure 4) is a detailed view of three nodes from Figure 3. The diagram shows the relationship between the CSM node database attributes and the internal hardware names used in Figure 3. For remote power and remote console to work as expected, this matching of database attribute names to the internal hardware values must be correct for all management processors (MPs), management processor adapters (MPAs), and console serial providers in the cluster.

Figure 4. CSM hardware control database attributes for IBM xSeries 342 nodes


Hardware and networking requirements

CSM for Linux is an integral part of the IBM e(logo)server Cluster 1300 platform for deploying applications on Linux clusters. CSM also runs on IBM e(logo)server xSeries models 330 and 342. For CSM hardware control to function properly, the cluster must meet specific hardware and networking requirements.

For specific hardware control point and console server product details, see the documentation shipped with the hardware, or the product Web site URLs listed in Related information.

IBM supports a CSM cluster with up to 256 nodes. IBM suggests a networking configuration in which each console server is connected to the management VLAN through its Ethernet port, and to 16 or fewer nodes through the nodes' serial (COM) ports. IBM also suggests that each IBM RSA PCI be connected to the management VLAN through its Ethernet port. However, to conserve IP addresses, one IBM RSA PCI can be connected to the management VLAN with up to nine management processors daisy-chained from it. For optimal security, CSM cluster hardware control functions must be restricted to users with root access by isolating the management server network.

CSM hardware node attributes

CSM hardware planning is facilitated by filling out a table describing all cluster hardware node attribute values; see Table 1 for one example. The cluster used in this hardware node attributes example includes 20 nodes; attribute values for the first 16 nodes correspond to the hardware and network configuration shown in Figure 1. See Remote power software and configuration and Remote console software and configuration for detailed descriptions of the attributes described in Table 1. See the IBM CSM for Linux: Software Planning and Installation Guide for a blank node attributes planning worksheet.

Note:
IBM suggests changing the default hardware user IDs and passwords that are shipped with the preinstalled CSM cluster nodes. The hardware control node IDs should each be set to a unique value, such as the short host name of the node. Use the IBM Universal Manageability Services tool (see Related information for the URL) to change hardware control node IDs and passwords to unique values for all nodes. Then use the systemid command to record the changed ID and password information in the CSM database.
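
For example, after the hardware control user ID and password have been changed, the new values might be recorded as follows. This is a sketch that reuses the hardware control point host name from Table 1; the user ID is hypothetical, and the systemid command prompts for the password rather than accepting it on the command line:

    # Store the user ID and password used to access the hardware
    # control point mgtn03.pok.ibm.com in the CSM database
    systemid mgtn03.pok.ibm.com USERID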


Table 1. Hardware node attributes example

Hostname HWControlPoint PowerMethod HWControlNodeId ConsoleServerName ConsoleServerNumber ConsoleMethod ConsolePortNum
clsn01.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn01 mgtn02.pok.ibm.com 1 esp 0
clsn02.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn02 mgtn02.pok.ibm.com 1 esp 1
clsn03.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn03 mgtn02.pok.ibm.com 1 esp 2
clsn04.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn04 mgtn02.pok.ibm.com 1 esp 3
clsn05.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn05 mgtn02.pok.ibm.com 1 esp 4
clsn06.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn06 mgtn02.pok.ibm.com 1 esp 5
clsn07.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn07 mgtn02.pok.ibm.com 1 esp 6
clsn08.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn08 mgtn02.pok.ibm.com 1 esp 7
clsn09.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn09 mgtn02.pok.ibm.com 1 esp 8
clsn10.pok.ibm.com mgtn03.pok.ibm.com netfinity clsn10 mgtn02.pok.ibm.com 1 esp 9
clsn11.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn11 mgtn02.pok.ibm.com 1 esp a
clsn12.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn12 mgtn02.pok.ibm.com 1 esp b
clsn13.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn13 mgtn02.pok.ibm.com 1 esp c
clsn14.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn14 mgtn02.pok.ibm.com 1 esp d
clsn15.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn15 mgtn02.pok.ibm.com 1 esp e
clsn16.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn16 mgtn02.pok.ibm.com 1 esp f
clsn17.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn17 mgtn06.pok.ibm.com n/a cps 1
clsn18.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn18 mgtn06.pok.ibm.com n/a cps 2
clsn19.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn19 mgtn06.pok.ibm.com n/a cps 3
clsn20.pok.ibm.com mgtn04.pok.ibm.com netfinity clsn20 mgtn06.pok.ibm.com n/a cps 4
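
As a sketch of how planned values such as those in Table 1 might be recorded in the CSM database, the following chnode invocation sets the hardware control and console attributes for one node. The -n flag and the attribute list shown here are assumptions based on the attribute names in Table 1; check the chnode man page for the exact syntax at your CSM level:

    # Set hardware control and console attributes for clsn01
    # (flags and attribute names assumed; verify against the chnode man page)
    chnode -n clsn01.pok.ibm.com HWControlPoint=mgtn03.pok.ibm.com \
        PowerMethod=netfinity HWControlNodeId=clsn01 \
        ConsoleServerName=mgtn02.pok.ibm.com ConsoleServerNumber=1 \
        ConsoleMethod=esp ConsolePortNum=0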

