Remote Control Guide and Reference


Cluster Systems Management Remote Control Overview

IBM Cluster Systems Management for Linux (CSM) Remote Control software allows a system administrator to control nodes in a Linux cluster from a remote location. This essentially frees the CSM cluster from any restrictions associated with geographic node location. The two main functions for CSM Remote Control are the remote power and remote console commands. The rpower command allows an administrator to query, power on, power off, and reset remote nodes. The rconsole command allows an administrator to open a console for a remote node. The CSM administrator runs the rpower and rconsole commands from a control node called the management server. See the man pages or the IBM Cluster Systems Management for Linux Technical Reference for detailed command usage information.
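For example, an administrator working on the management server might check and change the power state of a node, and then open its console, with commands along the following lines. This is an illustrative sketch only: the node name is taken from the example configuration later in this guide, and the exact flags and operands are documented in the rpower and rconsole man pages.

    # query the current power state of one node (illustrative syntax)
    rpower -n clsn01.pok.ibm.com query

    # power the node off and back on
    rpower -n clsn01.pok.ibm.com off
    rpower -n clsn01.pok.ibm.com on

    # open a remote console for the same node
    rconsole -n clsn01.pok.ibm.com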


Hardware Configuration

CSM Remote Control software depends on the cluster hardware configuration. For IBM Netfinity(R) and xSeries(TM) clusters, the CSM hardware control point and internal service processor (ISP) database attribute values must match the Remote Supervisor adapter (RSA) host names and ISP text IDs, respectively. This ensures that the remote control software understands the physical connections and can properly control the target nodes.

The remote power command, rpower, depends on the physical cabling of the Remote Supervisor adapters and, when applicable, the ISPs they control. It also depends on the adapter host names and the ISP text IDs. The remote console command, rconsole, depends on the cabling of the remote console server and the cabling description in the CSM database. These details are explained in the remote power and remote console sections to help you understand the interdependencies between the hardware and software. With the correct definitions, the rpower and rconsole commands target the intended node or node group. (You can control hardware other than the supported IBM hardware by writing custom power and console methods, as discussed in later sections.)
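As an illustration of these dependencies, consider the first node in the example configuration described later in this guide, clsn01.pok.ibm.com. Its remote control attributes in the CSM database must line up with the corresponding hardware names roughly as follows; the values shown here are taken from the example Node Attributes Table, and your own values will differ.

    HWControlPoint    = mgtn03.pok.ibm.com   # host name of the RSA that controls this node
    SvcProcName       = node01               # text ID of the node's internal service processor
    ConsoleServerName = mgtn02.pok.ibm.com   # host name of the ESP terminal server
    ConsolePortNum    = 0                    # ESP port cabled to the node's serial (COM) port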

CSM for Linux is an integral part of the IBM e(logo)server Cluster 1300 platform for deploying Linux applications that require a cluster. The IBM e(logo)server Cluster 1300 includes the hardware described in the following sections.


Networking Configuration

For security reasons, the networking configuration must separate the remote control functions, rpower and rconsole, from other cluster functions. An efficient way to do this is to create a virtual LAN (VLAN) for the remote control functions that is separate and distinct from the more general-purpose VLAN connecting the cluster's nodes. Optionally, the cluster VLAN can be isolated from the larger network. See the following sections for more details.

Management VLAN

The management VLAN connects the management server to the cluster's terminal server(s) and to the Remote Supervisor adapter(s) installed in some or all of the nodes. Because this is intended to be an isolated network, traffic flows unencrypted, using clear-text authentication. Access to the rpower and rconsole commands is limited to the root user on the management server. All other nodes have no access to the management VLAN.

Cluster VLAN

A cluster VLAN subdivides the Ethernet switch so that each node can use the cluster VLAN for network I/O (NFS, tftp, ftp) and job control traffic.

Public VLAN

Each node connects to a public VLAN to allow authorized access to the nodes in the cluster. You may choose to combine the cluster VLAN and public VLAN.


Hardware and Networking Configuration Diagrams

The following diagram (Figure 1) shows the hardware and networking configuration required for using CSM remote control with IBM xSeries 330 nodes. (Figure 3 shows the configuration for IBM xSeries 342 nodes.) See Node Attributes Table for example node attribute definitions corresponding to Figure 1.

In Figure 1, the Management Server connects to the Management VLAN and the Cluster VLAN through Ethernet adapters. The terminal server, an Equinox Serial Provider (ESP) in this example, connects to the Management VLAN through its Ethernet adapter, and to the cluster nodes through their serial (COM) ports as shown. (An ESP-16 can connect up to 16 nodes. Other terminal servers may have different capacities.) The nodes must be connected to the Cluster VLAN through their Ethernet adapters, and directly or indirectly to an IBM Netfinity Remote Supervisor adapter (RSA). The Management VLAN connects to the RSA in select nodes. (One RSA is required for every 10 nodes.) The RSAs connect to their own node's ISP port, and up to 9 more node ISP ports are daisy-chained from there. Configuration for a Public VLAN is optional and can be defined by the system administrator.

Note:
Figures 1-4 assume an installation where eth0 is defined as the first Ethernet adapter in the system. IBM suggests using this type of installation, which can be achieved by connecting the eth0 adapter directly to the Cluster VLAN.

Figure 1. CSM Remote Control Hardware and Networking Configuration for IBM xSeries 330 Nodes


The following diagram (Figure 2) shows the relationship between the CSM node database attributes and the actual (internal) hardware names used in Figure 1. For remote power and remote console to work as expected, this matching of database attribute values to the internal hardware names must be correct for all RSAs, ISPs, and ESPs in the cluster.

Figure 2. CSM Remote Control Database Attributes for IBM xSeries 330 Nodes

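One way to check this matching is to list the remote control attributes that CSM has stored for a node and compare them with the names configured on the RSA, ISP, and ESP hardware. A minimal sketch, assuming the lsnode command's long-listing option; see the man page or the Technical Reference for the exact options and output format:

    # display the stored database attributes for one node (illustrative option)
    lsnode -l clsn01.pok.ibm.com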

The following diagram (Figure 3) shows the hardware and networking configuration required for using CSM remote control with IBM xSeries 342 nodes. This diagram shows two ways of cabling the RSAs. They can each have their own connection to the Management VLAN (mgtn03.pok.ibm.com - mgtn12.pok.ibm.com), or a number of them can be daisy-chained from one RSA connected to the Management VLAN (mgtn13.pok.ibm.com). See Figure 4 for a more detailed view of a few nodes from Figure 3.

Figure 3. CSM Remote Control Hardware and Networking Configuration for IBM xSeries 342 Nodes


The following diagram (Figure 4) is a more detailed view of a few nodes from Figure 3. The diagram shows the relationship between the CSM node database attributes and the actual (internal) hardware names used in Figure 3. For remote power and remote console to work as expected, this matching of database attribute values to the internal hardware names must be correct for all RSAs and ESPs in the cluster.

Figure 4. CSM Remote Control Database Attributes for IBM xSeries 342 Nodes



Node Attributes Table

For planning purposes, it is helpful to fill out a table describing all of the nodes' attributes. In the following example, the cluster has 20 nodes. The attributes correspond to the hardware and networking configuration shown in Figure 1. A blank template that you can fill out follows the example.

Note:
The console port number (ConsolePortNum) is the physical port on the console server hardware to which the node's serial port is connected.

Table 2. Node Attributes Table: Example
Hostname HWControlPoint PowerMethod SvcProcName ConsoleServerName ConsoleServerNumber ConsoleMethod ConsolePortNum HWType InstallMethod
clsn01.pok.ibm.com mgtn03.pok.ibm.com netfinity node01 mgtn02.pok.ibm.com 1 esp 0 netfinity csmonly
clsn02.pok.ibm.com mgtn03.pok.ibm.com netfinity node02 mgtn02.pok.ibm.com 1 esp 1 netfinity csmonly
clsn03.pok.ibm.com mgtn03.pok.ibm.com netfinity node03 mgtn02.pok.ibm.com 1 esp 2 netfinity csmonly
clsn04.pok.ibm.com mgtn03.pok.ibm.com netfinity node04 mgtn02.pok.ibm.com 1 esp 3 netfinity csmonly
clsn05.pok.ibm.com mgtn03.pok.ibm.com netfinity node05 mgtn02.pok.ibm.com 1 esp 4 netfinity csmonly
clsn06.pok.ibm.com mgtn03.pok.ibm.com netfinity node06 mgtn02.pok.ibm.com 1 esp 5 netfinity kickstart
clsn07.pok.ibm.com mgtn03.pok.ibm.com netfinity node07 mgtn02.pok.ibm.com 1 esp 6 netfinity kickstart
clsn08.pok.ibm.com mgtn03.pok.ibm.com netfinity node08 mgtn02.pok.ibm.com 1 esp 7 netfinity kickstart
clsn09.pok.ibm.com mgtn03.pok.ibm.com netfinity node09 mgtn02.pok.ibm.com 1 esp 8 netfinity kickstart
clsn10.pok.ibm.com mgtn03.pok.ibm.com netfinity node10 mgtn02.pok.ibm.com 1 esp 9 netfinity kickstart
clsn11.pok.ibm.com mgtn04.pok.ibm.com netfinity node01 mgtn02.pok.ibm.com 1 esp a netfinity csmonly
clsn12.pok.ibm.com mgtn04.pok.ibm.com netfinity node02 mgtn02.pok.ibm.com 1 esp b netfinity csmonly
clsn13.pok.ibm.com mgtn04.pok.ibm.com netfinity node03 mgtn02.pok.ibm.com 1 esp c netfinity csmonly
clsn14.pok.ibm.com mgtn04.pok.ibm.com netfinity node04 mgtn02.pok.ibm.com 1 esp d netfinity csmonly
clsn15.pok.ibm.com mgtn04.pok.ibm.com netfinity node05 mgtn02.pok.ibm.com 1 esp e netfinity csmonly
clsn16.pok.ibm.com mgtn04.pok.ibm.com netfinity node06 mgtn02.pok.ibm.com 1 esp f netfinity kickstart
clsn17.pok.ibm.com mgtn04.pok.ibm.com netfinity node07 mgtn03.pok.ibm.com 2 esp 0 netfinity kickstart
clsn18.pok.ibm.com mgtn04.pok.ibm.com netfinity node08 mgtn03.pok.ibm.com 2 esp 1 netfinity kickstart
clsn19.pok.ibm.com mgtn04.pok.ibm.com netfinity node09 mgtn03.pok.ibm.com 2 esp 2 netfinity kickstart
clsn20.pok.ibm.com mgtn04.pok.ibm.com netfinity node10 mgtn03.pok.ibm.com 2 esp 3 netfinity kickstart
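Each row of the example table corresponds to one node's CSM database attributes. Written out as attribute=value pairs, the first row looks like the following; the attribute names match the column headings above, and the definenode and chnode man pages (or the Technical Reference) describe the exact command syntax for entering them.

    Hostname            = clsn01.pok.ibm.com
    HWControlPoint      = mgtn03.pok.ibm.com
    PowerMethod         = netfinity
    SvcProcName         = node01
    ConsoleServerName   = mgtn02.pok.ibm.com
    ConsoleServerNumber = 1
    ConsoleMethod       = esp
    ConsolePortNum      = 0
    HWType              = netfinity
    InstallMethod       = csmonly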


Table 3. Node Attributes Table: Template
Hostname HWControlPoint PowerMethod SvcProcName ConsoleServerName ConsoleServerNumber ConsoleMethod ConsolePortNum HWType InstallMethod