This section describes how to determine whether the installation was successful and offers some suggestions for getting started with Cluster Systems Management (CSM). After installation has completed successfully, remote RMC and CSM commands are enabled. To verify that the installation was successful, follow the directions in the sections below.
To verify that dsh is working on all of the nodes, use dsh to run the date command on every node, as follows:
dsh -a date
A list of nodes with the date on each node is returned.
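For example, the returned list might look similar to the following, with each line prefixed by the node's host name (the node names and timestamps here are only illustrative):
clsn02.ppd.pok.ibm.com: Wed Apr 16 10:15:02 EDT 2003
clsn03.ppd.pok.ibm.com: Wed Apr 16 10:15:02 EDT 2003
clsn04.ppd.pok.ibm.com: Wed Apr 16 10:15:03 EDT 2003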
To see the status of all the nodes, you can use the monitorinstall, lsnode, or nodegrp command. To use the monitorinstall command, type:
monitorinstall
The monitorinstall command writes output similar to this:
Node                      Status
-----------------------------------------------
clsn02.ppd.pok.ibm.com    Installed
clsn03.ppd.pok.ibm.com    Installed
clsn04.ppd.pok.ibm.com    Installed
clsn05.ppd.pok.ibm.com    Installed
clsn06.ppd.pok.ibm.com    Installed
clsn07.ppd.pok.ibm.com    Installed
clsn08.ppd.pok.ibm.com    Not Installed
All nodes should be listed as Installed.
To see the installation status of all the nodes in your cluster using the lsnode command, type:
lsnode -a Mode
The result shows the Mode of each node. The Mode for all the nodes should be Managed.
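The exact layout varies by CSM release, but the output is typically one line per node, similar to this sketch (node names reused from the monitorinstall example above):
clsn02.ppd.pok.ibm.com:  Managed
clsn03.ppd.pok.ibm.com:  Managed
clsn04.ppd.pok.ibm.com:  Managed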
To see the installation status of a group of nodes using the nodegrp command, type:
nodegrp ManagedNodes
The command displays a list of all the nodes in the cluster that are considered managed (defined and installed).
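For example, for the cluster shown in the monitorinstall output above, the list would be expected to contain only the installed nodes, similar to this:
clsn02.ppd.pok.ibm.com
clsn03.ppd.pok.ibm.com
clsn04.ppd.pok.ibm.com
clsn05.ppd.pok.ibm.com
clsn06.ppd.pok.ibm.com
clsn07.ppd.pok.ibm.com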
To verify that RMC is working, use the lsnode command, as follows:
lsnode -H
This command retrieves attribute information from each node.
To verify the power status of the nodes (whether they are on or off), type:
rpower -a query
A list of nodes with their associated power state is returned.
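For example, the list might look similar to the following sketch; the exact column layout depends on your hardware and CSM release:
clsn02.ppd.pok.ibm.com on
clsn03.ppd.pok.ibm.com on
clsn08.ppd.pok.ibm.com off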
To verify whether the nodes are reachable, type:
lsnode -p
The ping status of the nodes is returned.
CSM provides a set of predefined conditions, responses, and dynamic node groups. To see a list of the predefined conditions, use the RSCT lscondition command, as follows:
export CT_MANAGEMENT_SCOPE=1
lscondition
To see a list of the predefined responses, use the RSCT lscondresp command, as follows:
export CT_MANAGEMENT_SCOPE=1
lscondresp
To see a list of the predefined dynamic node groups, use the nodegrp command, as follows:
nodegrp
To begin working with the Configuration File Manager, use the following example. The example sets up the cfmupdatenode command to run whenever the contents of /cfmroot change, and then places a new file, /cfmroot/tmp/myfile, into the repository so that it is distributed across all nodes in the cluster.
export CT_MANAGEMENT_SCOPE=1
startcondresp "CFMRootModTimeChanged" "CFMModResp"
mkdir /cfmroot/tmp
touch /tmp/myfile
cp /tmp/myfile /cfmroot/tmp/myfile
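If you do not want to wait for the condition and response to run, you can also push the files in /cfmroot out on demand with the cfmupdatenode command itself. As a sketch (verify the flag against your CSM release), distributing to all nodes looks like this:
cfmupdatenode -a    # distribute the files under /cfmroot to every node now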
For more information on the Configuration File Manager, see CSM for Linux: Administration Guide.
To try out monitoring, use the following example.
export CT_MANAGEMENT_SCOPE=1
startcondresp NodeReachability BroadcastEventsAnyTime
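To confirm that the association is now active, you can list it with the lscondresp command introduced earlier; for example (the exact state wording may differ by release):
lscondresp NodeReachability    # should show BroadcastEventsAnyTime associated with NodeReachability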
To create node groups and run commands across them, use the following example. The example creates two node groups named servers and admin:
nodegrp -a c5bn07,c5bn08,c5bn09,c5bn10,c5bn11 servers
nodegrp -a c5bn12,c5bn13 admin
nodegrp
The output is similar to:
admin
servers
nodegrp servers
The output is:
c5bn07.ppd.pok.ibm.com
c5bn08.ppd.pok.ibm.com
c5bn09.ppd.pok.ibm.com
c5bn10.ppd.pok.ibm.com
c5bn11.ppd.pok.ibm.com
To run a command on all of the nodes in the servers group and format the output by host, pipe dsh through dshbak:
dsh -N servers vmstat | dshbak
The output is similar to:
HOST: c5bn08.ppd.pok.ibm.com
----------------------------
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  4  1 442440 192576  56292 635808   0   0     0     0    1     1   0   0   0

HOST: c5bn09.ppd.pok.ibm.com
----------------------------
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  4  1 423692 214232  56240 615396   0   0     0     0    1     1   0   0   0

HOST: c5bn10.ppd.pok.ibm.com
----------------------------
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  4  1 405904 162404  56248 604424   0   0     0     0    4     1   0   0   1

HOST: c5bn11.ppd.pok.ibm.com
----------------------------
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
 0  4  1 443564 135240  56212 636256   0   0     0     0    4     1   0   0   1