Use this procedure to restore the system configuration only in the
following situations: the recovery procedure failed, or the data that is
stored on the volumes is not required.
Before you begin
This configuration restore procedure is designed to restore
information about your configuration, such as volumes, storage
pools, and nodes. The data that you wrote to the volumes is not restored.
To restore the data on the volumes, you must restore application data
from any application that uses the volumes on the clustered system
as storage separately. Therefore, you must have a backup
of this data before you follow the configuration recovery process.
If USB encryption was enabled on the system when its configuration was backed up, then
at least three USB flash drives must be present in the node canister
USB ports for the configuration restore to work. The three USB flash drives
must be inserted into the single node from which the configuration restore commands are run. Any USB
flash drives in other nodes (that might become part of the cluster) are ignored. If you are not
recovering a cloud backup configuration, the USB flash drives do not need to contain any keys; they
are used to generate new keys as part of the restore process. If you are recovering a cloud backup
configuration, the USB flash drives must contain the previous set of keys to allow the current
encrypted data to be unlocked and re-encrypted with the new keys.
About this task
You must regularly back up your configuration data and
your application data to avoid data loss. If a system is lost after
a severe failure occurs, both the system configuration and the application
data are lost. You must restore the system to the exact state it was
in before the failure, and then recover the application data.
During the restore process, the nodes and the storage enclosure are restored to the system, and
then the MDisks and arrays are re-created and configured. If multiple storage enclosures are
involved, the arrays and MDisks are restored on the proper enclosures based on the enclosure
IDs.
Important: - For Storwize® V3700 systems that contain nodes that are
attached to external controllers virtualized by iSCSI, all nodes must be added into the system
before you restore your data. Additionally, the system cfgportip settings and
iSCSI storage ports must be manually reapplied before you restore your data. See step
10.
- For VMware vSphere Virtual
Volumes (sometimes
referred to as VVols) environments, after a T4 restoration, some of the Virtual Volumes configuration steps are
already completed: metadatavdisk created, usergroup and user created, adminlun hosts created.
However, the user must then complete the last two configuration steps manually (creating a storage
container on IBM® Spectrum Control Base
Edition and creating virtual
machines on VMware
vCenter). See Configuring Virtual Volumes.
If you do not understand the instructions to run the
CLI commands, see the command-line interface reference information.
To
restore your configuration data, follow these steps:
Procedure
- Verify that all nodes are available as candidate nodes
before you run this recovery procedure. You must remove errors 550
or 578 to put the node in candidate state.
- Use the initialization tool that
is available on the USB flash drive to initialize
the system with the IP address.
- Upload
an SSH public key file to allow SSH access to the system CLI. Use
the default superuser password (passw0rd).
Use the following CLI command on your desktop:
pscp -pw passw0rd ssh_public_key_file superuser@cluster_ip:/tmp/
Where cluster_ip is
the IP address or DNS name of the system for which you want to restore
the configuration.
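For example, assuming a public key file named id_rsa.pub in the current directory and a system
IP address of 192.168.10.20 (both placeholder values for illustration only), the command might look
like this:
pscp -pw passw0rd id_rsa.pub superuser@192.168.10.20:/tmp/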
- Using the
command-line interface, issue the following command to log on to the
system:
plink -pw passw0rd superuser@cluster_ip
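For example, with the same placeholder system IP address used earlier:
plink -pw passw0rd superuser@192.168.10.20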
- Issue the
following CLI command to configure the public SSH key for the superuser:
chuser -keyfile /tmp/ssh_public_key_file superuser
You
can now use your private SSH key file instead of the default superuser password to connect to the
system using SSH.
Note: Because the RSA host key is changed, a warning message might be displayed when you
next connect to the system using SSH.
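For example, if the public key file that you uploaded was named id_rsa.pub (a placeholder name),
the command might look like this:
chuser -keyfile /tmp/id_rsa.pub superuser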
- By default, the newly initialized system is created in the storage layer. The layer of the
system is not restored automatically from the configuration backup XML file. If the system you are
restoring was previously configured in the replication layer, you must change the layer manually
now. For more information about the replication layer and
storage layer, see the System layers topic in the Related concepts section at the end
of the page.
- If the clustered system was previously configured in the replication layer, then use the
chsystem command to change the layer setting.
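For example, to change a newly initialized system from the storage layer back to the replication
layer, you might issue:
chsystem -layer replication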
- Identify the configuration backup file
from which you want to restore. The file can be either
a local copy of the configuration backup XML file that you saved when
you backed up the configuration or an up-to-date file on one of the
nodes. Configuration data is automatically backed up daily at 01:00
system time on the configuration node.
Attention: You must
copy the backup file to another computer before you continue. First
issue the following CLI command to determine the panel names of the
nodes: sainfo lsservicenodes
To save a copy
of the data, complete the following steps to check for backup files
on both nodes:
- List the files in the /dumps directory
on the node with:
sainfo lsfiles panel_name
Where panel_name is
the node panel name.
- Find the file name that begins with svc.config.cron.xml and
ends with the panel name.
- Copy the file to the configuration node with:
satask cpfiles -prefix /dumps/filename -source panel_name
Where filename is
the file name. Check that the copy is complete by issuing the sainfo
lscmdstatus command. The displayed cpfiles_status changes
from:
cpfiles_status Active
cpfiles_status_data Copying 1 of 1
to:
cpfiles_status complete
cpfiles_status_data Copied 1 of 1
- Download the file to your computer. From
your desktop, issue the following command:
pscp -i ssh_private_key_file superuser@cluster_ip:/dumps/filename full_path_to_desktop_copy_location
Where filename is
the file name that begins with svc.config.cron.xml on
the system, and full_path_to_desktop_copy_location is
the location on your desktop to copy this file to.
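For example, assuming a node panel name of 01-2, a backup file named svc.config.cron.xml_01-2, a
private key file named my_private_key.ppk, and a desktop copy location of C:\backup\ (all
placeholder values for illustration only), the sequence might look like this:
sainfo lsfiles 01-2
satask cpfiles -prefix /dumps/svc.config.cron.xml_01-2 -source 01-2
pscp -i my_private_key.ppk superuser@192.168.10.20:/dumps/svc.config.cron.xml_01-2 C:\backup\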
- Copy the XML backup file from which you want to restore onto the system.
pscp full_path_to_identified_svc.config.file
superuser@cluster_ip:/tmp/svc.config.backup.xml
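For example, assuming the identified backup file was saved to C:\backup\svc.config.cron.xml_01-2
on your desktop (a placeholder path), the command might look like this:
pscp C:\backup\svc.config.cron.xml_01-2 superuser@192.168.10.20:/tmp/svc.config.backup.xml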
- If the system contains any iSCSI storage controllers, these controllers must be detected
manually now. The nodes that are connected to these controllers, the iSCSI port IP addresses, and
the iSCSI storage ports must be added to the system before you restore your data.
- To add these nodes, determine the panel name, node name, and I/O groups of any such nodes from
the configuration backup file. To add the nodes to the system, run the following command:
svctask addnode -panelname panel_name -iogrp iogrp_name_or_id -name node_name
Where panel_name is the name that is
displayed on the panel, iogrp_name_or_id is the name or ID of the I/O group to
which you want to add this node, and node_name is the name of the node.
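For example, assuming the backup file shows a node with panel name 01-2 that was named node2 and
belonged to I/O group io_grp0 (all placeholder values; substitute the values from your
configuration backup file), the command might look like this:
svctask addnode -panelname 01-2 -iogrp io_grp0 -name node2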
- To restore iSCSI port IP addresses, use the cfgportip command.
- To restore an IPv4 address, determine id (port_id), node_id, node_name, IP_address, mask, gateway,
host (0/1 stands for no/yes), remote_copy (0/1 stands for no/yes), and storage (0/1 stands for
no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip ipv4_address -gw ipv4_gw
-host yes | no -remotecopy yes | no -storage yes | no port_id
Where node_name_or_id is the name or id of the node,
ipv4_address is the IP v4 version protocol address of the port, and
ipv4_gw is the IPv4 gateway address for the port.
- To restore an IPv6 address, determine id (port_id), node_id, node_name, IP_address_6, mask,
gateway_6, prefix_6, host_6 (0/1 stands for no/yes), remote_copy_6 (0/1 stands for no/yes), and
storage_6 (0/1 stands for no/yes) from the configuration backup file, and then run the following command:
svctask cfgportip -node node_name_or_id -ip_6 ipv6_address -gw_6 ipv6_gw
-prefix_6 prefix -host_6 yes | no -remotecopy_6 yes | no -storage_6 yes | no port_id
Where node_name_or_id is the name or id of the node,
ipv6_address is the IP v6 version protocol address of the port,
ipv6_gw is the IPv6 gateway address for the port, and prefix
is the IPv6 prefix.
Complete steps b.i and b.ii for all previously configured IP ports in the
node_ethernet_portip_ip sections of the backup configuration file.
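For example, to restore a hypothetical IPv4 configuration on port 1 of node1 with address
10.0.1.10, mask 255.255.255.0, gateway 10.0.1.1, and only storage use enabled (all placeholder
values; in a real restore, substitute the values determined from the configuration backup file),
the command might look like this:
svctask cfgportip -node node1 -ip 10.0.1.10 -mask 255.255.255.0 -gw 10.0.1.1 -host no -remotecopy no -storage yes 1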
- Next, detect and add the iSCSI storage port candidates by using the
detectiscsistorageportcandidate and addiscsistorageport
commands. Make sure that you detect the iSCSI storage ports and add these ports in the same order as
you see them in the configuration backup file. If you do not follow the correct order, it might
result in a T4 failure. Step c.i must be followed by steps c.ii and c.iii. You must repeat these
steps for all the iSCSI sessions that are listed in the backup configuration file, in exactly the
same order.
- To detect iSCSI storage ports, determine src_port_id, IO_group_id (optional, not required if the
value is 255), target_ipv4/target_ipv6 (whichever target IP is not blank is required),
iscsi_user_name (not required if blank), iscsi_chap_secret (not required if blank), and site (not
required if blank) from the configuration backup file, and then run the following command:
svctask detectiscsistorageportcandidate -srcportid src_port_id -iogrp IO_group_id
-targetip/targetip6 target_ipv4/target_ipv6 -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name
Where src_port_id is the source Ethernet port ID of the configured port,
IO_group_id is the I/O group ID or name being detected,
target_ipv4/target_ipv6 is the IPv4/IPv6 target iSCSI controller IPv4/IPv6
address, iscsi_user_name is the target controller user name being detected,
iscsi_chap_secret is the target controller chap secret being detected, and
site_id_or_name is the specified id or name of the site being detected.
- Match the discovered target_iscsiname with the
target_iscsiname for this particular session in the backup configuration file by
running the lsiscsistorageportcandidate command, and use the matching index to
add iSCSI storage ports in step c.iii.
Run the svcinfo
lsiscsistorageportcandidate command and determine the id field of the row whose
target_iscsiname matches the target_iscsiname from the
configuration backup file. This is your candidate_id to be used in step
c.iii.
- To add the iSCSI storage port, determine IO_group_id (optional, not required if the value is
255), site (not required if blank), iscsi_user_name (not required if blank in backup file), and
iscsi_chap_secret (not required if blank) from the configuration backup file, provide the
target_iscsiname_index matched in step c.ii, and then run the following command:
addiscsistorageport -iogrp iogrp_id -username iscsi_user_name -chapsecret iscsi_chap_secret -site site_id_or_name candidate_id
Where iogrp_id is the I/O group ID or name that is added,
iscsi_user_name is the target controller user name being added,
iscsi_chap_secret is the target controller chap secret being added, and
site_id_or_name specifies the id or name of the site being added.
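For example, a hypothetical sequence for one iSCSI session, where the source Ethernet port ID is
1, the I/O group is 0, the target IPv4 address is 10.0.2.30, the optional user name, CHAP secret,
and site fields are blank in the backup file, and the matching candidate_id found in step c.ii is 0
(all placeholder values), might look like this:
svctask detectiscsistorageportcandidate -srcportid 1 -iogrp 0 -targetip 10.0.2.30
svcinfo lsiscsistorageportcandidate
addiscsistorageport -iogrp 0 0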
- If the configuration is a HyperSwap® or stretched cluster, the controller name and site need to be restored. To restore the
controller name and site, determine controller_name and controller site_id from the backup xml file
by matching the inter_WWPN field with the newly added iSCSI controller, and then run the following
command:
chcontroller -name controller_name -site site_id/name controller_id/name
Where
controller_name is the name of the controller from the backup xml file,
site_id/name is the id/name of the site of iSCSI controller from the backup xml
file, and controller_id/name is the id or current name of the
controller.
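For example, assuming the backup xml file shows a controller named controller0 at site site1, and
the newly added iSCSI controller currently has ID 2 (all placeholder values), the command might
look like this:
chcontroller -name controller0 -site site1 2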
- Issue the following CLI command to compare the current
configuration with the backup configuration data file:
svcconfig restore -prepare
This
CLI command creates a log file in the /tmp directory of the
configuration node. The name of the log file is
svc.config.restore.prepare.log.
Note: It
can take up to a minute for each 256-MDisk batch to be discovered. If you receive error message
CMMVC6200W for an MDisk after you enter this command, all the managed
disks (MDisks) might not be discovered yet. Allow a suitable time to elapse and try the
svcconfig restore -prepare command again.
- Issue the following command to copy the log file to another
server that is accessible to the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.prepare.log
full_path_for_where_to_copy_log_files
- Open the log file from the server where the copy is now
stored.
- Check the log file for errors.
- If you find errors, correct the condition that caused the
errors and reissue the command. You must correct all errors before
you can proceed to step 15.
- If you need assistance, contact the IBM
Support Center.
- Issue the following CLI command to
restore the configuration:
svcconfig restore -execute
This
CLI command creates a log file in the /tmp directory
of the configuration node. The name of the log file is svc.config.restore.execute.log.
- Issue the following command to copy the log file to another
server that is accessible to the system:
pscp superuser@cluster_ip:/tmp/svc.config.restore.execute.log
full_path_for_where_to_copy_log_files
- Open the log file from the server where the copy is now
stored.
- Check the log file to ensure that no errors or warnings
occurred.
Note: You might receive a warning that states
that a licensed feature is not enabled. This message means that
after the recovery process, the current license settings do not match
the previous license settings. The recovery process continues
normally and you can enter the correct license settings in the management GUI later.
When you log in to the CLI again over SSH,
you see this output:
IBM_Storwize:your_cluster_name:superuser>
What to do next
You can remove any unwanted configuration backup and restore files from the
/tmp directory on your configuration node by issuing the following CLI
command:
svcconfig clear -all