IBM System Storage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++++

-------------------------------------------------------------------------------

CONTENTS

1. Introduction
2. Available Education
3. Pre-Requisites
4. Code Levels
5. Problems Resolved and New Features
6. Installation Instructions for New Clusters
7. Further Documentation
8. Known Issues and Restrictions in this Level
9. Maximum Configurations
10. Supported Hardware

-------------------------------------------------------------------------------

1. Introduction

This document describes how to install version 5.1.0.11 of the IBM System
Storage SAN Volume Controller (2145) software. This is a new release and
delivers new features in addition to resolving APARs, as detailed in
Section 5.

Please refer to the Recommended Software List and Supported Hardware List on
the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------

2. Available Education

A course on SAN Volume Controller Planning and Implementation is available.
For further information or enrolment, contact your local IBM representative.
Visit the support website for online learning materials, tutorials, IBM
Redbooks and training information.

-------------------------------------------------------------------------------

3. Pre-Requisites

SVC Cluster and Console Upgrade Information

Before installing this code level, please check that the following
pre-requisites are met. Also note the restrictions that apply to a concurrent
upgrade (an upgrade with I/O running).

Performance of the cluster will be degraded during the software upgrade
process because nodes are taken offline to perform the upgrade and the write
cache is flushed when each node is restarted. IBM recommends that you perform
the upgrade at a time of lower overall I/O activity, when you do not expect
significant spikes in the system workload.
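When planning the upgrade window, it can help to script the code-level check
rather than read it from the GUI. The sketch below parses a captured listing
for the code level and maps it to the upgrade paths described in this section.
The `svcinfo lscluster` command and the `code_level` field name are
assumptions based on the SVC 5.1 CLI; verify them against your own cluster
before relying on this.

```shell
#!/bin/sh
# Sample detailed output as might be captured from
# "ssh admin@cluster_ip svcinfo lscluster <cluster_id>".
# The field names here are assumptions - check your own cluster's output.
sample_output='id 000002006C80C0FE
name svc_cluster_1
code_level 5.1.0.11 (18.6.1107290000)'

# Extract just the version number from the code_level line.
code_level=$(printf '%s\n' "$sample_output" | awk '/^code_level/ {print $2}')
echo "Cluster is running: $code_level"

# Decide which upgrade path applies, per Section 3 of this note.
case "$code_level" in
    5.1.0.*)     echo "Already at a 5.1.0.x level" ;;
    4.3.1.*|5.*) echo "A single upgrade to 5.1.0.11 is possible" ;;
    *)           echo "Multiple upgrades required - see the matrix" ;;
esac
```

The same awk filter can be pointed at live ssh output instead of the captured
sample once the field layout has been confirmed.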
If you are performing a concurrent code upgrade, you must first ensure that
all host paths are available and operating correctly. Check that there are no
unfixed errors in the error log or on the front panel, and use normal service
procedures to resolve any such errors before proceeding.

Before upgrading the SVC cluster, we recommend running the SVC Software
Upgrade Test Utility "svcupgradetest". Refer to the following URL for further
information:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

If you are upgrading from a code level below 5.1.0.0, it is important that
you upgrade the SVC Console (GUI) before you install the new SVC code. Any
TPC installations should also be upgraded to a supported level prior to
upgrading the SVC Console and cluster.

The SVC Console (GUI) 5.1.0.xxx is not fully compatible with SVC versions
earlier than 5.1.0.0. Therefore, the new SVC Console should only be used with
SVC clusters running earlier versions of SVC code for the purpose of
upgrading them to SVC 5.1.0. Please see the separate release note for the SVC
Console for information on installing and using the SVC Console software.

For existing clusters, please check which level your cluster is currently
running before proceeding. Refer to the "Viewing Cluster Properties" topic in
the "Using the SAN Volume Controller Console" section of the Software
Installation and Configuration Guide, and open the 'General' panel of the
cluster properties view. The SVC software version is labelled 'Licensed Code
Version'.

If your cluster is at 3.1.0.5 or higher, follow the upgrade instructions in
the Software Installation and Configuration Guide (section: "Upgrading the
SAN Volume Controller software"). If you are running a version of SVC
software older than 4.3.1.0, you will have to perform multiple upgrades to
install SVC 5.1.0.11 on your cluster. Please see the following web page for
the full upgrade compatibility matrix.
http://www.ibm.com/storage/support/software/sanvc/code

If you are installing a new SVC Cluster, follow the procedure in Section 6.
This will upgrade the SVC cluster to version 5.1.0.11 before the cluster is
configured for use. Please note the warning at the beginning of that
procedure and only proceed if you are sure there is no data on the cluster.

-------------------------------------------------------------------------------

4. Code Levels

2145 Software: 2145 Release Level 5.1.0.11 (18.6.1107290000)

-------------------------------------------------------------------------------

5. Problems Resolved and New Features

New service features in SVC 5.1.0.11:
* Increase the available internode bandwidth for Global Mirror

New service features in SVC 5.1.0.10:
* Improved first time debug data capture for node 578 events
* Improved node HDD access protocols to reduce the risk of HDD failures

New features in SVC 5.1.0.8:
* Add support for IBM Storwize V7000

New features in SVC 5.1.0.6:
* Dynamically changeable Global Mirror Impact Threshold value
* New minimum Global Mirror link tolerance value of 20 seconds
* Add support for iSCSI-attached VMware hosts

New features in SVC 5.1.0.5:
* MDisk Balancing - Attempt to distribute the MDisks of an MDisk Group
  evenly across available paths.
New features in SVC 5.1.0.3:
* Add support for the Sun Storage 6180, 6580 & 6780 Storage Arrays
* Add support for the Nexsan SATABeast Storage Array

New service features in SVC 5.1.0.2:
* 2145-4F2 to 2145-CF8 migration procedure

New features in SVC 5.1.0.1:
* SAN Volume Controller 2145-CF8 nodes
* Solid-state drives (SSDs) on SAN Volume Controller 2145-CF8 nodes
* Zero-detect for host I/O on SAN Volume Controller 2145-CF8 nodes
* Long wave SFPs in SAN Volume Controller 2145-CF8 nodes
* iSCSI-attached hosts
* Remote authentication for the users of SAN Volume Controller clusters
* syslog servers to receive error, warning, and informational notifications
  from the cluster
* Multiple email servers for e-mail notifications and inventory reporting
* Second Ethernet port for clusters
* Multiple cluster partnerships
* 8192 Metro/Global Mirror relationships
* Reverse FlashCopy mappings with multiple targets
* Viewing quorum disks and setting an active quorum disk
* Zero-detect for mirroring space-efficient virtual disk copies
* 256TB virtual disks
* 8Gbps Fibre-Channel link speeds on SAN Volume Controller 2145-CF8 nodes

New service features in SVC 5.1.0.1:
* Automatic recovery of nodes which have lost node metadata (Node Error 578)

APARs resolved in this release (5.1.0.11):

High Importance Fixes
IC72925  Multiple node asserts when fibre channel switch nameserver query
         returns multiple instances of the same port ID
IC73450  Incorrect check condition sent to hosts following software upgrade
IC74826  Node asserts and 578 reboot events
IC76855  Error recovery procedure for HDS storage controllers can lead to
         offline mdisks
IC77760  Error recovery procedure for EMC CLARiiON storage controllers can
         lead to offline mdisks

Suggested Fixes
IC71045  Incorrect cluster partnership bandwidth parameter displayed
IC72606  Node assert following fibre channel fabric link down event
IC76413  Do not log invalid 1097 PSU errors

APARs resolved in previous 5.1.x releases:

5.1.0.10

Critical Fixes
IC68538
         Multiple node asserts when moving a vdisk between IO groups while
         I/O is being performed to the vdisk
IC70968  Multiple node asserts during upgrade to 5.1.0.x code
IC72398  Multiple node asserts following split stopping for a flash copy map
IC73042  Repeated node asserts during upgrade to 5.1.0.x code
IC73593  Multiple node asserts when an I/O failure occurs on a flash copy
         mapped vdisk

Suggested Fixes
IC69644  Node assert when node assumes config node role
IC69835  Missing field entries in SNMP traps
IC74954  Incorrect serial number in call home record

5.1.0.9

Critical Fixes
IC74194  Node HDD failure can cause host I/O access problems when using the
         SDDPCM multipathing driver. Please refer to the following flash for
         more details:
         http://www.ibm.com/support/docview.wss?uid=ssg1S1003757

5.1.0.8

Critical Fixes
IC68873  Multiple node asserts following sequential image mode vdisk
         migrations
IC72825  Loss of access/data error on space efficient vdisks with used
         capacity greater than 2TB

High Importance Fixes
IC67362  Node asserts during space efficient vdisk auto-expand operation
IC67786  Node asserts when flash copy target vdisks have reached full
         capacity
IC69947  Node assert when mdisk group is taken offline
IC70105  Multiple node asserts due to inter-IO group flash copy mapping
         issue
IC71369  Node ports do not automatically recover from Link Reset events
IC71425  Node assert and CLI hang when flash copy map is force stopped while
         completion is marked as 100%
IC72959  False 231 errors during 2145-CF8 node boot

Suggested Fixes
IC67573  Node assert when removing an mdisk while the licensed
         virtualization limit has been exceeded
IC68478  NTP service hang event downgraded from error to warning
IC70022  I/O queue depth change for IBM XIV storage controllers
IC70541  Vdisks with syncrate set to 0 may not be listed in
         lsvdisksyncprogress output
IC71978  I/O queue depth change for EMC VMAX storage controllers
IC72367  False 1203 duplicate frame detected errors logged

5.1.0.7

IC71042  Improved 2145-CF8 node BIOS upgrade procedure
IC71044  Improved detection of PSU hardware errors on 2145-CF8 nodes

5.1.0.6

High Importance Fixes
IC64938  Node assert when 128 mdisk deletion operations are ongoing at the
         same time
IC67292  Node assert when the entered password exceeds the maximum password
         character length
IC65295  Node assert caused by quorum disk interaction issue

Suggested Fixes
IC68431  Master vdisk name in remote copy relationship blank when viewed
         from the auxiliary cluster

5.1.0.5

Critical Fixes
IC68891  231 boot errors on CF8 nodes

5.1.0.4

High Importance Fixes
IC65667  Node asserts when upgrading from 4.3.1 to 5.1.0 with incremental
         flash copy maps in progress
IC66175  Node assert caused by remote copy resource allocation issue
IC66869  Degraded Global Mirror performance caused by internal resource
         allocation issue
IC66903  Degraded performance caused by cache partition destage issue
IC67331  Node assert caused by kernel panic
IC67469  Node asserts caused by multiple space efficient vdisk shrink
         commands while a vdisk is undergoing an autoexpand operation
IC68234  Incorrect 8A4 node hardware shutdown temperature

Suggested Fixes
IC66008  Incorrect FRU number for 8G4 fan

5.1.0.3

NONE

5.1.0.2

NONE

5.1.0.1

High Importance Fixes
IC57670  Node assert when processing large numbers of SCSI aborts
IC58599  Node asserts caused by configuring a host HBA with SVC WWPNs
IC58731  Node asserts due to kernel issue
IC59007  Node assert caused by incorrect LUN reset request
IC59149  Node asserts caused by logins from devices with duplicate WWNNs
IC61273  1220 errors when creating new HP EVA mdisks
IC61716  Node asserts caused by cluster slowdown while a space efficient
         vdisk is auto-expanding
IC62128  Node assert caused by multiple node port disconnections
IC62135  Mdisk offline following path failover failure
IC63013  Node assert due to delayed remote copy I/O
IC63235  Node assert during concurrent code upgrade
IC63911  Node asserts caused by starting a flash copy map to a space
         efficient vdisk of zero size

Suggested Fixes
IC59392  Remove performance penalty when doing I/O to the non-preferred node
IC60313  1203 errors caused by incorrect host abort handling
IC62448  Hung flash copy map following node warmstarts

-------------------------------------------------------------------------------

6. Installation Instructions for New Clusters

*******************************************************************************
IMPORTANT: This procedure will destroy any existing data or configuration.
If you wish to preserve the current SVC configuration and all data
virtualized by the cluster, please refer to the cluster upgrade instructions
in the Software Installation and Configuration Guide.
*******************************************************************************

* THIS PROCEDURE IS FOR NEW INSTALLATIONS ONLY *

* Follow the instructions in the "Creating a SAN Volume Controller cluster"
  chapter of the Software Installation and Configuration Guide to create a
  cluster on ONLY the first node. Do not add the other nodes at this point.

* Follow the instructions for "Viewing Cluster Properties" in the "Using the
  SAN Volume Controller Console" section and open the 'General' panel of the
  cluster properties. The SVC software version is labelled 'Licensed Code
  Version'. If the code version of the cluster is 5.1.0.11 (18.6.1107290000),
  you do not need to perform any further actions; continue to add nodes and
  configure your cluster as described in the Software Installation and
  Configuration Guide. Otherwise, continue to follow this procedure.

* If you are not using DHCP, ensure that you have set a valid unique IP
  address for service mode on the existing single node cluster.

* Delete the cluster from the SVC Console (GUI) by selecting the cluster
  from the Cluster panel, choosing 'Remove a Cluster' from the drop down
  box, and clicking 'Go'.

* Put the node in the existing single node cluster into service mode by
  following the steps outlined below.
  This procedure is described in the "Using the front panel of the SAN
  Volume Controller" section, under "Recover cluster navigation" ->
  "Setting service mode".

  1. On the front panel, navigate to the main 'Cluster:' display and then
     left to the 'Recover Cluster?' menu.
  2. Press select. The screen should now say 'Service Access'.
  3. Press and hold the down button.
  4. Press and release the select button.
  5. Release the down button. The node restarts and service mode is enabled.

* Apply the upgrade package:

  1. Open a web browser and point it to the following web address, where
     service_ip_address is the IPv4 or IPv6 address that is shown on the
     front panel display of the node:

     https://service_ip_address

  2. Enter the 'admin' user ID and password that was configured when you
     set up the one node cluster.
  3. Click "Upgrade Software" on the left side of the web page.
  4. Click the "Upload" button and upload the IBM2145_INSTALL_5.1.0.11 file.
  5. Once the upload completes, press the "Continue" button. This will take
     you to a page with a list of available upgrade packages.
  6. Select the file you just uploaded from the list of available software
     upgrade files and check the "Skip prerequisite checking" box. Click
     the "Apply" button.
  7. Accept any warnings, and click the "Confirm" button.
  8. The node will now reboot and apply the new software.

  Note: An SVC dump file may be generated during this upgrade. This is
  expected and can be ignored.

* Once upgraded, create a new cluster on the upgraded node, following the
  "Creating a SAN Volume Controller cluster" chapter of the Software
  Installation and Configuration Guide. At this point you will have a new
  one node cluster running 5.1.0.11 code.

  Note: After re-adding the new cluster to the SVC Console (GUI), an
  'Invalid SSH Fingerprint' message may be displayed. Select the 'Reset the
  SSH Fingerprint' option from the drop down box, then click 'Go' to resolve
  this issue.
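The version check in the next step can also be scripted. In this hedged
sketch the expected build string comes from Section 4 of this note, while the
"code_level" line format is an assumption to be checked against the 'Licensed
Code Version' value shown in the GUI:

```shell
#!/bin/sh
# Expected level after the upgrade, as stated in Section 4 of this note.
expected='5.1.0.11 (18.6.1107290000)'

# A captured code-level line from the upgraded cluster's properties.
# The "code_level" field name is an assumption; compare against the
# 'Licensed Code Version' value in the GUI if in doubt.
reported='code_level 5.1.0.11 (18.6.1107290000)'

# Strip the field name, keeping the version and build string together.
actual=${reported#code_level }

if [ "$actual" = "$expected" ]; then
    echo "Upgrade verified: $actual"
else
    echo "Level mismatch: got '$actual', expected '$expected'" >&2
    exit 1
fi
```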
* After this process is complete, check that the software version number is
  5.1.0.11 (18.6.1107290000).

* You can now add the other nodes to the cluster; they will automatically be
  upgraded or downgraded as required.

-------------------------------------------------------------------------------

7. Further Documentation

The latest version of all SAN Volume Controller documentation can be
downloaded in PDF format from the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------

8. Known Issues and Restrictions in this Level

Support for the legacy per-cluster (v_stats and m_stats) performance
statistics has been removed. These legacy statistics files have been
superseded by the per-node Nv_stats, Nm_stats and Nn_stats files. Refer to
the following Technote for more information on these enhanced cluster
performance statistics:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003597

VDisks larger than 2TB are not currently supported for use in FlashCopy
mappings. Refer to the following flash for more information on this
restriction:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003598

iSCSI host ports are incorrectly listed as offline, with zero logins to SVC
nodes.
Refer to the following Technote for more information on this issue:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003599

Please read the following flashes before upgrading:

Potential Loss of Access and Data Error When Performing I/O to Thin
Provisioned (Space Efficient) Volumes (VDisks) With Used Capacity Greater
Than 2TB
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003734

Offline or Degraded Disks May Result in Loss of I/O Access During Code
Upgrade
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002971

If an SVC Code Upgrade Stalls or Fails then Contact IBM Support for Further
Assistance
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002894

2145-CF8 Nodes May Repeatedly Loop Between Boot Codes 100 and 137 When
Upgrading to SVC V5.1.0.4 or Later
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003682

-------------------------------------------------------------------------------

9. Maximum Configurations

The maximum configurations for SVC 5.1.0.11 are documented in the SVC V5.1.x
Restrictions document, which is available from:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003555

-------------------------------------------------------------------------------

10. Supported Hardware

This release of SAN Volume Controller software is supported on the following
node hardware types:

* 2145-CF8
* 2145-8G4
* 2145-8A4
* 2145-8F4
* 2145-8F2

Please note: the 2145-4F2 node hardware type is not supported with this
release.

-------------------------------------------------------------------------------