IBM System Storage SAN Volume Controller
++++++++++++++++++++++++++++++++++++++++

-------------------------------------------------------------------------------
CONTENTS

1. Introduction
2. Available Education
3. Pre-Requisites
4. Code Levels
5. Problems Resolved and New Features
6. Installation Instructions for New Clusters
7. Further Documentation
8. Known Issues and Restrictions in this Level
9. Maximum Configurations
10. Licensing Information

-------------------------------------------------------------------------------
1. Introduction

This document describes how to install version 4.3.1.8 of the IBM System
Storage SAN Volume Controller (2145) software. This is a service release; it
addresses the APARs detailed in Section 5.

Please refer to the Recommended Software List and Supported Hardware List on
the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------
2. Available Education

A course on SAN Volume Controller Planning and Implementation is available.
For further information or enrolment, contact your local IBM representative.

Visit the support website for online learning materials and tutorials, IBM
Redbooks and training information.

-------------------------------------------------------------------------------
3. Pre-Requisites

SVC Cluster and Console Upgrade Information

NOTE: The SVC licensing configuration and calculation changed in SVC V4.3.0.
If you are upgrading from a code level earlier than V4.3.0.0, refer to
Section 10 for more information.

Before installing this code level, please check that the following
pre-requisites are met. Also please note the concurrent upgrade (upgrade with
I/O running) restrictions.

Performance of the cluster will be degraded during the software upgrade
process because nodes are taken offline to perform the upgrade and the write
cache is flushed when each node is restarted.
IBM recommends you perform the upgrade at a time of lower overall I/O
activity and when you do not expect significant spikes in the system
workload.

If you are performing a concurrent code upgrade, you must first ensure that
all host paths are available and operating correctly. Check that there are no
unfixed errors in the error log or on the front panel. Use normal service
procedures to resolve these errors before proceeding.

Before upgrading the SVC cluster, we recommend running the SVC Software
Upgrade test utility "svcupgradetest". Refer to the following URL for further
information:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

If you are upgrading from a code level below 4.3.1.0, it is important that
you upgrade the SVC Console (GUI) before you install the new SVC code. Any
TPC installations should also be upgraded to a supported level prior to
upgrading the SVC Console and cluster.

The SVC Console (GUI) 4.3.1.xxx is not fully compatible with SVC versions
earlier than 4.3.1.0. Therefore, with clusters running earlier versions of
SVC code, the new SVC Console should be used only to perform the upgrade to
SVC 4.3.1. Please see the separate release note for the SVC Console for
information on installing and using the SVC Console software.

For existing clusters, please check which level your cluster is currently
running before proceeding. Refer to the Software Installation and
Configuration Guide (section "Using the SAN Volume Controller Console",
sub-section "Viewing Cluster Properties") and open the 'General' panel of the
cluster properties view. The SVC software version is labelled 'Licensed Code
Version'.

If your cluster is at 3.1.0.5 or higher, follow the upgrade instructions
given in the Software Installation and Configuration Guide (section
"Upgrading the SAN Volume Controller software"). If you are running a version
of SVC software older than 3.1.0.5, you will have to perform multiple
upgrades to install SVC 4.3.1.8 on your cluster.
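The version check above can also be scripted. The following sketch is an
illustration only, not part of the official tooling: it compares a cluster's
'Licensed Code Version' against the 3.1.0.5 minimum to decide whether a
direct upgrade to 4.3.1.8 is possible. The helper name and the example
version are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical helper: decide whether a cluster can be upgraded directly to
# 4.3.1.8, given its current 'Licensed Code Version' (e.g. "4.2.1.6").
# Clusters at 3.1.0.5 or higher can upgrade directly; older clusters need
# multiple intermediate upgrades.
can_upgrade_directly() {
    current="$1"
    minimum="3.1.0.5"
    # Sort the two versions field by field, numerically; if the minimum sorts
    # first (or the two are equal), the current level is high enough.
    lowest=$(printf '%s\n%s\n' "$current" "$minimum" \
        | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -n 1)
    [ "$lowest" = "$minimum" ]
}

if can_upgrade_directly "4.2.1.6"; then
    echo "direct upgrade to 4.3.1.8 is supported"
else
    echo "multiple upgrades required - see the compatibility matrix"
fi
# prints: direct upgrade to 4.3.1.8 is supported
```

A cluster reporting, for example, 2.1.0.5 would take the second branch and
would need to be stepped through intermediate levels first.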
Please see the following web page for the full upgrade compatibility matrix:

http://www.ibm.com/storage/support/software/sanvc/code

If you are installing a new SVC Cluster then you will need to follow the
procedure in Section 6. This will upgrade the SVC cluster to version 4.3.1.8
before the cluster is configured for use. Please note the warning at the
beginning of the process and only proceed if you are sure there is no data on
this cluster.

-------------------------------------------------------------------------------
4. Code Levels

2145 Software: 2145 Release Level 4.3.1.8 (9.18.0909080000)

-------------------------------------------------------------------------------
5. Problems Resolved and New Features

New features in SVC 4.3.1.7:

* Support for Texas Memory Systems RamSan 500 Controller

New features in SVC 4.3.1.3:

* Japanese Language Support for V4.3.1
* Support for Xiotech Emprise 5000 storage system

New features in SVC 4.3.1.0:

* Support for SVC Entry Edition
  - 2145-8A4 Storage Engine
  - Capacity-based licensing
* Space-Efficient virtual disks can be used for Metro/Global Mirror targets
* Network Time Protocol (NTP) support for cluster time and date
  synchronisation from an external source
* Embedded CIMOM
* Enhanced cluster performance statistics for troubleshooting
* Improved interoperability: refer to the support website for the latest
  supported hardware and software information

New features in SVC 4.3.0.0:

* Space-Efficient Virtual disks, which use disk space only when data is
  written instead of reserving disk space for the entire capacity of a
  virtual disk
* Space-Efficient FlashCopy, which uses disk space only for changes between
  the source and target data and not for the entire capacity of a virtual
  disk copy
* Virtual Disk Mirroring, which helps improve availability for critical
  applications by storing two copies of a virtual disk on different disk
  systems
* Support for IPv6
* Increased scalability: support for up to 8192 virtual disks and 256
  FlashCopy targets per source vdisk
* Improved interoperability: refer to the support website for the latest
  supported hardware and software information

New service features in SVC 4.3.1.0:

* Simplified VDisk recovery process after data loss
  - svctask recovervdisk
  - svctask recovervdiskbyiogrp
  - svctask recovervdiskbycluster

New service features in SVC 4.3.0.0:

* Error code 1625 will no longer generate a call home to IBM. A new Cluster
  Error 1624 has been introduced to call home in the event of a persistent
  storage configuration problem.

APARs resolved in this release (4.3.1.8):

High Importance Fixes
IC61426 Shutdown temperature of 8A4 nodes set too high

Suggested Fixes
IC62732 Cluster error 1001 triggered when taking a livedump
IC62728 Node assert in Remote Copy during recovery operations

APARs resolved in previous 4.3.x releases:

4.3.1.7:

High Importance Fixes
IC60429 Node assert caused by connectivity issues between remote copy
        clusters
IC60820 Node assert caused by large number of host abort events
IC61057 Node Error 578 caused by UPS battery test on a failing battery
IC59911 Node assert when receiving a host Clear Task Set (TMF)

Suggested Fixes
IC60461 Node statistics (Nn_stats) lines truncated to 256 characters
IC60779 Node assert when using FlashCopy, and both nodes in an IO group have
        previously been missing from the cluster at the same time

4.3.1.6:

High Importance Fixes
IC60952 Node asserts in remote cluster caused by connectivity issues on the
        inter-cluster link when running Global Mirror
IC60803 False memory errors on 2145-8F2 and 2145-8F4 hardware

4.3.1.5:

Critical Fixes
IC60186 Unexpected assert when modifying RC relationship names
High Importance Fixes
IC59392 Remove performance penalty when doing I/O to non-preferred node
IC60315 Increase queue depth for HP StorageWorks XP24000 and Hitachi
        TagmaStore

4.3.1.4:

Critical Fixes
IC60084 Degraded performance caused by embedded CIMOM process consuming
        excessive CPU resource
IC60083 Incorrect 8G4 node hardware shutdown temperature

High Importance Fixes
IC56524 Unexpected lease expiries caused by fabric disruption
IC59013 Node assert caused by inconsistent vendor inquiry data received from
        controller
IC58863 Multiple node asserts caused by large numbers of login/logout events
        from remote cluster
IC58962 Node assert caused by mismatched SCSI and FCP transfer lengths in a
        command from a host
IC59814 Mdisks not recovered following controller outage
IC59795 Orphaned rbash process consuming excessive CPU resources

Suggested Fixes
IC58610 Insufficient queue depth on HSV210 controller
IC59072 SNMP traps not being received

4.3.1.3:

NONE

4.3.1.2:

Critical Fixes
IC59247 Node asserts during upgrade from 4.1.x to 4.3.1

This release of software also addresses a potential issue during upgrade to
4.3.1 when running Global Mirror.
4.3.1.1:

Critical Fixes
IC56770 Node Error 570/578 after over-temperature condition
IC59007 Node asserts handling spurious underrun responses from storage
        systems for SCSI LUN reset commands

High Importance Fixes
IC54591 Node assert during Remote Copy error recovery
IC57100 Node assert handling SCSI reserve/release commands
IC57244 Node assert caused by timing window updating FlashCopy difference
        count
IC57976 Node Error 578 following a Node Error 51x memory error
IC58110 Node Error 564 caused by incorrect error handling during software
        upgrade/downgrade
IC58195 New configuration node inaccessible via ethernet after configuration
        node failover
IC58255 Node assert handling corrupted Fibre Channel frames

Suggested Fixes
IC49220 Cache state not checked before removing a node from the cluster
IC53049 Degraded performance when starting large numbers of Global Mirror
        relationships concurrently
IC53093 Migrations should not be started if source or destination is offline
IC57339 Spurious Cluster Error 1910 (FlashCopy mapping stopped) when
        modifying FlashCopy mappings
IC58272 Cluster Error 1146 logged for UPS error condition instead of Cluster
        Error 1171

4.3.0.3:

Critical Fixes
IC58563 Space-efficient vdisk may be taken offline when used capacity
        exceeds 1022 GB

High Importance Fixes
IC57781 Cluster call home emails not sent
IC57827 Node assert handling SCSI ordered commands
IC58058 Node assert when an AIX host object is configured with host type of
        'hpux'
IC58210 Node warmstart caused by watchdog timeout

Suggested Fixes
IC58193 Unable to resize vdisk after upgrade to V4.3.0

4.3.0.2:

Critical Fixes
IC57677 Incorrect managed disk error recovery procedure

High Importance Fixes
IC57187 Node asserts on primary cluster caused by degraded performance on
        secondary cluster during Remote Copy
IC57537 Node asserts when an AIX host object is configured with host type of
        'hpux'
IC57791 Node assert when running svcinfo catauditlog during software upgrade

4.3.0.1:

High Importance Fixes
IC54720 Node assert when attempting to scroll Japanese text on front panel
        display
IC56517 SVC write cache performance issues
IC56796 Node assert when issuing a Report LUNs command with a large
        allocation length
IC57090 Node assert when removing the same WWPN from a host twice in the
        same command
IC56559 Global TPRLO FC frame causes SVC warmstarts

4.3.0.0:

Critical Fixes
IC54750 Node assert caused by offline auxiliary vdisks in an active Remote
        Copy relationship
IC55722 & IC56208 Node Error 90x or Cluster Error 1001 when moving a vdisk
        between I/O groups
IC56226 Node asserts when upgrading with Metro Mirror relationships running
IC56412 Cluster Error 1001 when powering on a cluster with a high number of
        image mode vdisks
IC56963 Cluster Error 1001 when creating incremental FlashCopy mappings
        after cluster upgrade

High Importance Fixes
IC53156 Managed disk in inconsistent state can cause Node Error 90x or
        Cluster Error 1001
IC53253 Node assert processing large numbers of SCSI Aborts
IC54451 Node Error 578 starting configuration node services
IC54469 Managed disks not detected if new controller ports are zoned to SVC
        before being correctly configured
IC54983 Node assert caused by receiving large numbers of duplicate fibre
        channel frames
IC55084 Node assert when unmapping vdisks from an AIX/VIO host whilst I/O is
        still running (using ACA)
IC55191 Node assert caused by large number of failed attempts to send email
IC55448 Managed disks excluded during EMC CX firmware upgrade
IC55524 Managed disks taken offline when adding unconfigured controller
        logins
IC55558 Node assert when using svctask addhostiogrp or rmhostiogrp commands
IC55594 Node assert due to inter-node link instability
IC55716 Node assert caused by Remote Copy resource handling
IC56144 Node assert caused by handling a transient 05/25/00 SCSI check
        condition from a storage controller

Suggested Fixes
IC54430 Remove stale host port logins from svcinfo views
IC54658 Change to handling of SCSI-2 and SCSI-3 reservations
IC54770 Addition of Cluster Error 1624 to identify persistent storage
        configuration problems
IC54813 Use the serial number of the affected node in call home email
IC55377 Improvements to managed disk discovery processing
IC55549 Remove SCSI reservations and registrations when host WWPN is removed
IC56305 FlashCopy background copy and cleaning rate incorrect when using
        non-default (64KB) grain size
IC56811 Cluster Error 1627 logged in error when using storage controllers
        with multiple WWNNs

-------------------------------------------------------------------------------
6. Installation Instructions for New Clusters

*******************************************************************************
IMPORTANT: This procedure will destroy any existing data or configuration.
If you wish to preserve the current SVC configuration and all data
virtualized by the cluster then please refer to the cluster upgrade
instructions in the Software Installation and Configuration Guide.
*******************************************************************************

* THIS PROCEDURE IS FOR NEW INSTALLATIONS ONLY

* Follow the instructions in the "Creating a SAN Volume Controller cluster"
  chapter of the Software Installation and Configuration Guide to create a
  cluster on ONLY the first node. Do not add the other nodes at this point.

* Follow the instructions for "Viewing Cluster Properties" in the "Using the
  SAN Volume Controller Console" section and open the 'General' panel of the
  cluster properties. The SVC software version is labelled 'Licensed Code
  Version'. If the code version of the cluster is 4.3.1.8 (9.18.0909080000)
  then you do not need to perform any further actions; continue to add nodes
  and configure your cluster as described in the Software Installation and
  Configuration Guide. Otherwise, continue to follow this procedure.

* If you are not using DHCP, ensure that you have set a valid unique IP
  address for service mode on the existing single node cluster.
* Delete the cluster from the SVC Console (GUI) by selecting the cluster
  from the Cluster panel, choosing 'Remove a Cluster' from the drop down
  box, and clicking 'Go'.

* Put the node in the existing single node cluster into service mode by
  following the steps outlined below. This procedure is described in the
  "Using the front panel of the SAN Volume Controller" section, under
  "Recover cluster navigation" -> "Setting service mode".

  1. On the front panel, navigate to the main 'Cluster:' display and then
     left to the 'Recover Cluster?' menu.
  2. Press select. The screen should now say 'Service Access'.
  3. Press and hold the down button.
  4. Press and release the select button.
  5. Release the down button. The node restarts and service mode is enabled.

* Apply the upgrade package:

  1. Open a web browser and point it to the following web address, where
     service_ip_address is the IPv4 or IPv6 address that is shown on the
     front panel display of the node:

     https://service_ip_address

  2. Enter the 'admin' user ID and password that was configured when you set
     up the one node cluster.
  3. Click "Upgrade Software" on the left side of the web page.
  4. Click the "Upload" button and upload the IBM2145_INSTALL_4.3.1.8 file.
  5. Once the upload completes, press the "Continue" button. This will take
     you to a page with a list of available upgrade packages.
  6. Select the file you just uploaded from the list of available software
     upgrade files and check the "Skip prerequisite checking" box. Click the
     "Apply" button.
  7. Accept any warnings, and click the "Confirm" button.
  8. The node will now reboot and apply the new software.

  Note: An SVC dump file may be generated during this upgrade. This is
  expected and can be ignored.

* Once upgraded, create a new cluster on the upgraded node, following the
  "Creating a SAN Volume Controller cluster" chapter of the Software
  Installation and Configuration Guide. At this point you will have a new
  one node cluster running 4.3.1.8 code.
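  To confirm that the new one node cluster really is at the expected level,
  the code level reported by the CLI can be checked in a script. The sketch
  below is an illustration only: the 'code_level' field name and layout are
  assumptions based on typical 'svcinfo lscluster' output and may differ at
  your level, so a sample text block stands in for real CLI output here.

```shell
#!/bin/sh
# Sample text standing in for:  ssh admin@cluster_ip svcinfo lscluster <name>
# The 'code_level' field name and layout are assumed for illustration.
sample_properties='id 0000020060406746
name newcluster
code_level 4.3.1.8 (build 9.18.0909080000)'

# Extract the version number from the code_level line.
code_level=$(printf '%s\n' "$sample_properties" \
    | awk '/^code_level/ { print $2 }')

if [ "$code_level" = "4.3.1.8" ]; then
    echo "cluster is at the expected level: $code_level"
else
    echo "unexpected code level: $code_level" >&2
fi
# prints: cluster is at the expected level: 4.3.1.8
```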
  Note: After re-adding the new cluster to the SVC Console (GUI), an
  'Invalid SSH Fingerprint' message may be displayed. Select the 'Reset the
  SSH Fingerprint' option from the drop down box, then click 'Go' to resolve
  this issue.

* After this process is complete, check that the software version number is
  4.3.1.8 (9.18.0909080000).

* You can now add the other nodes to the cluster and they will automatically
  be upgraded/downgraded as required.

-------------------------------------------------------------------------------
7. Further Documentation

The latest version of all SAN Volume Controller documentation can be
downloaded in PDF format from the support website:

http://www.ibm.com/storage/support/2145

-------------------------------------------------------------------------------
8. Known Issues and Restrictions in this Level

Support for the legacy per-cluster (v_stats and m_stats) performance
statistics will be removed in releases after V4.3.x. These legacy statistics
files have been superseded by the per-node Nv_stats, Nm_stats and Nn_stats
files. Refer to the following Technote for more information on these
enhanced cluster performance statistics:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003432

Please read the following flashes before upgrading:

Potential Issue When Upgrading From SVC V4.1.1.0/1/2
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003069

Offline or Degraded Disks May Result in Loss of I/O Access During Code
Upgrade
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002971

If an SVC Code Upgrade Stalls or Fails then Contact IBM Support for Further
Assistance
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002894

-------------------------------------------------------------------------------
9. Maximum Configurations

The maximum configurations for SVC 4.3.1.8 are documented in the SVC V4.3.x
Restrictions document, which is available from:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003283

-------------------------------------------------------------------------------
10. Licensing Information

The SVC licensing configuration and calculation changed in SVC V4.3.0. After
an upgrade to V4.3 you should expect to see one or more of the following
messages in the cluster error log:

3029: Virtualization feature capacity invalid
3030: Remote Copy feature capacity not set
3031: FlashCopy feature capacity not set
3032: Feature license limit exceeded

Before upgrading your SVC cluster to V4.3, ensure that you understand your
base Virtualization, FlashCopy and Remote Copy license entitlement. Once the
upgrade has completed, please enter the values for these three licenses
using either the SVC Console (GUI) licensing page or the SVC command line
interface. The warning messages listed above will not impact a running
system or prevent the upgrade from completing.

Please see the following web page for more information on these changes:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003284

Licences relevant to SAN Volume Controller can be viewed using a web browser
with access to the SVC cluster (such as the Internet Explorer browser on the
SVC Master Console) via the following URL:

http://cluster_ip_address/notices.html

where cluster_ip_address is the IPv4 or IPv6 address of the cluster.

-------------------------------------------------------------------------------
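As an illustration of entering the three license values from the command
line, the sketch below composes the relevant command as a dry run. The
'svctask chlicense' parameter names ('-virtualization', '-flash', '-remote',
with capacities in TB) and the capacity values are assumptions for
illustration only; verify them against the Command-Line Interface User's
Guide for your code level before running anything against a live cluster.

```shell
#!/bin/sh
# Dry-run sketch: build the CLI command that would set the three license
# capacities after an upgrade to V4.3. Parameter names and values are
# assumed for illustration; check your documentation before use.
virtualization_tb=20   # base Virtualization entitlement, in TB
flashcopy_tb=10        # FlashCopy entitlement, in TB
remote_tb=10           # Metro/Global Mirror entitlement, in TB

cmd="svctask chlicense -virtualization $virtualization_tb -flash $flashcopy_tb -remote $remote_tb"

# Print the command instead of executing it; run it over ssh to the cluster
# only once the parameter names have been verified.
echo "$cmd"
```

Printing the command first keeps the sketch safe to run anywhere and makes
the intended change easy to review before it is issued to the cluster.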