Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.2.0 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.2.0.0 and 8.2.0.4. This document will be updated with additional information whenever a PTF is released.

Systems running this code level should be upgraded to v8.2.0.4 or later.

Note: This code level is available for FlashSystem 9100 and V7000 systems only.
Note: Customers who do not intend to cluster their V7000 systems with a FlashSystem 9100, or deploy iSER, do not need to consider this code level.

This document was last updated on 10 September 2021.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 8.2.0 release:

The following new features have been introduced in the 8.2.0.2 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Details Introduced

Customers using iSCSI to virtualize backend controllers should not upgrade to v8.2.0 or later

This is a restriction that may be lifted in a future PTF.

8.2.0.0

Validation in the Upload Support Package feature will reject the new case number format in the PMR field.

This is a known issue that may be resolved in a future PTF. The fix can be tracked using APAR HU02392.

7.8.1.0

Systems with NPIV enabled that are presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

n/a
If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid but have now been lifted

Performance statistics for FlashSystem 9100 systems running v8.2.0.0 will not be available to Spectrum Control installations prior to v5.3.0.

Storage Insights, and Spectrum Control installations from v5.3.0 onwards, will make all performance statistics available, with the exception of drive statistics.

This restriction has been lifted in a PTF.

8.2.0.0

Customers using IP Replication should not upgrade to v8.2.0 or later.

This issue has been resolved in PTF v8.2.0.2.

8.2.0.0

There is a known issue with 8-node systems and IBM Security Key Lifecycle Manager 3.0 that can cause the status of key server end points on the system to occasionally report as degraded or offline. The issue occurs intermittently when the system attempts to validate the key server but the server response times out to some of the nodes. When the issue occurs, Error Code 1785 (A problem occurred with the Key Server) will be visible in the system event log.

This issue will not cause any loss of access to encrypted data.

7.8.0.0

3. Issues Resolved

This release contains all of the fixes included in the 8.1.3.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2008-5161 ibm10874368 8.2.0.4
CVE-2018-5391 ibm10872368 8.2.0.4
CVE-2018-11776 ibm10741137 8.2.0.2
CVE-2017-17833 ibm10872546 8.2.0.2
CVE-2018-11784 ibm10872550 8.2.0.2
CVE-2018-5732 ibm10741135 8.2.0.0
CVE-2018-1517 ibm10872456 8.2.0.0
CVE-2018-2783 ibm10872456 8.2.0.0
CVE-2018-12539 ibm10872456 8.2.0.0
CVE-2018-1775 ibm10872486 8.2.0.0

3.2 APARs Resolved

APAR Affected Products Severity Description Resolved in Feature Tags
HU01918 All HIPER Where Data Reduction Pools have been created on earlier code levels, upgrading the system to an affected release can cause an increase in the level of concurrent flushing to disk. This may result in a loss of access to data. For more details refer to this Flash
Symptom Loss of Access to Data
Environment Systems running v8.1.3.4, v8.2.0.3 or v8.2.1.x using Data Reduction Pools
Trigger None
Workaround None
8.2.0.4 Data Reduction Pools
HU01920 All Critical An issue in the garbage collection process can cause node warmstarts and offline pools
Symptom Offline Volumes
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.4 Data Reduction Pools
HU01906 FS9100 HIPER Low-level hardware errors may not be recovered correctly, causing a canister to reboot. If multiple canisters reboot, this may result in loss of access to data
Symptom Multiple Node Warmstarts
Environment FlashSystem 9100 family systems
Trigger None
Workaround None
8.2.0.3 Reliability Availability Serviceability
HU01862 All Critical When a Data Reduction Pool is removed, and the -force option is specified, there may be a temporary loss of access
Symptom Loss of Access to Data
Environment Systems using Data Reduction Pools
Trigger Remove a Data Reduction Pool with the -force option
Workaround Do not use -force option when removing a Data Reduction Pool
8.2.0.3 Data Reduction Pools
HU01876 All Critical Where systems are connected to controllers that have FC ports capable of acting as both initiators and targets, node warmstarts can occur when NPIV is enabled
Symptom Loss of Access to Data
Environment Systems, with NPIV enabled, attached to host ports that can act as SCSI initiators and targets
Trigger Zone host initiator and target ports in with the target port WWPN then enable NPIV
Workaround Unzone host or disable NPIV
8.2.0.3 Backend Storage
HU01885 All Critical As writes are made to a Data Reduction Pool it is necessary to allocate new physical capacity. Under unusual circumstances it is possible for the handling of an expansion request to stall further I/O, leading to node warmstarts
Symptom Multiple Node Warmstarts
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.3 Data Reduction Pools
HU01934 FS9100 High Importance An issue in the handling of faulty canister components can lead to multiple node warmstarts for that canister
Symptom Multiple Node Warmstarts
Environment FlashSystem 9100 family systems
Trigger None
Workaround None
8.2.0.3 Reliability Availability Serviceability
HU01821 SVC Suggested An attempt to upgrade a two-node enhanced stretched cluster fails due to incorrect volume dependencies
Symptom None
Environment Systems configured as a two-node enhanced stretched cluster that are using Data Reduction Pools
Trigger Upgrade
Workaround Revert cluster to standard topology and remove site settings from nodes and controllers for the duration of the upgrade
8.2.0.3 Data Reduction Pools, System Update
HU01849 All Suggested An excessive number of SSH sessions may lead to a node warmstart
Symptom Single Node Warmstart
Environment All systems
Trigger Initiate a large number of SSH sessions (e.g. one session every 5 seconds)
Workaround Avoid initiating excessive numbers of SSH sessions
8.2.0.3 System Monitoring
IT25457 All Suggested Attempting to remove a copy of a volume that has at least one image mode copy and at least one thin/compressed copy in a Data Reduction Pool will always fail with a CMMVC8971E error
Symptom None
Environment Systems using Data Reduction Pools
Trigger Try to remove a copy of a volume that has at least one image mode copy and at least one thin/compressed copy in a Data Reduction Pool
Workaround Use svctask splitvdiskcopy to create a separate volume from the copy that should be deleted. This leaves the original volume with a single copy and creates a new volume from the copy that was split off. Then remove the newly created volume (a CLI sketch follows the APAR table).
8.2.0.3 Data Reduction Pools
IT26049 All Suggested An issue with CPU scheduling may cause the GUI to respond slowly
Symptom None
Environment Systems running v7.8 or later
Trigger None
Workaround None
8.2.0.3 Graphical User Interface
HU01828 All HIPER Node warmstarts may occur during deletion of deduplicated volumes due to a timing-related issue
Symptom Loss of Access to Data
Environment Systems using deduplicated volume copies
Trigger Deleting a deduplicated volume copy
Workaround Do not delete deduplicated volume copies
8.2.0.2 Deduplication
HU01847 All Critical FlashCopy handling of medium errors across a number of drives on backend controllers may lead to multiple node warmstarts
Symptom Loss of Access to Data
Environment Systems running v7.8.1 or later using FlashCopy
Trigger None
Workaround None
8.2.0.2 FlashCopy
HU01850 All Critical When the last deduplication-enabled volume copy in a Data Reduction Pool is deleted, the pool may go offline temporarily
Symptom Loss of Access to Data
Environment Systems using Data Reduction Pools with deduplicated volume copies
Trigger Delete last deduplication-enabled volume copy in a Data Reduction Pool
Workaround If a Data Reduction Pool contains volumes with deduplication enabled keep at least one of those volumes in the pool
8.2.0.2 Data Reduction Pools, Deduplication
HU02042 All Critical An issue in the handling of metadata, after a Data Reduction Pool recovery operation, can lead to repeated node warmstarts, putting an I/O group into a service state
Symptom Loss of Access to Data
Environment Systems using Data Reduction Pools
Trigger T3 recovery
Workaround None
8.2.0.2 Data Reduction Pools
HU01852 All High Importance The garbage collection rate can lead to Data Reduction Pools running out of space even though reclaimable capacity is available
Symptom None
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.2 Data Reduction Pools
HU01858 All High Importance Total used capacity of a Data Reduction Pool within a single I/O group is limited to 256TB. Garbage collection does not correctly recognise this limit. This may lead to a pool running out of free capacity and going offline
Symptom None
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.2 Data Reduction Pools
HU01881 FS9100 High Importance An issue within the compression card in FS9100 systems can result in the card being incorrectly flagged as failed, leading to warmstarts
Symptom Loss of Redundancy
Environment FS9100 systems
Trigger None
Workaround None
8.2.0.2 Compression
HU01564 All Suggested The FlashCopy map cleaning process does not monitor grains correctly, which may cause FlashCopy maps to fail to stop
Symptom None
Environment Systems using FlashCopy
Trigger None
Workaround None
8.2.0.2 FlashCopy
HU01760 All Suggested FlashCopy map progress appears to be stuck at zero percent
Symptom None
Environment Systems using FlashCopy
Trigger None
Workaround None
8.2.0.2 FlashCopy
HU01815 All Suggested In Data Reduction Pools, volume size is limited to 96TB
Symptom None
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.2 Data Reduction Pools
HU01851 All HIPER When a deduplicated volume is deleted, there may be multiple node warmstarts and offline pools
Symptom Loss of Access to Data
Environment Systems running v8.1.3 or later using Deduplication
Trigger Delete a deduplicated volume
Workaround None
8.2.0.1 Data Reduction Pools, Deduplication
HU01913 All HIPER A timing window issue in the DRAID6 rebuild process can cause node warmstarts with the possibility of a loss of access
Symptom Loss of Access to Data
Environment Systems using DRAID
Trigger None
Workaround None
8.2.0.0 Distributed RAID
HU01758 All Critical After an unexpected power loss, all nodes in a cluster may warmstart repeatedly, necessitating a Tier 3 recovery
Symptom Loss of Access to Data
Environment All systems
Trigger Power outage
Workaround None
8.2.0.0 RAID
HU01848 All Critical During an upgrade, systems with a large AIX VIOS setup may have multiple node warmstarts with the possibility of a loss of access to data
Symptom Loss of Access to Data
Environment Systems presenting storage to large IBM AIX VIOS configurations
Trigger None
Workaround None
8.2.0.0 System Update
IT25850 All Critical I/O performance may be adversely affected towards the end of DRAID rebuilds. For some systems there may be multiple warmstarts, leading to a loss of access
Symptom Loss of Access to Data
Environment Systems using DRAID
Trigger None
Workaround None
8.2.0.0 Distributed RAID
HU01661 All High Importance A cache-protection mechanism flag setting can become stuck, leading to repeated stops of consistency group synchronisation
Symptom Loss of Redundancy
Environment Systems running v7.6 or later using remote copy
Trigger None
Workaround None
8.2.0.0 HyperSwap
HU01733 All High Importance Canister information for the High Density Expansion Enclosure may be incorrectly reported
Symptom Loss of Redundancy
Environment Systems using the High Density Expansion Enclosure (92F)
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
HU01761 All High Importance Entering multiple addmdisk commands in rapid succession to more than one storage pool may cause node warmstarts
Symptom Multiple Node Warmstarts
Environment Systems running v8.1 or later with two or more storage pools
Trigger Run multiple addmdisk commands to more than one storage pool at the same time
Workaround Pace addmdisk commands to one storage pool at a time (a CLI sketch follows the APAR table)
8.2.0.0 Backend Storage
HU01797 All High Importance Hitachi G1500 backend controllers may exhibit higher than expected latency
Symptom Performance
Environment Systems with Hitachi G1500 backend controllers
Trigger None
Workaround None
8.2.0.0 Backend Storage
HU00921 All Suggested A node warmstart may occur when an MDisk state change gives rise to duplicate discovery processes
Symptom Single Node Warmstart
Environment All systems
Trigger None
Workaround None
8.2.0.0
HU01276 All Suggested An issue in the handling of debug data from the FC adapter can cause a node warmstart
Symptom Single Node Warmstart
Environment Systems using 16Gb HBAs
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
HU01523 All Suggested An issue with FC adapter initialisation can lead to a node warmstart
Symptom Single Node Warmstart
Environment Systems using 16Gb HBAs
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
HU01571 All Suggested An upgrade can become stalled due to a node warmstart
Symptom Single Node Warmstart
Environment Systems undergoing a code upgrade
Trigger None
Workaround None
8.2.0.0 System Update
HU01657 SVC, V7000, V5000 Suggested The 16Gb FC HBA firmware may experience an issue with the detection of unresponsive links, leading to a single node warmstart
Symptom Single Node Warmstart
Environment Systems using 16Gb HBAs
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
HU01667 All Suggested A timing-window issue in the remote copy component may cause a node warmstart
Symptom Single Node Warmstart
Environment Systems using remote copy
Trigger None
Workaround None
8.2.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01719 All Suggested A node warmstart may occur due to a parity error in the HBA driver firmware
Symptom Single Node Warmstart
Environment Systems running v7.6 and later using 16Gb HBAs
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
HU01737 All Suggested On the Update System screen, for Test Only, if a valid code image is selected in the Run Update Test Utility dialog, then clicking the Test button will initiate a system update
Symptom None
Environment All systems
Trigger Select a valid code image in the "Run Update Test Utility" dialog and click "Test" button
Workaround Do not select a valid code image in the "Test utility" field of the "Run Update Test Utility" dialog
8.2.0.0 System Update
HU01765 All Suggested A node warmstart may occur when there is a delay to I/O at the secondary site
Symptom Single Node Warmstart
Environment Systems using remote copy
Trigger None
Workaround None
8.2.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
HU01786 All Suggested An issue in the monitoring of SSD write endurance can result in false 1215/2560 errors in the Event Log
Symptom None
Environment Systems running v7.7.1 or later with SSDs
Trigger None
Workaround None
8.2.0.0 Drives
HU01791 All Suggested Using the chhost command will remove stored CHAP secrets
Symptom Configuration
Environment Systems using iSCSI
Trigger Run the "chhost -gui -name <host name> <host id>" command after configuring CHAP secret
Workaround Set the CHAP secret whenever changing the host name (a CLI sketch follows the APAR table)
8.2.0.0 iSCSI
HU01807 All Suggested The lsfabric command may show incorrect local node id and local node name for some Fibre Channel logins
Symptom None
Environment All systems
Trigger None
Workaround Use the local WWPN and reference the node in lsportfc to get the correct information (a CLI sketch follows the APAR table)
8.2.0.0 Command Line Interface
HU01811 All Suggested DRAID rebuilds for large (>10TB) drives may require lengthy metadata processing, leading to a node warmstart
Symptom Single Node Warmstart
Environment Systems using DRAID
Trigger None
Workaround None
8.2.0.0 Distributed RAID
HU01817 All Suggested Volumes used for vVols metadata or cloud backup that are associated with a FlashCopy mapping cannot be included in any further FlashCopy mappings
Symptom Configuration
Environment Systems using vVols or TCT
Trigger None
Workaround None
8.2.0.0 FlashCopy
HU01856 All Suggested A garbage collection process can time out waiting for an event in the partner node, resulting in a node warmstart
Symptom Single Node Warmstart
Environment Systems using Data Reduction Pools
Trigger None
Workaround None
8.2.0.0 Data Reduction Pools
HU02028 All Suggested An issue with timer cancellation in the Remote Copy component may cause a node warmstart
Symptom Single Node Warmstart
Environment Systems using Remote Copy
Trigger None
Workaround None
8.2.0.0 Global Mirror, Global Mirror with Change Volumes, Metro Mirror
IT19561 All Suggested An issue with register clearance in the FC driver code may cause a node warmstart
Symptom Single Node Warmstart
Environment Systems using 16Gb HBAs
Trigger None
Workaround None
8.2.0.0 Reliability Availability Serviceability
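The following CLI sketches illustrate the workarounds referenced above. The volume, host, MDisk, pool and WWPN names are illustrative only; verify the exact command syntax against the CLI reference for your installed code level.

For IT25457, the workaround is to split the unwanted copy into a separate volume and then delete that new volume. A minimal sketch, assuming a volume named myvolume whose copy 1 should be removed:

  # List the copies of the volume to identify the copy id to be removed
  svcinfo lsvdiskcopy myvolume

  # Split the unwanted copy off into a new, separate volume
  svctask splitvdiskcopy -copy 1 -name myvolume_oldcopy myvolume

  # Remove the newly created volume, leaving the original volume with a single copy
  svctask rmvdisk myvolume_oldcopy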
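For HU01761, the workaround is to pace addmdisk commands so that MDisks are added to only one storage pool at a time. A minimal sketch, assuming MDisks mdisk4, mdisk5 and mdisk6 and pools Pool0 and Pool1:

  # Add MDisks to the first pool and allow the command to complete
  svctask addmdisk -mdisk mdisk4:mdisk5 Pool0

  # Only then add MDisks to a second pool
  svctask addmdisk -mdisk mdisk6 Pool1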
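For HU01791, the workaround is to re-apply the CHAP secret whenever the host name is changed. A minimal sketch, assuming host id 0, a new host name of newhostname and a CHAP secret of mysecret:

  # Rename the host (this can clear the stored CHAP secret on affected levels)
  svctask chhost -name newhostname 0

  # Re-apply the CHAP secret for the host immediately afterwards
  svctask chhost -chapsecret mysecret 0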
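For HU01807, the workaround is to take the local WWPN reported by lsfabric and look it up in lsportfc to identify the correct local node. A minimal sketch, assuming an illustrative WWPN of 5005076801234567:

  # Note the local_wwpn value reported for the affected login
  svcinfo lsfabric -wwpn 5005076801234567

  # Find the same WWPN in the lsportfc output to obtain the correct node id and node name
  svcinfo lsportfc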

4. Useful Links

Description Link
Support Websites
Update Matrices, including detailed build versions
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning