Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.4 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.4.0.0 and 8.4.0.16. This document will be updated with additional information whenever a PTF is released.

Note: The v8.4.0.4 release is only provided pre-installed on new systems and will not be available on IBM Fix Central.

This document was last updated on 12 March 2025.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.4.0.13 release:

The following new features have been introduced in the 8.4.0.1 release:

The following new features have been introduced in the 8.4.0 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Each entry below describes the issue or restriction, followed by the release in which it was introduced (or n/a where not applicable).

The 3-site orchestrator is not compatible with the new SSH security level 4. This will be resolved in a future release of the orchestrator.

8.4.0.13

Due to a known issue which may occur following a cluster outage while a DRAID1 array is expanding, expansion of DRAID1 arrays is not supported on 8.4.0 and higher.

This is a known issue that will be lifted in a future PTF. The fix can be tracked using APAR SVAPAR-132123.

8.4.0.0

Customers planning to upgrade to v8.4.0 or later should be aware that an update to OpenSSH has removed support for DSA keys. An update to OpenSSL has also removed support for 1024-bit RSA keys used in SSL certificates.

Customers currently using DSA public keys for SSH access will need to generate new keys using alternative ciphers, such as RSA or ECDSA. If using RSA public keys for SSH access, it is recommended to use keys of 2048 bits or longer.
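As an illustrative sketch (the key lengths and file names below are examples rather than required values), replacement keys can be generated on the client workstation with standard OpenSSH tooling:

  ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_svc      # 4096-bit RSA key pair
  ssh-keygen -t ecdsa -b 521 -f ~/.ssh/id_ecdsa_svc   # ECDSA key pair on the NIST P-521 curve

The new public key then needs to be associated with the relevant Spectrum Virtualize user, for example via the management GUI or the chuser command, before the old DSA key is removed.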

Customers currently using 1024-bit RSA keys in SSL certificates will need to generate new SSL certificates using 2048-bit RSA keys or ECDSA keys. This applies not only to the system certificate, but also to any SSL certificates used by external services such as LDAP servers.
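As an example for an external service such as an LDAP server (file names are placeholders, and the exact procedure depends on the service and its certificate authority), a new 2048-bit RSA key and certificate signing request can be generated with OpenSSL:

  openssl req -new -newkey rsa:2048 -nodes -keyout ldap-server.key -out ldap-server.csr

The system certificate on the Spectrum Virtualize cluster itself should be regenerated or replaced using the system's own certificate management facilities.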

8.4.0.0

There is an existing limit of approximately 780 entries on the number of files that can be returned by the CLI. In many configurations this limit is of no concern. However, due to a problem with hot-spare node I/O stats files, 8-node clusters with many hardware upgrades or multiple spare nodes may see up to 900 I/O stats files. As a consequence, the data collector for Storage Insights and Spectrum Control cannot list or download the required set of performance statistics data. The result is many gaps in the performance data, leading to errors with the performance monitoring tools and a lack of performance history.

The workaround is to remove the files associated with spare nodes or previously upgraded hardware using the cleardumps command (or to clear the entire iostats directory with cleardumps).
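A minimal sketch of the workaround, assuming the statistics files are in the default /dumps/iostats directory, is to run the cleardumps command against each node that holds the stale files:

  cleardumps -prefix /dumps/iostats <node_id_or_name>

Note that this removes the existing I/O statistics files on that node, so any performance history not yet collected by Storage Insights or Spectrum Control will be lost.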

This is a known issue that will be lifted in a future PTF. The fix can be tracked using APAR HU02403.

8.4.0.0

Customers using Spectrum Control v5.3.2 or earlier may notice some discrepancies between the attributes displayed in Spectrum Control and the related attributes shown in the Spectrum Virtualize GUI.

This issue will be resolved by a future release of Spectrum Control.

8.3.1.0

Under some circumstances, the "Monitoring > System" GUI screen may not load completely.

This is a known issue that will be lifted in a future PTF.

8.3.1.0

Customers using iSER-attached hosts with Mellanox 25G adapters should be aware that IPv6 sessions will not fail over, for example during a cluster upgrade.

This is a known issue that may be lifted in a future PTF.

8.3.0.0

Systems with NPIV enabled that present storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

n/a
If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid but have now been lifted:

For systems using remote copy without the 3-site functionality it is possible to trigger a resource leak when opening the remote copy panel in the GUI. After the remote copy panel has been opened multiple times this resource leak can reach the point where the GUI becomes unresponsive.

Restarting the GUI Tomcat service will restore GUI access.
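As a hedged example (confirm the command against the documentation for the installed code level), the Tomcat web service can typically be restarted from the service assistant CLI without affecting host I/O:

  satask restartservice -service tomcat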

This issue has been resolved by APAR HU02319.

8.4.0.1

Due to an issue in IP Replication, customers using this feature should not upgrade to v8.4.0.0 or later.

This issue has been resolved by APAR HU02340.

8.4.0.0

Customers with SVC model SV1 systems that have iWARP 25G adapters, presenting storage to iSCSI hosts, should not upgrade those systems to v8.4.0.0 or later.

This issue has been resolved by APAR HU02301.

8.4.0.0

Due to a performance issue, customers using Enhanced Callhome with censored mode should not upgrade to v8.4.0.0 or later.

This issue has been resolved by APAR HU02300.

8.4.0.0

Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256.

This issue has been resolved by APARs HU02303 & HU02305 in PTF v8.4.0.2.

8.3.1.0

Validation in the Upload Support Package feature will reject the new case number format in the PMR field.

This issue has been resolved by APAR HU02392.

7.8.1.0

3. Issues Resolved

This release contains all of the fixes included in the 8.3.1.2 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier / Link for additional Information / Resolved in
CVE-2024-21235 7181926 8.4.0.16
CVE-2024-21217 7181926 8.4.0.16
CVE-2024-21210 7181926 8.4.0.16
CVE-2024-21208 7181926 8.4.0.16
CVE-2024-10917 7181926 8.4.0.16
CVE-2023-29483 7181927 8.4.0.16
CVE-2024-1737 7181928 8.4.0.16
CVE-2024-1975 7181928 8.4.0.16
CVE-2023-52881 7181929 8.4.0.16
CVE-2023-1073 7161786 8.4.0.15
CVE-2023-45871 7161786 8.4.0.15
CVE-2023-6356 7161786 8.4.0.15
CVE-2023-6535 7161786 8.4.0.15
CVE-2023-6536 7161786 8.4.0.15
CVE-2023-1206 7161786 8.4.0.15
CVE-2023-5178 7161786 8.4.0.15
CVE-2024-2961 7161779 8.4.0.15
CVE-2023-50387 7161793 8.4.0.15
CVE-2023-50868 7161793 8.4.0.15
CVE-2020-28241 7161793 8.4.0.15
CVE-2023-4408 7161793 8.4.0.15
CVE-2023-44487 7156535 8.4.0.14
CVE-2023-1667 7156535 8.4.0.14
CVE-2023-2283 7156535 8.4.0.14
CVE-2024-20952 7156536 8.4.0.14
CVE-2024-20918 7156536 8.4.0.14
CVE-2024-20921 7156536 8.4.0.14
CVE-2024-20919 7156536 8.4.0.14
CVE-2024-20926 7156536 8.4.0.14
CVE-2024-20945 7156536 8.4.0.14
CVE-2023-33850 7156536 8.4.0.14
CVE-2024-23672 7156538 8.4.0.14
CVE-2024-24549 7156538 8.4.0.14
CVE-2023-48795 7154643 8.4.0.13
CVE-2023-22081 7114770 8.4.0.13
CVE-2023-22067 7114770 8.4.0.13
CVE-2023-5676 7114770 8.4.0.13
CVE-2023-46589 7114769 8.4.0.13
CVE-2023-45648 7114769 8.4.0.13
CVE-2023-42795 7114769 8.4.0.13
CVE-2024-21733 7114769 8.4.0.13
CVE-2023-50164 7114768 8.4.0.13
CVE-2023-43042 7064976 8.4.0.12
CVE-2023-21930 7065011 8.4.0.12
CVE-2023-21937 7065011 8.4.0.12
CVE-2023-21938 7065011 8.4.0.12
CVE-2023-34396 7065010 8.4.0.12
CVE-2022-21626 6858041 8.4.0.10
CVE-2022-1012 6858043 8.4.0.10
CVE-2021-45485 6858043 8.4.0.10
CVE-2021-45486 6858043 8.4.0.10
CVE-2022-43873 6858047 8.4.0.10
CVE-2022-42252 6858039 8.4.0.10
CVE-2022-43870 6858045 8.4.0.10
CVE-2022-0778 6622017 8.4.0.9
CVE-2021-35603 6622019 8.4.0.9
CVE-2021-35550 6622019 8.4.0.9
CVE-2018-25032 6622021 8.4.0.7
CVE-2022-25762 6622023 8.4.0.7
CVE-2021-38969 6584337 8.4.0.6
CVE-2021-42340 6541270 8.4.0.6
CVE-2021-29873 6497111 8.4.0.5
CVE-2020-10732 6497113 8.4.0.5
CVE-2020-10774 6497113 8.4.0.5
CVE-2021-33037 6497115 8.4.0.5
CVE-2020-2781 6445063 8.4.0.0
CVE-2020-13935 6445063 8.4.0.0
CVE-2020-14577 6445063 8.4.0.0
CVE-2020-14578 6445063 8.4.0.0
CVE-2020-14579 6445063 8.4.0.0

3.2 APARs Resolved

APAR / Affected Products / Severity / Description / Resolved in / Feature Tags
SVAPAR-94179 FS5100, FS5200, FS7200, FS9100, FS9200, V7000 HIPER Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node (show details) 8.4.0.12 Reliability Availability Serviceability
SVAPAR-89694 All HIPER Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash (show details) 8.4.0.11
SVAPAR-84116 All Critical The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed (show details) 8.4.0.11 Data Reduction Pools, Deduplication
HU02420 All Critical During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, causing a temporary loss of access (show details) 8.4.0.10 RAID
HU02471 All Critical After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue (show details) 8.4.0.10 FlashCopy, Global Mirror With Change Volumes
HU02561 All Critical If there are a high number of FC mappings sharing the same target, the internal array that is used to track the FC mapping is mishandled, thereby causing it to overrun. This will cause a cluster wide warmstart to occur (show details) 8.4.0.10 FlashCopy
HU02563 All Critical Improve DIMM slot identification for memory errors (show details) 8.4.0.10 Reliability Availability Serviceability
IT41088 FS5000, FS5100, FS5200, V5000, V5100 Critical Systems with low memory that have a large number of RAID arrays that are resyncing can cause a system to run out of RAID rebuild control blocks (show details) 8.4.0.10 RAID
SVAPAR-86139 All Critical Failover for VMware iSER hosts may pause I/O for more than 120 seconds (show details) 8.4.0.10 Hosts
HU01782 All High Importance A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC (show details) 8.4.0.10 Drives
HU02010 All High Importance A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance (show details) 8.4.0.10 RAID
HU02088 All High Importance There can be multiple node warmstarts when no mailservers are configured (show details) 8.4.0.10 System Monitoring
HU02439 All High Importance An IP partnership between a pre-v8.4.2 system and v8.4.2 or later system may be disconnected because of a keepalive timeout (show details) 8.4.0.10 IP Replication
HU02555 All High Importance A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured (show details) 8.4.0.10 LDAP
HU02562 All High Importance A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations (show details) 8.4.0.10
HU02571 All High Importance In a Hyperswap cluster, a Tier 2 recovery may occur after manually shutting down both nodes that are in one IO group (show details) 8.4.0.10 Distributed RAID, HyperSwap, RAID
HU02597 All High Importance A single node may warmstart to recover from the situation where different fibres update the completed count for the allocation extent in question (show details) 8.4.0.10 Data Reduction Pools
IT41447 All High Importance When removing the DNS server configuration, a node may discover unexpected metadata and warmstart (show details) 8.4.0.10 Reliability Availability Serviceability
IT41835 All High Importance A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type (show details) 8.4.0.10 Drives
SVAPAR-83290 FS5000 High Importance An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. (show details) 8.4.0.10
SVAPAR-84305 All High Importance A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter (show details) 8.4.0.10 System Monitoring
SVAPAR-84331 All High Importance A node may warmstart when the 'lsnvmefabric -remotenqn' command is run (show details) 8.4.0.10 NVMe
SVAPAR-85396 FS5000, FS5100, FS5200, FS7200, FS9100, FS9200 High Importance Replacement Samsung NVME drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem (show details) 8.4.0.10 Drives
SVAPAR-85980 All High Importance iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x (show details) 8.4.0.10 Performance, System Update
SVAPAR-86035 All High Importance Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart (show details) 8.4.0.10 Data Reduction Pools
HU02367 All Suggested An issue with how RAID handles drive failures may lead to a node warmstart (show details) 8.4.0.10 RAID
HU02372 FS9100, SVC, V5000, V5100, V7000 Suggested Host SAS port 4 is missing from the GUI view on some systems. (show details) 8.4.0.10 Graphical User Interface
HU02391 All Suggested An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server (show details) 8.4.0.10 Graphical User Interface
HU02443 All Suggested An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart (show details) 8.4.0.10 RAID
HU02453 All Suggested It may not be possible to connect to GUI or CLI without a restart of the Tomcat server (show details) 8.4.0.10 Command Line Interface, Graphical User Interface
HU02463 All Suggested LDAP user accounts can become locked out because of multiple failed login attempts (show details) 8.4.0.10 Graphical User Interface, LDAP
HU02559 All Suggested A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information (show details) 8.4.0.10 Graphical User Interface
HU02564 All Suggested The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct (show details) 8.4.0.10 Distributed RAID
HU02593 All Suggested NVMe drive is incorrectly reporting end of life due to flash degradation (show details) 8.4.0.10 Drives
IT42403 All Suggested A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays due to the risk of data loss during an extended rebuild. This limit was intended to be 8 TiB; however, it was implemented as 8 TB. A 7.3 TiB drive has a capacity of 8.02 TB and as a result was incorrectly prevented from use in RAID5 (show details) 8.4.0.10 Distributed RAID, Drives, RAID
HU02475 All HIPER Power outage can cause reboots on nodes with 25Gb ethernet adapters, necessitating T3 recovery (show details) 8.4.0.9 Reliability Availability Serviceability
HU02511 All High Importance Code version 8.5.0 includes a change in the driver setting for the 25Gb ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms (show details) 8.4.0.9 Host Cluster, Hosts, SCSI Unmap, iSCSI
HU02532 All High Importance Nodes that are running 8.4.0.7 or 8.4.0.8, or upgrading to either of these levels may suffer asserts if NVME hosts are configured (show details) 8.4.0.9 NVMe, System Update
HU02518 FS5000, SVC, V5000, V7000 Critical Certain hardware platforms running 8.4.0.7 have an issue with the Trusted Platform Module (TPM). This causes issues communicating with encryption keyservers and invalid SSL certificates (show details) 8.4.0.8 Encryption
HU02402 All Critical The remote support feature may use more memory than expected causing a temporary loss of access (show details) 8.4.0.7 Support Remote Assist
HU02455 All Critical After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery (show details) 8.4.0.7 3-Site using HyperSwap or Metro Mirror
HU02482 All Critical An issue with 25Gb Ethernet adapter firmware can cause the node to warmstart should a specific signal be received from the iSER switch. It is possible for this signal to be propagated to all nodes, resulting in a loss of access to data (show details) 8.4.0.7 Interoperability, iSCSI
IT33912 All Critical A multi-drive code download may fail resulting in a Tier 2 recovery (show details) 8.4.0.7 Drives
IT41173 FS5200 Critical If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. (show details) 8.4.0.7 Reliability Availability Serviceability
HU02297 All High Importance Error handling for a failing backend controller can lead to multiple warmstarts (show details) 8.4.0.7 Backend Storage
HU02339 All High Importance Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data (show details) 8.4.0.7 Hosts, Interoperability
HU02370 All High Importance Replacing a drive will cause copyback to start, which can cause multiple node warmstarts to occur. (show details) 8.4.0.7 RAID
HU02466 All High Importance An issue in the handling of drive failures can result in multiple node warmstarts (show details) 8.4.0.7 RAID
HU02479 All High Importance If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur (show details) 8.4.0.7 Hosts
HU02497 All High Importance A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts (show details) 8.4.0.7 Hosts, Interoperability
HU02512 FS5000 High Importance An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts (show details) 8.4.0.7 Hosts
HU01209 All Suggested It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart (show details) 8.4.0.7 Storage Virtualisation
HU02171 All Suggested The timezone for Iceland is set incorrectly (show details) 8.4.0.7 Support Data Collection
HU02174 All Suggested A timing window issue related to remote copy memory allocation can result in a node warmstart (show details) 8.4.0.7 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02335 All Suggested Cannot properly set the site for a host in a multi-site configuration (hyperswap or stretched) via the GUI (show details) 8.4.0.7 3-Site using HyperSwap or Metro Mirror, Graphical User Interface, HyperSwap
HU02386 FS5100, FS7200, FS9100, FS9200, V7000 Suggested Enclosure fault LED can remain on due to race condition when location LED state is changed (show details) 8.4.0.7 System Monitoring
HU02450 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 Suggested A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart (show details) 8.4.0.7 Hosts
HU02452 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 Suggested An issue in NVMe I/O write functionality can cause a single node warmstart (show details) 8.4.0.7 NVMe
HU02474 All Suggested An SFP failure can cause a node warmstart (show details) 8.4.0.7 Reliability Availability Serviceability
IT33996 All Suggested An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart (show details) 8.4.0.7 RAID
IT40059 FS5200, FS7200, FS9200 Suggested Port to node metrics can appear inflated due to an issue in performance statistics aggregation (show details) 8.4.0.7 Inter-node messaging, System Monitoring
HU02296 All HIPER The zero page functionality can become corrupt causing a volume to be initialised with non-zero data (show details) 8.4.0.6 Storage Virtualisation
HU02226 All Critical Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster (show details) 8.4.0.6 Data Reduction Pools
HU02374 SVC, V5000, V7000 Critical Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports (show details) 8.4.0.6 Hosts
HU02409 All Critical If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive (show details) 8.4.0.6 Hosts, iSCSI
HU02410 SVC Critical A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery (show details) 8.4.0.6 Hot Spare Node
HU02423 All Critical Volume copies may be taken offline even though there is sufficient free capacity (show details) 8.4.0.6 Data Reduction Pools
HU02428 All Critical Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery (show details) 8.4.0.6 Command Line Interface, Global Mirror With Change Volumes
HU02434 All Critical An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline (show details) 8.4.0.6 FlashCopy
HU02440 All Critical Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery (show details) 8.4.0.6 Command Line Interface, Storage Virtualisation
HU02442 All Critical Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery (show details) 8.4.0.6 Command Line Interface
HU02343 All High Importance For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts (show details) 8.4.0.6 Backend Storage
HU02438 All High Importance Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact (show details) 8.4.0.6 Cache
IT38015 All High Importance During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts (show details) 8.4.0.6 RAID
HU02263 All Suggested The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only (show details) 8.4.0.6 Data Reduction Pools
HU02382 FS5100, FS7200, FS9100, FS9200, V5100, V7000 Suggested A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (eg during an upgrade) (show details) 8.4.0.6 System Update
HU02383 FS5100, FS7200, FS9100, FS9200, V7000 Suggested An additional 20 second IO delay can occur when a system update commits (show details) 8.4.0.6 System Update
HU02444 All Suggested Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node (show details) 8.4.0.6 Hosts, iSCSI
HU02418 All HIPER During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash (show details) 8.4.0.5 Distributed RAID, RAID
HU02384 SVC HIPER An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access (show details) 8.4.0.4 Reliability Availability Serviceability
HU02400 All HIPER A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area (show details) 8.4.0.4 Storage Virtualisation
DT112601 All Critical Deleting image mode mounted source volume while migration is ongoing could trigger Tier 2 recovery (show details) 8.4.0.4 Storage Virtualisation
HU02342 All Critical Occasionally when an offline drive returns to online state later than its peers in the same RAID array there can be multiple node warmstarts that send nodes into a service state (show details) 8.4.0.4 RAID
HU02393 All Critical Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group (show details) 8.4.0.4 Storage Virtualisation
HU02397 All Critical A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline (show details) 8.4.0.4 Data Reduction Pools
HU02401 All Critical EasyTier can move extents between identical mdisks until one runs out of space (show details) 8.4.0.4 EasyTier
HU02406 All Critical An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash (show details) 8.4.0.4 Interoperability
HU02414 All Critical Under specific sequence and timing of circumstances the garbage collection process can timeout and take a pool offline temporarily (show details) 8.4.0.4 Data Reduction Pools
HU02345 All High Importance When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance (show details) 8.4.0.4 HyperSwap, Metro Mirror
HU02388 FS5000, V5000 High Importance GUI can hang randomly due to an out of memory issue after running any task (show details) 8.4.0.4 Graphical User Interface
HU02422 All High Importance GUI performance can be degraded when displaying large numbers of volumes or other objects (show details) 8.4.0.4 Graphical User Interface
HU02306 All Suggested An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline (show details) 8.4.0.4 Hosts
HU02405 FS5200 Suggested An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros (show details) 8.4.0.4 Inter-node messaging
HU02426 All Suggested Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported then the system will be unable to send email alerts (show details) 8.4.0.4 System Monitoring
IT37654 All Suggested When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation (show details) 8.4.0.4 Encryption
HU02312 All HIPER Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash (show details) 8.4.0.3 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02340 All HIPER High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster (show details) 8.4.0.3 IP Replication
HU02373 All Critical An incorrect compression flag in metadata can take a DRP offline (show details) 8.4.0.3 Data Reduction Pools
HU02319 All High Importance The GUI can become unresponsive (show details) 8.4.0.3 Graphical User Interface
HU02326 SVC High Importance Delays in passing messages between nodes in an I/O group can adversely impact write performance (show details) 8.4.0.3 Performance
HU02360 All High Importance Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash (show details) 8.4.0.3 System Monitoring
HU02362 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 High Importance When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted (show details) 8.4.0.3 RAID
HU02376 All High Importance FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes (show details) 8.4.0.3 FlashCopy
HU02392 All High Importance Validation in the Upload Support Package feature will reject new case number formats in the PMR field (show details) 8.4.0.3 Support Data Collection
HU02325 All Suggested Tier 2 and Tier 3 recoveries can fail due to node warmstarts (show details) 8.4.0.3 Reliability Availability Serviceability
HU02331 All Suggested Due to a threshold issue an error code 3400 may appear too often in the event log (show details) 8.4.0.3 Compression
HU02332 & HU02336 All Suggested When an I/O is received, from a host, with invalid or inconsistent SCSI data but a good checksum it may cause a node warmstart (show details) 8.4.0.3 Hosts
HU02366 All Suggested Slow internal resource reclamation by the RAID component can cause a node warmstart (show details) 8.4.0.3 RAID
HU02375 All Suggested An issue in how the GUI handles volume data can adversely impact its responsiveness (show details) 8.4.0.3 Graphical User Interface
HU02381 All Suggested When the proxy server password is changed to one with more than 40 characters the config node will warmstart (show details) 8.4.0.3 Command Line Interface
HU02387 All Suggested When using the GUI the maximum Data Reduction Pools limitation incorrectly includes child pools (show details) 8.4.0.3 Data Reduction Pools
HU02425 All Suggested An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. (show details) 8.4.0.3 FlashCopy
HU02261 All HIPER A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash (show details) 8.4.0.2 Data Reduction Pools
HU02277 All HIPER RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash (show details) 8.4.0.2 RAID
HU02310 All HIPER Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data (show details) 8.4.0.2 Data Reduction Pools, FlashCopy, Global Mirror With Change Volumes
HU02313 FS5100, FS7200, FS9100, FS9200, V5100, V7000 HIPER When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash (show details) 8.4.0.2 Drives
HU02338 All HIPER An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image (show details) 8.4.0.2 FlashCopy
HU02282 All Critical After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline (show details) 8.4.0.2 Cache
HU02315 All Critical Failover for VMware iSER hosts may pause I/O for more than 120 seconds (show details) 8.4.0.2 Hosts
HU02321 All Critical Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries (show details) 8.4.0.2 iSCSI
HU02429 All Critical System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI (show details) 8.4.0.2 System Monitoring
HU02201 & HU02221 All High Importance Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors (show details) 8.4.0.2 Drives
HU02227 FS7200, FS9100, FS9200, SVC, V5100, V7000 High Importance Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline (show details) 8.4.0.2 Compression
HU02300 All High Importance Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM) (show details) 8.4.0.2 System Monitoring
HU02301 SVC High Importance iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts (show details) 8.4.0.2 iSCSI
HU02304 FS9100, V5100, V7000 High Importance Some RAID operations for certain NVMe drives may cause adverse I/O performance (show details) 8.4.0.2 RAID
HU02311 All High Importance An issue in volume copy flushing may lead to higher than expected write cache delays (show details) 8.4.0.2 Cache
HU02317 All High Importance A DRAID expansion can stall shortly after it is initiated (show details) 8.4.0.2 Distributed RAID
HU02095 All Suggested The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI (show details) 8.4.0.2 Graphical User Interface
HU02280 All Suggested Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown (show details) 8.4.0.2 System Monitoring
HU02291 All Suggested Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed (show details) 8.4.0.2 Cache, System Monitoring
HU02292 & HU02308 All Suggested The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart (show details) 8.4.0.2 Global Mirror
HU02303 & HU02305 All Suggested Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256 (show details) 8.4.0.2 Hosts
HU02419 All Suggested During creation of a drive FRU id the resulting unique number can contain a space character which can lead to CLI commands, that return this value, presenting it as a truncated string (show details) 8.4.0.2 Command Line Interface, Drives
IT34949 All Suggested lsnodevpd may show DIMM information in the wrong positions (show details) 8.4.0.2 Command Line Interface, Graphical User Interface
HU02186 FS5100, FS7200, FS9100, FS9200, V5100, V7000 HIPER NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash (show details) 8.4.0.0 RAID
HU02327 All HIPER Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten, by the new copy, producing blank copies. For more details refer to this Flash (show details) 8.4.0.0 Volume Mirroring
HU02058 All Critical Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery (show details) 8.4.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02092 All Critical The effectiveness of slow drain mitigation can become reduced causing fabric congestion to adversely impact all ports on an adapter (show details) 8.4.0.0 Reliability Availability Serviceability
HU02172 All Critical The CLI command lsdependentvdisks -enclosure X causes node warmstarts if no nodes are online in that enclosure (show details) 8.4.0.0 Command Line Interface
HU02184 All Critical When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide (show details) 8.4.0.0 Backend Storage
HU02196 & HU02253 All Critical A particular sequence of internode messaging delays can lead to a cluster wide lease expiry (show details) 8.4.0.0 Reliability Availability Serviceability
HU02210 All Critical There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume (show details) 8.4.0.0 HyperSwap
HU02213 SVC Critical A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash (show details) 8.4.0.0 Hot Spare Node
HU02225 All Critical An issue in the Thin Provisioning feature can lead to multiple warmstarts with the possibility of a loss of access to data (show details) 8.4.0.0 Thin Provisioning
HU02230 FS7200, FS9100, FS9200, V7000 Critical For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery (show details) 8.4.0.0 Drives
HU02232 All Critical Forced removal of large volumes in FlashCopy mappings can cause multiple node warmstarts with the possibility of a loss of access (show details) 8.4.0.0 FlashCopy
HU02262 SVC, V5000, V7000 Critical Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts (show details) 8.4.0.0 Drives
HU02266 All Critical An issue in auto-expand can cause expansion to fail and the volume to be taken offline (show details) 8.4.0.0 Thin Provisioning
HU02289 FS9200, SVC Critical An issue with internal resource allocation in high-end systems, with 1000s of mirror copies, may cause multiple warmstarts with the possibility of a loss of access (show details) 8.4.0.0 Volume Mirroring
HU02298 All Critical A high frequency of 1920 events and restarting of consistency groups may provoke a Tier 2 recovery (show details) 8.4.0.0 Global Mirror
HU02299 FS7200, FS9100, FS9200, V7000 Critical NVMe drives can become locked due to a missing encryption key condition (show details) 8.4.0.0 Drives
HU02314 FS5100, FS7200, FS9100, FS9200, V5100, V7000 Critical Due to a RAID issue when a bad block is detected on a NVMe drive there may be multiple node warmstarts with a possibility of a loss of access to data (show details) 8.4.0.0 Drives
HU02322 All Critical A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data (show details) 8.4.0.0 Data Reduction Pools
HU02323 All Critical Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data (show details) 8.4.0.0 Distributed RAID
HU02390 All Critical A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes (show details) 8.4.0.0 REST API
HU02467 All Critical When one node disappears from the cluster the surviving node can be unable to achieve quorum allegiance in a timely manner causing it to lease expire (show details) 8.4.0.0 Quorum
HU02153 All High Importance Fabric or host issues can cause aborted IOs to block the port throttle queue leading to adverse performance that is cleared by a node warmstart (show details) 8.4.0.0 Hosts
HU02156 All High Importance Global Mirror environments may experience more frequent 1920 events due to writedone message queuing (show details) 8.4.0.0 Global Mirror
HU02164 All High Importance An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted (show details) 8.4.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02170 FS7200, FS9100, FS9200, V7000 High Importance During NVMe SSD firmware upgrade processes peak read latency may reach 10sec (show details) 8.4.0.0 RAID
HU02194 All High Importance Password reset via USB drive does not work as expected and the user is not able to log in to the Management or Service Assistant GUI with the new password (show details) 8.4.0.0 Reliability Availability Serviceability
HU02250 All High Importance Duplicate volume names may cause multiple asserts (show details) 8.4.0.0 Storage Virtualisation
II14767 SVC High Importance An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash (show details) 8.4.0.0 Cache
IT33734 V5000 High Importance Lower cache partitions may fill up even though higher destage rates are available (show details) 8.4.0.0 Cache
IT33868 FS9100, SVC, V7000 High Importance Non-FCM NVMe drives may exhibit high write response times with the Spectrum Protect Blueprint script (show details) 8.4.0.0 Drives
IT36619 All High Importance After a node warmstart, system CPU utilisation may show an increase (show details) 8.4.0.0 RAID
HU01238 All Suggested The mishandling of performance stats may occasionally result in some entries being overwritten (show details) 8.4.0.0 System Monitoring
HU01977 All Suggested CLI commands can produce a return code of 1 even though execution was successful (show details) 8.4.0.0 Command Line Interface
HU02139 FS5100, FS9100, V5100, V7000 Suggested When 32Gbps FC adapters are fitted the maximum supported ambient temperature is decreased leading to more threshold exceeded errors in the Event Log (show details) 8.4.0.0 System Monitoring
HU02142 All Suggested It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing (show details) 8.4.0.0 Distributed RAID
HU02208 All Suggested An issue with the handling of files by quorum can lead to a node warmstart (show details) 8.4.0.0 Quorum
HU02239 All Suggested A rare race condition in the Xcopy function can cause a single node warmstart (show details) 8.4.0.0 Hosts
HU02241 All Suggested IP Replication can fail to create IP partnerships via the secondary cluster management IP (show details) 8.4.0.0 IP Replication
HU02245 All Suggested First support data collection fails to upload successfully (show details) 8.4.0.0 Support Data Collection
HU02251 All Suggested A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence (show details) 8.4.0.0 Hosts, iSCSI
HU02255 All Suggested A timing issue in the processing of login requests can cause a single node warmstart (show details) 8.4.0.0 Command Line Interface, Graphical User Interface
HU02265 All Suggested Enhanced inventory can sometimes be missing from callhome data due to the lsfabric command timing out (show details) 8.4.0.0 Support Data Collection
HU02267 All Suggested After upgrade it is possible for a node IP address to become duplicated with the cluster IP address and access to the config node to be lost as a consequence (show details) 8.4.0.0 Command Line Interface, Graphical User Interface
HU02334 All Suggested Node to node connectivity issues may trigger repeated logins/logouts resulting in a single node warmstart (show details) 8.4.0.0 Reliability Availability Serviceability
HU02353 All Suggested The GUI will refuse to start a GMCV relationship if one of the change volumes has an ID of 0 (show details) 8.4.0.0 Global Mirror With Change Volumes, Graphical User Interface
HU02358 All Suggested An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart (show details) 8.4.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02364 All Suggested False 989001 Managed Disk Group space warnings can be generated (show details) 8.4.0.0 System Monitoring
HU02424 All Suggested Frequent GUI refreshing adversely impacts usability on some screens (show details) 8.4.0.0 Graphical User Interface
IT32338 All Suggested Testing LDAP Authentication fails if username & password are supplied (show details) 8.4.0.0 LDAP

4. Useful Links

Support Websites
Update Matrices, including detailed build version
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning