This is the release note for the 8.4.2 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.4.2.0 and 8.4.2.1. This document will be updated with additional information whenever a PTF is released.
Note: This release is a Non-Long Term Support (Non-LTS) release. Non-LTS code levels are not intended to receive any PTFs. If issues are encountered, the likely resolution will be to upgrade to a later LTS or Non-LTS release.
For details of the new Continuous Development release strategy refer to the Spectrum Virtualize Family of Products Upgrade Planning page.
This document was last updated on 12 March 2025.
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.
Note: The following known issues and restrictions apply to the 8.4.2 release:
Details | Introduced |
---|---|
8.4.2.0 introduces Ethernet Portsets, which changes the way IP addresses are configured. The CIM (Common Information Model) interface can no longer configure IP addresses, and the CSI and Cinder interfaces require an ifix to work correctly. Scripts that use the CLI or REST API may also require changes (see the configuration sketch after this table). The System Center Operations Manager (SCOM) management pack does not support monitoring of iSCSI port information at 8.4.2. Refer to this page for details. This is a known issue that may be lifted in a future release. | 8.4.2.0 |
Customers using Microsoft Offload Data Transfer (ODX) should not upgrade to v8.4.2. This issue may be resolved by a future release. | 8.4.2.0 |
Customers using Spectrum Control v5.4.3 or earlier may notice that IP port status is incorrectly shown as "Unconfigured". This issue will be resolved by a future release of Spectrum Control. | 8.4.2.0 |
Due to a known issue that may occur following a cluster outage while a DRAID1 array is expanding, expansion of DRAID1 arrays is not supported on 8.4.0 and higher. This is a known issue that will be lifted in a future PTF. The fix can be tracked using APAR SVAPAR-132123. | 8.4.0.0 |
There is an existing limit of approximately 780 entries on the number of files that can be returned by the CLI. In many configurations this limit is of no concern. However, due to a problem with hot-spare node IO stats files, 8-node clusters with many hardware upgrades or multiple spare nodes may see up to 900 IO stats files. As a consequence, the data collector for Storage Insights and Spectrum Control cannot list or download the required set of performance statistics data. The result is many gaps in the performance data, leading to errors with the performance monitoring tools and a lack of performance history. The workaround is to remove the files associated with spare nodes or previously updated hardware using the cleardumps command, or to clear the entire iostats directory (see the example after this table). This is a known issue that will be lifted in a future release. The fix can be tracked using APAR HU02403. | 8.4.0.0 |
Systems with NPIV enabled, presenting storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power, can experience path loss or read-only file system events. This is caused by issues within the ibmvfc driver and VIOS code. Refer to this troubleshooting page for more information. | n/a |
If an update stalls or fails, contact IBM Support for further assistance. | n/a |
The following restrictions were valid but have now been lifted: | |
The CLI command 'lsportip' was removed in 8.4.2.0 and replaced with a new command 'lsip'. This impacts interoperability with any tools that rely on lsportip, and prevented Veeam from working correctly with Spectrum Virtualize systems running 8.4.2 or higher. This issue has now been resolved, as Veeam Backup and Replication Version 12 no longer has this restriction. | 8.4.2.0 |
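For scripts affected by the Ethernet Portsets change, the following is a minimal sketch of the replacement CLI commands. The portset name, node name, port ID and addresses below are placeholders, and exact parameters vary by platform and code level; consult the Command-Line Interface documentation for your release before updating any automation.

```
# List configured IP addresses (replaces the removed lsportip command)
lsip

# Assign an IP address to an Ethernet port through a portset
# (node1, port 5 and the addresses below are example values only)
mkip -node node1 -port 5 -portset portset0 -ip 192.168.10.20 -prefix 24 -gw 192.168.10.1
```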
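For the IO stats file limit described above, the workaround can be run from the CLI. A minimal example, assuming the standard /dumps/iostats directory; note that clearing the whole directory removes the performance statistics history for the affected node:

```
# Remove the contents of the iostats directory on the configuration node
cleardumps -prefix /dumps/iostats

# Alternatively, target a specific node by appending its node ID or name
cleardumps -prefix /dumps/iostats node2
```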
A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.
CVE Identifier | Link for additional Information | Resolved in
---|---|---
CVE-2021-42340 | 6541270 | 8.4.2.1 |
CVE-2021-29873 | 6497111 | 8.4.2.0 |
CVE-2020-10732 | 6497113 | 8.4.2.0 |
CVE-2020-10774 | 6497113 | 8.4.2.0 |
CVE-2021-33037 | 6497115 | 8.4.2.0 |
APAR | Affected Products | Severity | Description | Resolved in | Feature Tags
---|---|---|---|---|---
HU02418 | All | HIPER | During a DRAID array rebuild, data can be written to an incorrect location. For more details refer to this Flash | 8.4.2.1 | Distributed RAID, RAID
HU02406 | All | Critical | An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade), with the potential for a loss of access. For more details refer to this Flash | 8.4.2.1 | Interoperability
HU02421 | All | Critical | A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access | 8.4.2.1 | Reliability Availability Serviceability
HU02430 | All | Critical | Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state | 8.4.2.1 | FlashCopy
HU02435 | All | Critical | The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools | 8.4.2.1 | Data Reduction Pools
HU02441 & HU02486 | All | Critical | Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts | 8.4.2.1 | Data Reduction Pools, Safeguarded Copy & Safeguarded Snapshots
HU02296 | All | HIPER | The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data | 8.4.2.0 | Storage Virtualisation
HU02384 | SVC | HIPER | An inter-node message queue can become stalled, leading to an I/O timeout warmstart and temporary loss of access | 8.4.2.0 | Reliability Availability Serviceability
DT112601 | All | Critical | Deleting an image mode mounted source volume while migration is ongoing could trigger a Tier 2 recovery | 8.4.2.0 | Storage Virtualisation
HU02217 | All | Critical | Incomplete re-synchronisation following a Tier 3 recovery can lead to RAID inconsistencies | 8.4.2.0 | RAID
HU02295 | SVC | Critical | When upgrading from v8.2.1 or v8.3 in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery | 8.4.2.0 | System Update
HU02309 | All | Critical | Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries | 8.4.2.0 | Global Mirror With Change Volumes
HU02328 | FS5100, FS7200, FS9100, FS9200, V5100, V7000 | Critical | Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry | 8.4.2.0 | NVMe
HU02349 | All | Critical | Using an incorrect FlashCopy consistency group ID to stop a consistency group will result in a Tier 2 recovery if the incorrect ID is greater than 501 | 8.4.2.0 | FlashCopy
HU02368 | All | Critical | When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts with the possibility of a loss of access | 8.4.2.0 | HyperSwap
HU02373 | All | Critical | An incorrect compression flag in metadata can take a DRP offline | 8.4.2.0 | Data Reduction Pools
HU02378 | All | Critical | Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access | 8.4.2.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02393 | All | Critical | Automatic resize of compressed/thin volumes may fail, causing warmstarts on both nodes in an I/O group | 8.4.2.0 | Storage Virtualisation
HU02397 | All | Critical | A Data Reduction Pool with deduplication enabled can retain some stale state after deletion and recreation. This has no immediate effect; however, if a node later goes offline, this condition can cause the pool to be taken offline | 8.4.2.0 | Data Reduction Pools
HU02410 | SVC | Critical | A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery | 8.4.2.0 | Hot Spare Node
HU02414 | All | Critical | Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily | 8.4.2.0 | Data Reduction Pools
HU02423 | All | Critical | Volume copies may be taken offline even though there is sufficient free capacity | 8.4.2.0 | Data Reduction Pools
HU02088 | All | High Importance | There can be multiple node warmstarts when no mailservers are configured | 8.4.2.0 | System Monitoring
HU02127 | All | High Importance | 32Gbps FC ports will auto-negotiate to 8Gbps if they are connected to a 16Gbps Cisco switch port | 8.4.2.0 | Performance
HU02273 | All | High Importance | When the write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen | 8.4.2.0 | HyperSwap
HU02297 | All | High Importance | Error handling for a failing backend controller can lead to multiple warmstarts | 8.4.2.0 | Backend Storage
HU02345 | All | High Importance | When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance | 8.4.2.0 | HyperSwap, Metro Mirror
HU02388 | FS5000, V5000 | High Importance | The GUI can hang randomly due to an out-of-memory issue after running any task | 8.4.2.0 | Graphical User Interface
HU02422 | All | High Importance | GUI performance can be degraded when displaying large numbers of volumes or other objects | 8.4.2.0 | Graphical User Interface
IT40370 | FS5200 | High Importance | An issue in the PCI fault recovery mechanism may cause a node to constantly reboot | 8.4.2.0 | Reliability Availability Serviceability
HU02171 | All | Suggested | The timezone for Iceland is set incorrectly | 8.4.2.0 | Support Data Collection
HU02174 | All | Suggested | A timing window issue related to remote copy memory allocation can result in a node warmstart | 8.4.2.0 | Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02243 | All | Suggested | The DMP for a 1670 event (replace CMOS) will shut down a node without confirmation from the user | 8.4.2.0 | GUI Fix Procedure
HU02263 | All | Suggested | The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings rather than the thin-provisioning savings only | 8.4.2.0 | Data Reduction Pools
HU02274 | All | Suggested | Due to a timing issue in how events are handled, an active quorum loss and re-acquisition cycle can be triggered with a 3124 error | 8.4.2.0 | Quorum
HU02306 | All | Suggested | An offline host port can still be shown as active in lsfabric, and the associated host can be shown as online despite being offline | 8.4.2.0 | Hosts
HU02346 | All | Suggested | A mismatch between the LBA stored by the snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart | 8.4.2.0 | Thin Provisioning
HU02366 | All | Suggested | Slow internal resource reclamation by the RAID component can cause a node warmstart | 8.4.2.0 | RAID
HU02367 | All | Suggested | An issue with how RAID handles drive failures may lead to a node warmstart | 8.4.2.0 | RAID
HU02381 | All | Suggested | When the proxy server password is changed to one with more than 40 characters, the config node will warmstart | 8.4.2.0 | Command Line Interface
HU02382 | FS5100, FS7200, FS9100, FS9200, V5100, V7000 | Suggested | A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (e.g. during an upgrade) | 8.4.2.0 | System Update
HU02383 | FS5100, FS7200, FS9100, FS9200, V7000 | Suggested | An additional 20 second IO delay can occur when a system update commits | 8.4.2.0 | System Update
HU02385 | All | Suggested | Unexpected emails from the Inventory Script can be found on the mailserver | 8.4.2.0 | System Monitoring
HU02386 | FS5100, FS7200, FS9100, FS9200, V7000 | Suggested | The enclosure fault LED can remain on due to a race condition when the location LED state is changed | 8.4.2.0 | System Monitoring
HU02405 | FS5200 | Suggested | An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros | 8.4.2.0 | Inter-node messaging
HU02411 | FS5100, FS7200, FS9100, FS9200, V5100, V7000 | Suggested | An issue in the NVMe drive presence checking can result in a node warmstart | 8.4.2.0 | NVMe
HU02419 | All | Suggested | During creation of a drive FRU ID, the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string | 8.4.2.0 | Command Line Interface, Drives
HU02425 | All | Suggested | An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition | 8.4.2.0 | FlashCopy
HU02426 | All | Suggested | Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported the system will be unable to send email alerts (see the diagnostic sketch after this table) | 8.4.2.0 | System Monitoring
IT33996 | All | Suggested | An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart | 8.4.2.0 | RAID
IT34958 | All | Suggested | During a system update, a node returning to the cluster after upgrade may warmstart | 8.4.2.0 | System Update
IT37654 | All | Suggested | When creating a new encrypted array, the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear, preventing array creation | 8.4.2.0 | Encryption
IT38858 | All | Suggested | Unable to resume the Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E | 8.4.2.0 | Graphical User Interface
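Related to HU02426 above: where email alerts stop being sent, one generic way to confirm whether a mail server will complete a TLS v1.2 handshake after STARTTLS is the standard openssl client. This is a diagnostic sketch only, not part of the product CLI, and the server name and port are placeholders:

```
# Attempt an SMTP STARTTLS handshake restricted to TLS v1.2
# (replace mailserver.example.com:25 with the configured email server)
openssl s_client -connect mailserver.example.com:25 -starttls smtp -tls1_2
```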
Description | Link
---|---
Support Websites |
Update Matrices, including detailed build versions |
Support Information pages providing links to the following information: |
Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference |
Software Upgrade Test Utility |
Software Upgrade Planning |