Release Note for systems built with IBM Spectrum Virtualize


This is the release note for the 8.5 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.5.0.0 and 8.5.0.13. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 29 October 2024.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.5.0.12 PTF release:

The following new features have been introduced in the 8.5.0 release:

The following features were first introduced in Non-LTS release 8.4.2.0:

The following features were first introduced in Non-LTS release 8.4.1.0:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.

The default delay for upgrade between nodes in an I/O group has changed from 30 minutes to 10 minutes, in order to reduce the overall upgrade duration.

This default can be modified in the GUI, or using the "-delay" option on the applysoftware CLI command.
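For example, a minimal sketch of restoring the previous 30-minute delay for a single update (the package file name here is illustrative):

  applysoftware -file IBM_INSTALL_8.5.0.13 -delay 30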

Introduced: 8.5.0.0

Customers using Spectrum Control v5.4.3 or earlier may notice that IP port status is incorrectly shown as "Unconfigured".

This issue will be resolved by a future release of Spectrum Control.

Introduced: 8.4.2.0

Customers planning to upgrade to v8.4.0 or later should be aware that an update to OpenSSH has terminated support for all DSA keys. An update to OpenSSL has also terminated support for 1024-bit RSA keys used in SSL certificates.

Customers currently using DSA public keys for SSH access will need to generate new keys using alternative ciphers, such as RSA or ECDSA. If using RSA public keys for SSH access, it is recommended to use keys of 2048 bits or longer.
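For example, a minimal sketch of generating replacement keys on a client workstation (the file names are illustrative):

  ssh-keygen -t ecdsa -b 521 -f ~/.ssh/specv_ecdsa
  ssh-keygen -t rsa -b 4096 -f ~/.ssh/specv_rsa

The resulting public key can then be associated with the relevant system user, for example via the GUI or the 'chuser' command.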

Customers currently using 1024-bit RSA keys in SSL certificates will need to generate new SSL certificates using 2048-bit RSA keys or ECDSA keys. This applies not only to the system certificate, but also to any SSL certificates used by external services such as LDAP servers.
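For example, a minimal sketch of creating a 2048-bit RSA private key and certificate signing request with OpenSSL for an external service (file names are illustrative):

  openssl req -new -newkey rsa:2048 -nodes -keyout ldap-server.key -out ldap-server.csr

The system certificate itself is managed on the system, for example with the 'chsystemcert' command.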

Introduced: 8.4.0.0

The CLI can return a limited number of files, approximately 780 entries. In many configurations this limit is of no concern. However, due to a problem with hot-spare node I/O stats files, 8-node clusters with many hardware upgrades or multiple spare nodes may see up to 900 I/O stats files. As a consequence, the data collector for Storage Insights and Spectrum Control cannot list or download the required set of performance statistics data. The result is many gaps in the performance data, leading to errors with the performance monitoring tools and a lack of performance history.

The workaround is to remove the files associated with spare nodes or previous/updated hardware using the cleardumps command (or to clear the entire iostats directory with cleardumps).
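For example, a minimal sketch of the second option, clearing the entire iostats directory (a node ID or name can be appended to target a specific node):

  cleardumps -prefix /dumps/iostats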

This is a known issue that will be lifted in a future PTF. The fix can be tracked using APAR HU02403.

Introduced: 8.4.0.0

Systems with NPIV enabled that present storage to SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL) hosts running the ibmvfc driver on IBM Power can experience path loss or read-only file system events.

This is caused by issues within the ibmvfc driver and VIOS code.

Refer to this troubleshooting page for more information.

Introduced: n/a

If an update stalls or fails, contact IBM Support for further assistance.

Introduced: n/a
The following restrictions were valid in earlier releases, but have now been lifted:

Systems using an IP partnership on 8.4.x or earlier that then upgrade to 8.5.x or later may suffer a loss of access when removing the IP partnership.

This restriction has been resolved in 8.5.0.6 under APAR HU02513.

Introduced: 8.5.0.0

The GUI does not allow the expansion of HyperSwap volumes. Customers wishing to perform this operation should use the expandvolume CLI command.
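For example, a minimal sketch (the volume name and size are illustrative):

  expandvolume -size 100 -unit gb myHyperSwapVolume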

This restriction has been resolved in 8.5.0.6 under APAR HU02487.

Introduced: 8.5.0.0

The CLI command 'lsportip' was removed in 8.4.2.0 and replaced with a new command 'lsip'. This will impact interoperability with any tools that rely on lsportip.
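For example, scripts that parse the old command's output need only switch to the new command; a minimal sketch (the '-delim' option is common to both):

  # 8.4.1.x and earlier
  lsportip -delim ,

  # 8.4.2.0 and later
  lsip -delim ,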

This change prevents Veeam from working correctly with Spectrum Virtualize systems running 8.4.2 or higher, until Veeam releases a new version.

This issue has now been resolved, as Veeam Backup and Replication Version 12 no longer has this restriction.

Introduced: 8.4.2.0

3. Issues Resolved

This release contains all of the fixes included in the 8.4.0.0 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier / Link for additional Information / Resolved in
CVE-2023-1073 7161786 8.5.0.13
CVE-2023-45871 7161786 8.5.0.13
CVE-2023-6356 7161786 8.5.0.13
CVE-2023-6535 7161786 8.5.0.13
CVE-2023-6536 7161786 8.5.0.13
CVE-2023-1206 7161786 8.5.0.13
CVE-2023-5178 7161786 8.5.0.13
CVE-2024-2961 7161779 8.5.0.13
CVE-2023-50387 7161793 8.5.0.13
CVE-2023-50868 7161793 8.5.0.13
CVE-2020-28241 7161793 8.5.0.13
CVE-2023-4408 7161793 8.5.0.13
CVE-2023-48795 7154643 8.5.0.12
CVE-2024-20952 7156536 8.5.0.12
CVE-2024-20918 7156536 8.5.0.12
CVE-2024-20921 7156536 8.5.0.12
CVE-2024-20919 7156536 8.5.0.12
CVE-2024-20926 7156536 8.5.0.12
CVE-2024-20945 7156536 8.5.0.12
CVE-2023-33850 7156536 8.5.0.12
CVE-2024-23672 7156538 8.5.0.12
CVE-2024-24549 7156538 8.5.0.12
CVE-2023-44487 7156535 8.5.0.12
CVE-2023-1667 7156535 8.5.0.12
CVE-2023-2283 7156535 8.5.0.12
CVE-2023-50164 7114768 8.5.0.12
CVE-2023-46589 7114769 8.5.0.11
CVE-2023-45648 7114769 8.5.0.11
CVE-2023-42795 7114769 8.5.0.11
CVE-2024-21733 7114769 8.5.0.11
CVE-2023-22081 7114770 8.5.0.11
CVE-2023-22067 7114770 8.5.0.11
CVE-2023-5676 7114770 8.5.0.11
CVE-2023-43042 7064976 8.5.0.10
CVE-2023-34396 7065010 8.5.0.10
CVE-2023-21930 7065011 8.5.0.10
CVE-2023-21937 7065011 8.5.0.10
CVE-2023-21938 7065011 8.5.0.10
CVE-2023-27870 6985697 8.5.0.8
CVE-2023-30441 6987769 8.5.0.7
CVE-2022-21626 6858041 8.5.0.7
CVE-2022-1012 6858043 8.5.0.7
CVE-2021-45485 6858043 8.5.0.7
CVE-2021-45486 6858043 8.5.0.7
CVE-2022-43873 6858047 8.5.0.7
CVE-2022-42252 6858039 8.5.0.7
CVE-2022-43870 6858045 8.5.0.7
CVE-2022-0778 6622017 8.5.0.1
CVE-2021-35603 6622019 8.5.0.1
CVE-2021-35550 6622019 8.5.0.1
CVE-2021-38969 6584337 8.5.0.0
CVE-2021-42340 6541270 8.5.0.0
CVE-2021-29873 6497111 8.5.0.0
CVE-2020-10732 6497113 8.5.0.0
CVE-2020-10774 6497113 8.5.0.0
CVE-2021-33037 6497115 8.5.0.0

3.2 APARs Resolved

APAR / Affected Products / Severity / Description / Resolved in / Feature Tags
SVAPAR-116592 All HIPER If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non-V5000E or FS5000 system, it may repeatedly warmstart due to a lack of compression resources. 8.5.0.12 IP Replication
SVAPAR-132123 All HIPER VDisks can go offline after a T3 recovery when an expanding DRAID1 array causes I/O errors and data corruption. 8.5.0.12 RAID
HU02585 All Critical An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring. 8.5.0.12 Backend Storage
SVAPAR-115136 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, V7000 Critical Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot. 8.5.0.12 Drives
SVAPAR-128052 All Critical A node assert may occur if a host sends a login request to a node while the host is being removed from the cluster with the '-force' parameter. 8.5.0.12 Hosts, NVMe
SVAPAR-128379 All Critical When collecting the debug data from a 16Gb or 32Gb Fibre Channel adapter, node warmstarts may occur due to the firmware dump file exceeding the maximum size. 8.5.0.12 Reliability Availability Serviceability
SVAPAR-88887 FS9100, FS9200, FS9500 Critical Loss of access to data after replacing all boot drives in a system. 8.5.0.12 Drives, Reliability Availability Serviceability
HU02219 All High Importance Certain tier 1 flash drives report 'SCSI check condition: Aborted command' events. 8.5.0.12 Drives
SVAPAR-104250 All High Importance There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, causing the node to assert to clear the bad condition. 8.5.0.12 Hosts, NVMe
SVAPAR-108715 All High Importance The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. 8.5.0.12 Graphical User Interface
SVAPAR-110765 All High Importance In a 3-Site configuration, the config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter. 8.5.0.12 3-Site using HyperSwap or Metro Mirror
SVAPAR-111996 FS9500, SVC High Importance After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade. 8.5.0.12 Reliability Availability Serviceability, System Update
SVAPAR-127063 All High Importance Degraded Remote Copy performance on systems with multiple I/O groups running 8.5.0.11 or 8.6.0.3 after a node restarts. 8.5.0.12 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Performance
SVAPAR-127841 All High Importance A slow I/O resource leak may occur when using FlashCopy and the system is under high workload. This may cause a node warmstart to occur. 8.5.0.12 FlashCopy
SVAPAR-128228 All High Importance The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x. 8.5.0.12
SVAPAR-93054 All High Importance Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 recovery is performed. This affects all backend systems with FCM arrays. 8.5.0.12 Backend Storage
SVAPAR-93309 All High Importance A node may briefly go offline after a battery firmware update. 8.5.0.12 System Update
SVAPAR-99537 All High Importance If a HyperSwap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed. 8.5.0.12 Data Reduction Pools
HU02462 All Suggested A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data. 8.5.0.12 FlashCopy
HU02591 All Suggested Multiple node asserts can occur when running commands with the 'preferred node' filter during an upgrade to 8.5.0.0 and above. 8.5.0.12 Inter-node messaging
SVAPAR-111021 All Suggested Unable to load the resource page in the GUI if I/O group ID 0 does not have any nodes. 8.5.0.12 System Monitoring
SVAPAR-120399 All Suggested A host WWPN incorrectly shows as still being logged into the storage when it is not. 8.5.0.12 Reliability Availability Serviceability
SVAPAR-120610 All Suggested Excessive 'chfcmap' commands can result in multiple node warmstarts occurring. 8.5.0.12 FlashCopy
SVAPAR-120639 All Suggested The vulnerability scanner claims cookies were set without the HttpOnly flag. 8.5.0.12
SVAPAR-122411 All Suggested A node may assert when a vdisk has been expanded and the rehome process has not been made aware of the possible change in the number of regions it may have to rehome. 8.5.0.12 Data Reduction Pools
SVAPAR-123644 All Suggested A system with NVMe drives may falsely log an error indicating a flash drive has high write endurance usage. The error cannot be cleared. 8.5.0.12 Reliability Availability Serviceability
SVAPAR-126742 All Suggested A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. 8.5.0.12 Compression, Data Reduction Pools
SVAPAR-127908 All Suggested A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI, although it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI. 8.5.0.12 GUI Fix Procedure, Graphical User Interface, Host Cluster, Hosts, NVMe
SVAPAR-85640 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 Suggested If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E, due to incorrect policing. 8.5.0.12 Host Cluster
SVAPAR-85658 All Suggested When replacing a boot drive, the new drive needs to be synchronized with the existing drive. The command to do this appears to run and does not return an error, but the new drive does not actually get synchronized. 8.5.0.12 Reliability Availability Serviceability
SVAPAR-98611 All Suggested The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host. 8.5.0.12 Interoperability
SVAPAR-107547 All Critical If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. 8.5.0.11 Reliability Availability Serviceability
SVAPAR-107734 All Critical Issuing I/O to an incremental fcmap volume that is in a stopped state, but has recently been expanded, and that also has a partner fcmap, may cause the nodes to restart. 8.5.0.11 FlashCopy
SVAPAR-112107 FS9500, SVC Critical There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. 8.5.0.11 System Update
SVAPAR-112707 SVC Critical Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash. 8.5.0.11 Reliability Availability Serviceability
SVAPAR-110234 FS5000, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance A single node warmstart can occur due to Fibre Channel adapter resource contention during 'chpartnership -stop' or 'mkfcpartnership' actions. 8.5.0.11
SVAPAR-112525 All High Importance A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy. 8.5.0.11 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-117318 All High Importance A faulty SFP in a 32Gb Fibre Channel adapter may cause a single node warmstart, instead of reporting the port as failed. 8.5.0.11 Reliability Availability Serviceability
SVAPAR-108551 All Suggested An expired token in the GUI file upload process can cause the upgrade not to start automatically after the file is successfully uploaded. 8.5.0.11 System Update
SVAPAR-112711 All Suggested IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with the expected HTTP 401 message. 8.5.0.11 Graphical User Interface
SVAPAR-117179 All Suggested Snap data collection does not collect an error log if the superuser password requires a change. 8.5.0.11 Support Data Collection
SVAPAR-100127 All Critical The Service Assistant GUI node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. 8.5.0.10 Graphical User Interface
SVAPAR-104533 All Critical Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools. 8.5.0.10 Data Reduction Pools
SVAPAR-91860 All Critical If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data. 8.5.0.10 System Update
HU02539 All High Importance If an IP address is moved to a different port on a node, the old routing table entries do not get refreshed. Therefore, the IP address may be inaccessible through the new port. 8.5.0.10
HU02573 All High Importance HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node. 8.5.0.10 Fibre Channel, Hosts
SVAPAR-100162 All High Importance Some host vendors, such as Microsoft for Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, it causes a warmstart to occur. 8.5.0.10 Hosts
SVAPAR-100977 All High Importance When a zone containing NVMe devices is enabled, a node warmstart might occur. 8.5.0.10 NVMe
SVAPAR-105727 All High Importance An upgrade within the 8.5.0 release stream, from 8.5.0.5 or below to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade if volume mirroring is heavily utilised. 8.5.0.10 Volume Mirroring
SVAPAR-94686 All High Importance The GUI can become slow and unresponsive due to a steady stream of configuration updates, such as 'svcinfo' queries for the latest configuration data. 8.5.0.10 Graphical User Interface
SVAPAR-99175 All High Importance A node may warmstart due to an invalid queuing mechanism in cache. This can cause I/O in cache to be in the same processing queue more than once. 8.5.0.10 Cache
SVAPAR-99273 All High Importance If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart. 8.5.0.10
HU02456 FS5100, FS5200, FS7200, FS9200, V7000 Suggested Unseating an NVMe drive after an automanage failure can cause a node to warmstart. 8.5.0.10 Drives
SVAPAR-100958 All Suggested A single FCM may incorrectly report multiple medium errors for the same LBA. 8.5.0.10 RAID
SVAPAR-107595 FS7300, FS9100, FS9200, FS9500, SVC Suggested Improve maximum throughput for Global Mirror, Metro Mirror and HyperSwap by providing more inter-node messaging resources. 8.5.0.10 Global Mirror, HyperSwap, Metro Mirror, Performance
SVAPAR-109289 All Suggested A buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign-On (SSO) client secrets. 8.5.0.10 Backend Storage
SVAPAR-98576 All Suggested Customers cannot edit certain properties of a FlashCopy mapping via the GUI FlashCopy mappings panel, as the edit modal does not appear. 8.5.0.10 FlashCopy, Graphical User Interface
SVAPAR-94179 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, V7000 HIPER Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However, it is possible for this to sometimes result in a reboot on the partner node. 8.5.0.9 Reliability Availability Serviceability
SVAPAR-98567 FS5000 HIPER In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix. 8.5.0.9 Encryption
SVAPAR-98672 All Critical VMware host crashes on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled. 8.5.0.9 NVMe
SVAPAR-98971 All Suggested The GUI may show repeated invalid pop-ups stating that configuration node failover has occurred. 8.5.0.9 Graphical User Interface
SVAPAR-89694 All HIPER Kernel panics might occur on a subset of Spectrum Virtualize hardware platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 or 8.5.3.1 when taking a snap. For more details refer to this Flash. 8.5.0.8
HU02586 All Critical When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly. 8.5.0.8 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-84116 All Critical The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss, which will be detected by the cluster when the data is next accessed. 8.5.0.8 Data Reduction Pools, Deduplication
SVAPAR-87729 All Critical After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts. 8.5.0.8 Call Home
SVAPAR-89692 FS9500, SVC Critical Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health. This will result in node errors, and potentially nodes going offline if both batteries are affected. 8.5.0.8
SVAPAR-90438 All Critical A conflict of host I/O on one node with an array resynchronisation task on the partner node can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache. 8.5.0.8 Distributed RAID
HU02565 All High Importance A node warmstart can occur when generating data compression savings data for 'lsvdiskanalysis'. 8.5.0.8
SVAPAR-82950 FS9500, SVC High Importance If a FlashSystem 9500 or SV3 node had a USB flash drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue. 8.5.0.8 Reliability Availability Serviceability
SVAPAR-85980 All High Importance iSCSI response times may increase on some systems with 25Gb Ethernet adapters after upgrade to 8.4.0.9 or 8.5.x. 8.5.0.8 Performance, System Update
SVAPAR-90395 FS9500, SVC High Importance FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources. 8.5.0.8 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
HU02594 All Suggested Initiating a drive firmware update via the management user interface for one drive class can prompt all drives to be updated. 8.5.0.8 Drives, System Update
SVAPAR-89296 All Suggested Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade. 8.5.0.8 EasyTier
HU02572 All HIPER When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade. 8.5.0.7 Drives
HU01782 All High Importance A node warmstart may occur due to a potentially bad SAS hardware component on the system, such as a SAS cable, SAS expander or SAS HIC. 8.5.0.7 Drives
HU02555 All High Importance A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured. 8.5.0.7 LDAP
HU02557 All High Importance Systems may be unable to upgrade from pre-8.5.0 to 8.5.0 due to a previous node upgrade and certain DRP conditions existing. 8.5.0.7 Data Reduction Pools, System Update
SVAPAR-83290 FS5000 High Importance An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. 8.5.0.7
SVAPAR-84305 All High Importance A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameters. 8.5.0.7 System Monitoring
SVAPAR-84331 All High Importance A node may warmstart when the 'lsnvmefabric -remotenqn' command is run. 8.5.0.7 NVMe
SVAPAR-85396 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 High Importance Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem. 8.5.0.7 Drives
SVAPAR-86035 All High Importance Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart. 8.5.0.7 Data Reduction Pools
HU02553 FS9500, SVC Suggested Remote Copy relationships may not correctly display the name of the vdisk on the remote cluster. 8.5.0.7 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02579 All Suggested The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable. 8.5.0.7 Graphical User Interface, iSCSI
SVAPAR-84099 All Suggested An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart. 8.5.0.7 Hosts, NVMe
HU02475 All HIPER A power outage can cause reboots on nodes with 25Gb Ethernet adapters, necessitating a T3 recovery. 8.5.0.6 Reliability Availability Serviceability
HU02420 All Critical During an array copyback, it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access. 8.5.0.6 RAID
HU02449 All Critical Due to a timing issue, it is possible (but very unlikely) that maintenance on a SAS 92F/92G expansion enclosure could cause multiple node warmstarts, leading to a loss of access. 8.5.0.6 Backend Storage
HU02513 All Critical When one side of a cluster has been upgraded from 8.4.2 to either 8.5.0 or 8.5.2 while the other side is still running 8.4.2, running either the 'mkippartnership' or 'rmippartnership' command from the cluster running 8.5.0 or 8.5.2 can cause an iplink node warmstart. 8.5.0.6 3-Site using HyperSwap or Metro Mirror, Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02519 & HU02520 All Critical Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession. 8.5.0.6 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
HU02540 All Critical Deleting a HyperSwap volume copy with dependent FlashCopy mappings can trigger repeated node warmstarts. 8.5.0.6 FlashCopy, HyperSwap
HU02541 All Critical In some circumstances, the deduplication replay process on a Data Reduction Pool can become stuck. During this process, I/O to the pool is quiesced and must wait for the replay to complete. Because it does not complete, I/O to the entire storage pool hangs, which can eventually lead to a loss of access to data. 8.5.0.6 Data Reduction Pools, Deduplication
HU02542 All Critical On systems that are running 8.4.2 or 8.5.0, deleting a HyperSwap volume, or HyperSwap volume copy, that has safeguarded copy snapshots configured can cause a T2 recovery, causing loss of access to data. 8.5.0.6 HyperSwap, Safeguarded Copy & Safeguarded Snapshots
HU02551 All Critical When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints. 8.5.0.6 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02561 All Critical If there is a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur. 8.5.0.6 FlashCopy
HU02563 All Critical Improve DIMM slot identification for memory errors. 8.5.0.6 Reliability Availability Serviceability
IT41088 FS5000, FS5100, FS5200, V5000, V5100 Critical Systems with low memory that have a large number of RAID arrays resyncing can run out of RAID rebuild control blocks. 8.5.0.6 RAID
HU02466 All High Importance An issue in the handling of drive failures can result in multiple node warmstarts. 8.5.0.6 RAID
HU02490 FS9500 High Importance Upon first or subsequent boots of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded. 8.5.0.6 Reliability Availability Serviceability
HU02507 All High Importance A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts. 8.5.0.6 Host Cluster, Hosts
HU02511 All High Importance Code version 8.5.0 includes a change in the driver setting for the 25Gb Ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms. 8.5.0.6 Host Cluster, Hosts, SCSI Unmap, iSCSI
HU02522 All High Importance When upgrading from 8.4.1 or lower to a level that uses IP portsets (8.4.2 or higher), there is an issue when the port ID on each node has a different remote copy use. 8.5.0.6 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02525 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC, V7000 High Importance Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss. 8.5.0.6 Hosts, iSCSI
HU02530 All High Importance Upgrades from 8.4.2 or 8.5 fail to start on some platforms. 8.5.0.6 System Update
HU02534 All High Importance When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes. 8.5.0.6 Reliability Availability Serviceability
HU02549 All High Importance When upgrading from a lower level to 8.5 or higher for the first time, an unexpected node warmstart may occur, which can lead to a stalled upgrade. 8.5.0.6 System Update
HU02558 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC, V7000 High Importance A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur. 8.5.0.6 Compression
HU02562 All High Importance A node can warmstart when a 32Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations. 8.5.0.6
IT41447 All High Importance When removing the DNS server configuration, a node may discover unexpected metadata and warmstart. 8.5.0.6 Reliability Availability Serviceability
IT41835 All High Importance A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type. 8.5.0.6 Drives
HU02320 All Suggested A battery may fail to perform a recondition. This is identified when 'lsenclosurebattery' shows the 'last_recondition_timestamp' as an empty field on the impacted node. 8.5.0.6
HU02372 FS9100, SVC, V5000, V5100, V7000 Suggested Host SAS port 4 is missing from the GUI view on some systems. 8.5.0.6 Graphical User Interface
HU02463 All Suggested LDAP user accounts can become locked out because of multiple failed login attempts. 8.5.0.6 Graphical User Interface, LDAP
HU02468 All Suggested The lsvdisk 'preferred_node_id' filter does not work correctly. 8.5.0.6 Command Line Interface
HU02474 All Suggested An SFP failure can cause a node warmstart. 8.5.0.6 Reliability Availability Serviceability
HU02487 All Suggested Problems expanding the size of a volume using the GUI. 8.5.0.6 Graphical User Interface
HU02508 All Suggested The mkippartnership CLI command does not allow a portset with a space in its name as a parameter. 8.5.0.6 Command Line Interface
HU02528 All Suggested When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values. 8.5.0.6 Reliability Availability Serviceability
HU02543 All Suggested After upgrading to 8.5.0, the 'lshost -delim' command shows hosts in an offline state, while 'lshost' shows them online. 8.5.0.6 Command Line Interface
HU02559 All Suggested A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information. 8.5.0.6 Graphical User Interface
HU02560 All Suggested When creating a SAS host using the GUI, a portset is incorrectly added. The command fails with CMMVC9777E, as the portset parameter is not supported for the given type of host. 8.5.0.6 Graphical User Interface, Hosts
HU02564 All Suggested The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct. 8.5.0.6 Distributed RAID
IT42403 All Suggested A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8TiB; however, it was implemented as 8TB. A 7.3TiB drive has a capacity of 8.02TB and as a result was incorrectly prevented from use in RAID5. 8.5.0.6 Distributed RAID, Drives, RAID
SVAPAR-93987 All Suggested A timeout may cause a single node warmstart if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets. 8.5.0.6 FlashCopy
HU02500 All Critical If a volume in a FlashCopy mapping is deleted, and the deletion fails (for example because the user does not have the correct permissions to delete that volume), node warmstarts can occur, leading to loss of access. 8.5.0.5 FlashCopy
HU02502 All Critical On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access. 8.5.0.5 FlashCopy
IT41173 FS5200 Critical If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. 8.5.0.5 Reliability Availability Serviceability
HU02339 All High Importance Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data. 8.5.0.5 Hosts, Interoperability
HU02464 All High Importance An issue in the processing of NVMe host logouts can cause multiple node warmstarts. 8.5.0.5 Hosts, NVMe
HU02479 All High Importance If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur. 8.5.0.5 Hosts
HU02492 SVC High Importance Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected. 8.5.0.5 Reliability Availability Serviceability
HU02497 All High Importance A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts. 8.5.0.5 Hosts, Interoperability
HU02512 FS5000 High Importance An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts. 8.5.0.5 Hosts
IT41191 All High Importance If a REST API client authenticates as an LDAP user, a node warmstart can occur. 8.5.0.5 REST API
HU02484 All Suggested The GUI does not allow expansion of DRP thin or compressed volumes. 8.5.0.5 Data Reduction Pools, Graphical User Interface
HU02491 All Suggested On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur. 8.5.0.5 Global Mirror With Change Volumes
HU02494 All Suggested A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events. 8.5.0.5 Reliability Availability Serviceability
HU02498 All Suggested If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load. 8.5.0.5 Graphical User Interface
HU02501 All Suggested If an internal I/O timeout occurs in a RAID array, a node warmstart can occur. 8.5.0.5 RAID
HU02503 All Suggested The Date/Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI. 8.5.0.5 Graphical User Interface
HU02504 All Suggested The Date/Time panel can display an incorrect timezone and default to manual time setting rather than NTP. 8.5.0.5 Graphical User Interface
HU02505 All Suggested A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running. 8.5.0.5 Data Reduction Pools
HU02509 All Suggested Upgrade to v8.5 can cause a single node warmstart if nodes previously underwent a memory upgrade while DRP was in use. 8.5.0.5 Data Reduction Pools
HU02514 All Suggested Firmware upgrade may fail for certain drive types, with the error message 'CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file'. 8.5.0.5 Drives
HU02515 FS9500 Suggested Fan speed on FlashSystem 9500 can be higher than expected if a high drive temperature is detected. 8.5.0.5 Drives
HU02506 All Critical On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access. 8.5.0.4 Hosts
HU02441 & HU02486 All Critical Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts. 8.5.0.3 Data Reduction Pools, Safeguarded Copy & Safeguarded Snapshots
HU02488 All High Importance Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost). 8.5.0.3 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02453 All Suggested It may not be possible to connect to the GUI or CLI without a restart of the Tomcat server. 8.5.0.2 Command Line Interface, Graphical User Interface
IT40059 FS5200, FS7200, FS7300, FS9200, FS9500 Suggested Port-to-node metrics can appear inflated due to an issue in performance statistics aggregation. 8.5.0.2 Inter-node messaging, System Monitoring
HU02261 All HIPER A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash. 8.5.0.0 Data Reduction Pools
HU02277 All HIPER RAID parity scrubbing can become stalled, causing an accumulation of media errors, leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash. 8.5.0.0 RAID
HU02296 All HIPER The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data. 8.5.0.0 Storage Virtualisation
HU02310 All HIPER Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, the target may contain invalid data. 8.5.0.0 Data Reduction Pools, FlashCopy, Global Mirror With Change Volumes
HU02312 All HIPER Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash. 8.5.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02313 FS5100, FS7200, FS9100, FS9200, V5100, V7000 HIPER When a FlashCore Module (FCM) fails, there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash. 8.5.0.0 Drives
HU02338 All HIPER An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely, providing an incomplete target image. 8.5.0.0 FlashCopy
HU02340 All HIPER High replication workloads can cause multiple warmstarts, with a loss of access at the partner cluster. 8.5.0.0 IP Replication
HU02384 SVC HIPER An inter-node message queue can become stalled, leading to an I/O timeout warmstart and temporary loss of access. 8.5.0.0 Reliability Availability Serviceability
HU02400 All HIPER A problem in the virtualization component of the system can cause a migration I/O to be submitted in an incorrect context, resulting in a node warmstart. In some cases it is possible that this I/O has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area. 8.5.0.0 Storage Virtualisation
HU02418 All HIPER During a DRAID array rebuild, data can be written to an incorrect location. For more details refer to this Flash. 8.5.0.0 Distributed RAID, RAID
DT112601 All Critical Deleting an image-mode mounted source volume while migration is ongoing can trigger a Tier 2 recovery. 8.5.0.0 Storage Virtualisation
HU02226 All Critical Due to an issue in DRP, a node can repeatedly warmstart whilst rejoining a cluster. 8.5.0.0 Data Reduction Pools
HU02282 All Critical After a code upgrade, the config node may exhibit high write response times. In exceptionally rare circumstances, an MDisk group may be taken offline. 8.5.0.0 Cache
HU02295 SVC Critical When upgrading from v8.2.1 or v8.3 in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery. 8.5.0.0 System Update
HU02309 All Critical Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur, with the possibility of lease expiries. 8.5.0.0 Global Mirror With Change Volumes
HU02315 All Critical Failover for VMware iSER hosts may pause I/O for more than 120 seconds. 8.5.0.0 Hosts
HU02321 All Critical On systems relying on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication, leading to lease expiries. 8.5.0.0 iSCSI
HU02328 FS5100, FS7200, FS9100, FS9200, V5100, V7000 Critical Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry. 8.5.0.0 NVMe
HU02342 All Critical Occasionally, when an offline drive returns to the online state later than its peers in the same RAID array, there can be multiple node warmstarts that send nodes into a service state. 8.5.0.0 RAID
HU02349 All Critical Using an incorrect FlashCopy consistency group ID to stop a consistency group will result in a T2 recovery if the incorrect ID is greater than 501. 8.5.0.0 FlashCopy
HU02368 All Critical When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts, with the possibility of a loss of access. 8.5.0.0 HyperSwap
HU02373 All Critical An incorrect compression flag in metadata can take a DRP offline. 8.5.0.0 Data Reduction Pools
HU02374 SVC, V5000, V7000 Critical Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports. 8.5.0.0 Hosts
HU02378 All Critical Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts, with the possibility of a loss of access. 8.5.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02393 All Critical Automatic resize of compressed/thin volumes may fail, causing warmstarts on both nodes in an I/O group. 8.5.0.0 Storage Virtualisation
HU02397 All Critical A Data Reduction Pool with deduplication enabled can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline. 8.5.0.0 Data Reduction Pools
HU02401 All Critical EasyTier can move extents between identical mdisks until one runs out of space. 8.5.0.0 EasyTier
HU02402 All Critical The remote support feature may use more memory than expected, causing a temporary loss of access. 8.5.0.0 Support Remote Assist
HU02406 All Critical An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade), with the potential for a loss of access. For more details refer to this Flash. 8.5.0.0 Interoperability
HU02409 All Critical If the rmhost command is executed with '-force' for a Microsoft Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive. 8.5.0.0 Hosts, iSCSI
HU02410 SVC Critical A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery. 8.5.0.0 Hot Spare Node
HU02414 All Critical Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily. 8.5.0.0 Data Reduction Pools
HU02415 All Critical An issue in garbage collection I/O flow logic can take a pool offline temporarily. 8.5.0.0 Data Reduction Pools
HU02421 All Critical A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access. 8.5.0.0 Reliability Availability Serviceability
HU02423 All Critical Volume copies may be taken offline even though there is sufficient free capacity. 8.5.0.0 Data Reduction Pools
HU02428 All Critical Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery. 8.5.0.0 Command Line Interface, Global Mirror With Change Volumes
HU02429 All Critical The system can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI. 8.5.0.0 System Monitoring
HU02430 All Critical Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state. 8.5.0.0 FlashCopy
HU02434 All Critical An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts, taking a cluster offline. 8.5.0.0 FlashCopy
HU02435 All Critical The removal of deduplicated volumes can cause repeated node warmstarts, with the possibility of offline Data Reduction Pools. 8.5.0.0 Data Reduction Pools
HU02440 All Critical Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery. 8.5.0.0 Command Line Interface, Storage Virtualisation
HU02442 All Critical Issuing an lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery. 8.5.0.0 Command Line Interface
HU02455 All Critical After converting a system from 3-site to 2-site, a timing window issue can trigger a cluster Tier 2 recovery. 8.5.0.0 3-Site using HyperSwap or Metro Mirror
HU02088 All High Importance There can be multiple node warmstarts when no mailservers are configured (show details) 8.5.0.0 System Monitoring
HU02127 All High Importance 32Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port (show details) 8.5.0.0 Performance
HU02201 & HU02221 All High Importance Shortly after upgrading drive firmware, specific drive models can fail due to Too many long IOs to drive for too long errors (show details) 8.5.0.0 Drives
HU02227 FS7200, FS9100, FS9200, SVC, V5100, V7000 High Importance Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline (show details) 8.5.0.0 Compression
HU02273 All High Importance When write I/O workload to a HyperSwap volume site reaches a certain thresholds, the system should switch the primary and secondary copies. There are circumstances where this will not happen (show details) 8.5.0.0 HyperSwap
HU02290 All High Importance An issue in the virtualization component can divide up IO resources incorrectly leading to adverse impact on queuing times for mdisks CPU cores leading to performance impact (show details) 8.5.0.0 Storage Virtualisation
HU02297 All High Importance Error handling for a failing backend controller can lead to multiple warmstarts (show details) 8.5.0.0 Backend Storage
HU02300 All High Importance Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM) (show details) 8.5.0.0 System Monitoring
HU02301 SVC High Importance iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts (show details) 8.5.0.0 iSCSI
HU02304 FS9100, V5100, V7000 High Importance Some RAID operations for certain NVMe drives may cause adverse I/O performance (show details) 8.5.0.0 RAID
HU02311 All High Importance An issue in volume copy flushing may lead to higher than expected write cache delays (show details) 8.5.0.0 Cache
HU02317 All High Importance A DRAID expansion can stall shortly after it is initiated (show details) 8.5.0.0 Distributed RAID
HU02319 All High Importance The GUI can become unresponsive (show details) 8.5.0.0 Graphical User Interface
HU02326 SVC High Importance Delays in passing messages between nodes in an I/O group can adversely impact write performance (show details) 8.5.0.0 Performance
HU02343 All High Importance For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This would reduce the potential IO throughput and can cause high read/write backend queue time on the cluster impacting front end latency for hosts (show details) 8.5.0.0 Backend Storage
HU02345 All High Importance When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance (show details) 8.5.0.0 HyperSwap, Metro Mirror
HU02347 All High Importance An issue in the handling of boot drive failure can lead to the partner drive also being failed (show details) 8.5.0.0 Reliability Availability Serviceability
HU02360 All High Importance Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash (show details) 8.5.0.0 System Monitoring
HU02362 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 High Importance When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted (show details) 8.5.0.0 RAID
HU02376 All High Importance FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes (show details) 8.5.0.0 FlashCopy
HU02388 FS5000, V5000 High Importance GUI can hang randomly due to an out of memory issue after running any task (show details) 8.5.0.0 Graphical User Interface
HU02392 All High Importance Validation in the Upload Support Package feature will reject new case number formats in the PMR field (show details) 8.5.0.0 Support Data Collection
HU02417 All High Importance Restoring a reverse FlashCopy mapping to a volume that is also the source of an incremental FlashCopy mapping can take longer than expected (show details) 8.5.0.0 FlashCopy
HU02422 All High Importance GUI performance can be degraded when displaying large numbers of volumes or other objects (show details) 8.5.0.0 Graphical User Interface
HU02438 All High Importance Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact (show details) 8.5.0.0 Cache
HU02439 All High Importance An IP partnership between a pre-v8.4.2 system and v8.4.2 or later system may be disconnected because of a keepalive timeout (show details) 8.5.0.0 IP Replication
HU02460 All High Importance Multiple node warmstarts triggered by ports on the 32G fibre channel adapter failing (show details) 8.5.0.0 Hosts
IT38015 All High Importance During RAID rebuild or copyback on systems with 16gb or less of memory, cache handling can lead to a deadlock which results in timeouts (show details) 8.5.0.0 RAID
HU01209 All Suggested It is possible for the Fibre Channel driver to be offered an unsupported length of data, resulting in a node warmstart 8.5.0.0 Storage Virtualisation
HU02095 All Suggested The effective_used_capacity field of the lsarray/lsmdisk commands should be empty for RAID arrays that do not contain overprovisioned drives. However, this field can sometimes be zero when it should be empty, causing incorrect provisioned capacity reporting in the GUI (see the first example after this table) 8.5.0.0 Graphical User Interface
HU02171 All Suggested The timezone for Iceland is set incorrectly 8.5.0.0 Support Data Collection
HU02174 All Suggested A timing window issue related to remote copy memory allocation can result in a node warmstart 8.5.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02243 All Suggested The DMP for the 1670 event (replace CMOS) will shut down a node without confirmation from the user 8.5.0.0 GUI Fix Procedure
HU02263 All Suggested The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings, rather than the thin-provisioning savings only 8.5.0.0 Data Reduction Pools
HU02274 All Suggested Due to a timing issue in how events are handled, an active quorum loss and re-acquisition cycle can be triggered with a 3124 error 8.5.0.0 Quorum
HU02280 All Suggested Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown 8.5.0.0 System Monitoring
HU02291 All Suggested Internal counters for upper cache stage/destage I/O rates and latencies are not collected, so zeroes are usually displayed 8.5.0.0 Cache, System Monitoring
HU02292 & HU02308 All Suggested The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart 8.5.0.0 Global Mirror
HU02303 & HU02305 All Suggested A configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an ID greater than 256 (see the second example after this table) 8.5.0.0 Hosts
HU02306 All Suggested An offline host port can still be shown as active in lsfabric, and the associated host can be shown as online despite being offline 8.5.0.0 Hosts
HU02325 All Suggested Tier 2 and Tier 3 recoveries can fail due to node warmstarts 8.5.0.0 Reliability Availability Serviceability
HU02331 All Suggested Due to a threshold issue, error code 3400 may appear too often in the event log 8.5.0.0 Compression
HU02332 & HU02336 All Suggested When an I/O with invalid or inconsistent SCSI data, but a good checksum, is received from a host, it may cause a node warmstart 8.5.0.0 Hosts
HU02346 All Suggested A mismatch between the LBA stored by the snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart 8.5.0.0 Thin Provisioning
HU02366 All Suggested Slow internal resource reclamation by the RAID component can cause a node warmstart 8.5.0.0 RAID
HU02367 All Suggested An issue with how RAID handles drive failures may lead to a node warmstart 8.5.0.0 RAID
HU02375 All Suggested An issue in how the GUI handles volume data can adversely impact its responsiveness 8.5.0.0 Graphical User Interface
HU02381 All Suggested When the proxy server password is changed to one with more than 40 characters, the config node will warmstart 8.5.0.0 Command Line Interface
HU02382 FS5100, FS7200, FS9100, FS9200, V5100, V7000 Suggested A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10-second delay when each node unpends (e.g. during an upgrade) 8.5.0.0 System Update
HU02383 FS5100, FS7200, FS9100, FS9200, V7000 Suggested An additional 20-second I/O delay can occur when a system update commits 8.5.0.0 System Update
HU02385 All Suggested Unexpected emails from the Inventory Script can be found on the mail server 8.5.0.0 System Monitoring
HU02386 FS5100, FS7200, FS9100, FS9200, V7000 Suggested The enclosure fault LED can remain on due to a race condition when the location LED state is changed 8.5.0.0 System Monitoring
HU02387 All Suggested When using the GUI, the maximum Data Reduction Pools limit incorrectly includes child pools 8.5.0.0 Data Reduction Pools
HU02391 All Suggested An issue with how websocket connections are handled can cause the GUI to become unresponsive, requiring a restart of the Tomcat server 8.5.0.0 Graphical User Interface
HU02405 FS5200 Suggested An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros 8.5.0.0 Inter-node messaging
HU02411 FS5100, FS7200, FS9100, FS9200, V5100, V7000 Suggested An issue in the NVMe drive presence checking can result in a node warmstart 8.5.0.0 NVMe
HU02416 All Suggested A timing window issue in DRP can cause a valid condition to be deemed invalid, triggering a single node warmstart 8.5.0.0 Data Reduction Pools
HU02419 All Suggested During creation of a drive FRU ID, the resulting unique number can contain a space character, which can cause CLI commands that return this value to present it as a truncated string 8.5.0.0 Command Line Interface, Drives
HU02425 All Suggested An issue in the handling of internal messages, when the system has a high I/O workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition. 8.5.0.0 FlashCopy
HU02426 All Suggested Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported then the system will be unable to send email alerts 8.5.0.0 System Monitoring
HU02437 All Suggested Error 2700 is not reported in the Event Log when an incorrect NTP server IP address is entered 8.5.0.0 System Monitoring
HU02443 All Suggested An inefficiency in the RAID code that processes requests to free memory can cause the request to time out, leading to a node warmstart 8.5.0.0 RAID
HU02444 All Suggested Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node 8.5.0.0 Hosts, iSCSI
HU02445 All Suggested When attempting to expand a volume, if the volume size is greater than 1TB, the GUI may not display the expansion pop-up window 8.5.0.0 Graphical User Interface
HU02448 All Suggested IP Replication statistics displayed in the GUI and XML can be incorrect 8.5.0.0 System Monitoring
HU02450 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 Suggested A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart 8.5.0.0 Hosts
HU02452 FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 Suggested An issue in NVMe I/O write functionality can cause a single node warmstart 8.5.0.0 NVMe
HU02454 All Suggested Large numbers of 2251 errors are recorded in the Event Log, even though LDAP appears to be working 8.5.0.0 LDAP
HU02461 All Suggested Livedump collection can fail multiple times 8.5.0.0 Support Data Collection
HU02593 All Suggested An NVMe drive may incorrectly report end of life due to flash degradation 8.5.0.0 Drives
IT33996 All Suggested An issue in RAID, where unreserved resources fail to be freed up, can result in a node warmstart 8.5.0.0 RAID
IT34949 All Suggested lsnodevpd may show DIMM information in the wrong positions 8.5.0.0 Command Line Interface, Graphical User Interface
IT34958 All Suggested During a system update, a node returning to the cluster after upgrade may warmstart 8.5.0.0 System Update
IT37654 All Suggested When creating a new encrypted array, the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear, preventing array creation 8.5.0.0 Encryption
IT38858 All Suggested The Enable USB Encryption wizard cannot be resumed via the GUI; error CMMVC9231E is displayed 8.5.0.0 Graphical User Interface
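
First example (HU02095): the incorrect effective_used_capacity reporting could be observed from a management host as in the minimal sketch below. Only the lsarray command and the effective_used_capacity field are taken from the APAR text; the cluster name "cluster1", the superuser account and array ID 0 are hypothetical placeholders:

  # Query the detailed view of array 0 over SSH and inspect the affected field
  # ("cluster1" and the array ID are placeholders for your own system)
  ssh superuser@cluster1 lsarray 0 | grep effective_used_capacity
  # On affected systems this may show a zero value (e.g. 0.00MB) for an array
  # with no overprovisioned drives; after the fix the field is blank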
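Second example (HU02303 & HU02305): a hedged sketch of the trigger condition. Only the mkhostcluster command and its -ignoreseedvolume option are quoted from the APAR text; the host cluster name, seed host and volume name are hypothetical, and the full command syntax (including the seed host parameter) should be checked against the product documentation:

  # Creating a host cluster while ignoring a seed volume whose volume ID is
  # greater than 256 could trigger a configuration node warmstart before this fix
  mkhostcluster -name hc0 -seedfromhost host0 -ignoreseedvolume vol300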

4. Useful Links

  • Product Documentation
  • Update Matrices, including detailed build version numbers
  • Support Information pages providing links to the following information:
      • Interoperability information
      • Product documentation
      • Limitations and restrictions, including maximum configuration limits
  • Spectrum Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
  • Software Upgrade Test Utility
  • Software Upgrade Planning