Release Note for systems built with IBM Storage Virtualize


This is the release note for the 8.6.0 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 8.6.0.0 and 8.6.0.7. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 7 April 2025

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.6.0.3 PTF release:

The following new features have been introduced in the 8.6.0 release:

The following new features have been introduced in the Non-LTS 8.5.4 release:

The following new features have been introduced in the Non-LTS 8.5.3 release:

The following new features have been introduced in the Non-LTS 8.5.2.1 release:

The following new features have been introduced in the Non-LTS 8.5.2 release:

The following new features have been introduced in the Non-LTS 8.5.1 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Details / Introduced in

This release includes updated battery firmware that improves both short-term and long-term battery reliability. After the firmware update is complete, there is a small chance that one or more batteries will log an error to indicate they need to be replaced. This error does not cause the battery to go offline, and it does not affect the operation of the system. Open a support ticket for battery replacement if you see this error.

8.6.0.7

The three-site orchestrator is not compatible with the new SSH security level 4.

8.6.0.3

Systems using asynchronous policy-based replication should upgrade to 8.6.0.2 or later.

8.6.0.2

IBM Storage Virtualize for Public Cloud supports policy-based replication on the following platforms only:

  • Amazon Web Services (AWS) - c5.18xlarge platform only
  • Azure - D32 and D64 platforms only
8.6.0.0

IBM Storage Virtualize for Public Cloud only supports integration with IBM Storage Insights by using Call Home with cloud services for systems running on the Azure D64 platform.

8.6.0.0

Upgrade from Spectrum Virtualize for Public Cloud 8.5.4.0 to Storage Virtualize for Public Cloud 8.6.0 is not supported on Amazon Web Services (AWS).

8.6.0.0
The following restrictions were valid but have now been lifted:

Due to a known issue that could occur following a cluster outage while a DRAID1 array was expanding, expansion of DRAID1 arrays was not supported on 8.4.0 and higher.

This restriction has now been lifted in 8.6.0.4 by SVAPAR-132123.

8.4.0.0

IBM Spectrum Virtualize for Public Cloud on Amazon Web Services (AWS) does not support patch installation using the 'applysoftware' or 'installsoftware' command-line interface commands.

8.5.4.0

For SAN Volume Controller SA2 nodes only, upgrading to 8.6.0.0 is currently not supported if a RoCE 25Gb adapter is installed.

This restriction has now been lifted in 8.6.0.1 under SVAPAR-98128.

8.6.0.0

If an external storage controller has over-provisioned storage, then upgrade from releases earlier than 8.6.x is blocked.

This restriction has now been lifted in 8.6.0.1 by APAR SVAPAR-98893.

8.6.0.0

For FS9500 systems only, upgrading to 8.6.0.0 is currently not supported with VMware Virtual Volumes (vVols) configured.

This restriction has now been lifted in 8.6.0.1

8.6.0.0

Upgrade to 8.5.1 and higher is not currently supported on systems with compressed volume copies in Data Reduction Pools, to avoid SVAPAR-105430.

On a small proportion of systems, this can cause a node warmstart when specific data patterns are written to compressed volumes.

This restriction has now been lifted in 8.6.0.2.

8.5.1.0

Systems with RoCE adapters cannot have an MTU greater than 1500 on 8.6.0.0 or later. The workaround is to reduce the MTU to 1500.

This restriction has now been partially lifted in 8.6.0.2.

Systems with RoCE adapters using the iSCSI protocol can set the MTU to 9000. Systems using the NVMe-TCP/RDMA protocols remain restricted to an MTU of 1500.
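As a sketch of how the MTU workaround above can be applied, the port MTU is normally changed with the 'cfgportip' CLI command. The node name and port ID below are placeholders for illustration; check your own port configuration with 'lsportip' before making changes.

```shell
# Sketch only: reduce the MTU of Ethernet port 1 on node1 to 1500
# to comply with the RoCE restriction on 8.6.0.0 and later.
cfgportip -node node1 -mtu 1500 1

# After upgrading to 8.6.0.2 or later, ports used only for iSCSI may be
# returned to 9000; ports used for NVMe-TCP/RDMA must remain at 1500.
cfgportip -node node1 -mtu 9000 1
```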

8.6.0.0

Upgrade from 8.5.4 or earlier is currently not supported on nodes with 32GB or less RAM and 25Gb RoCE adapters, due to SVAPAR-104159.

This restriction has now been lifted in 8.6.0.2.

8.6.0.0

Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8. Attempting to enable RSA will fail with: CMMVC8292E The command failed because the feature is not supported on this platform.

This restriction has now been lifted in 8.6.0.2.

8.6.0.0

If Veeam 12.1 was used with Storage Virtualize 8.5.1 or later, and the Veeam user was in an ownership group, this could cause node warmstarts due to SVAPAR-138214.

This restriction has now been lifted in 8.6.0.6.

8.5.1.0

3. Issues Resolved

This release contains all of the fixes included in the 8.5.0.0 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier / Link for additional Information / Resolved in
CVE-2025-0159 7184182 8.6.0.6
CVE-2025-0160 7184182 8.6.0.6
CVE-2024-21235 7181926 8.6.0.6
CVE-2024-21217 7181926 8.6.0.6
CVE-2024-21210 7181926 8.6.0.6
CVE-2024-21208 7181926 8.6.0.6
CVE-2024-10917 7181926 8.6.0.6
CVE-2023-29483 7181927 8.6.0.6
CVE-2024-1737 7181928 8.6.0.6
CVE-2024-1975 7181928 8.6.0.6
CVE-2023-52881 7181929 8.6.0.6
CVE-2024-21131 7166856 8.6.0.5
CVE-2023-1073 7161786 8.6.0.5
CVE-2023-45871 7161786 8.6.0.5
CVE-2023-6356 7161786 8.6.0.5
CVE-2023-6535 7161786 8.6.0.5
CVE-2023-6536 7161786 8.6.0.5
CVE-2023-1206 7161786 8.6.0.5
CVE-2023-5178 7161786 8.6.0.5
CVE-2024-2961 7161779 8.6.0.5
CVE-2023-50387 7161793 8.6.0.5
CVE-2023-50868 7161793 8.6.0.5
CVE-2020-28241 7161793 8.6.0.5
CVE-2023-4408 7161793 8.6.0.5
CVE-2023-48795 7154643 8.6.0.5
CVE-2023-44487 7156535 8.6.0.4
CVE-2023-1667 7156535 8.6.0.4
CVE-2023-2283 7156535 8.6.0.4
CVE-2024-20952 7156536 8.6.0.4
CVE-2024-20918 7156536 8.6.0.4
CVE-2024-20921 7156536 8.6.0.4
CVE-2024-20919 7156536 8.6.0.4
CVE-2024-20926 7156536 8.6.0.4
CVE-2024-20945 7156536 8.6.0.4
CVE-2023-33850 7156536 8.6.0.4
CVE-2024-23672 7156538 8.6.0.4
CVE-2024-24549 7156538 8.6.0.4
CVE-2023-44487 7156539 8.6.0.4
CVE-2024-25710 7156539 8.6.0.4
CVE-2024-26308 7156539 8.6.0.4
CVE-2023-48795 7154643 8.6.0.3
CVE-2023-47700 7114767 8.6.0.3
CVE-2023-46589 7114769 8.6.0.3
CVE-2023-45648 7114769 8.6.0.3
CVE-2023-42795 7114769 8.6.0.3
CVE-2024-21733 7114769 8.6.0.3
CVE-2023-22081 7114770 8.6.0.3
CVE-2023-22067 7114770 8.6.0.3
CVE-2023-5676 7114770 8.6.0.3
CVE-2023-43042 7064976 8.6.0.2
CVE-2023-21930 7065011 8.6.0.2
CVE-2023-21937 7065011 8.6.0.2
CVE-2023-21938 7065011 8.6.0.2
CVE-2023-34396 7065010 8.6.0.2
CVE-2023-50164 7114768 8.6.0.2
CVE-2023-27870 6985697 8.6.0.0

3.2 APARs Resolved

APAR / Affected Products / Severity / Description / Resolved in / Feature Tags
SVAPAR-155568 FS9500, SVC Critical On FS9500 or SV3 systems, batteries may prematurely hit end of life and go offline. 8.6.0.7 No Specific Feature
SVAPAR-157007 All Critical On heavily loaded systems, a dual node warmstart may occur after an upgrade to 8.7.3.0, 8.7.0.3, or 8.6.0.6 due to an internal memory allocation issue, causing brief loss of access to the data. 8.6.0.7 System Update
SVAPAR-138832 All High Importance Nodes using IP replication with compression may experience multiple node warmstarts due to a timing window in error recovery. 8.6.0.7 IP Replication
SVAPAR-153246 All High Importance A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. 8.6.0.7 Policy-based Replication
SVAPAR-134589 FS9500 HIPER A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. 8.6.0.6 Drives
SVAPAR-140781 All Critical Successful login attempts to the configuration node via SSH are not communicated to the remote syslog server. Service assistant and GUI logins are correctly reported. 8.6.0.6 Security
SVAPAR-141306 All Critical Changing the preferred node of a volume could trigger a cluster recovery, causing brief loss of access to data. 8.6.0.6 3-Site using HyperSwap or Metro Mirror, Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Policy-based Replication
SVAPAR-141920 All Critical Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery, causing brief loss of access to data. 8.6.0.6 FlashCopy
SVAPAR-142287 All Critical Loss of access to data when running certain snapshot commands at the exact time that a Volume Group Snapshot is stopping. 8.6.0.6 Snapshots
SVAPAR-143890 All Critical If a HyperSwap volume is expanded shortly after disabling 3-site replication, the expandvolume command may fail to complete. This will lead to a loss of configuration access. 8.6.0.6 3-Site using HyperSwap or Metro Mirror, FlashCopy, HyperSwap
SVAPAR-147646 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Critical A node goes offline when a non-fatal PCIe error on the Fibre Channel adapter is encountered. It is possible for this to occur on both nodes simultaneously. 8.6.0.6 Fibre Channel
SVAPAR-147906 SVC Critical All nodes may warmstart in a SAN Volume Controller cluster consisting of SV3 nodes under heavy load, if a reset occurs on a Fibre Channel adapter used for local node to node communication. 8.6.0.6 Inter-node messaging
SVAPAR-148987 SVC Critical SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption. 8.6.0.6 Encryption
SVAPAR-149983 All Critical During an upgrade from 8.5.0.10 or higher to 8.6.0 or higher, a medium error on a quorum disk may cause a node warmstart. If the partner node is offline at the same time, this may cause loss of access. 8.6.0.6 System Update
SVAPAR-151639 All Critical If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key. 8.6.0.6 LDAP
SVAPAR-131999 All High Importance Single node warmstart when an NVMe host disconnects from the storage. 8.6.0.6 NVMe
SVAPAR-136677 All High Importance An unresponsive DNS server may cause a single node warmstart and the email process to get stuck. 8.6.0.6 System Monitoring
SVAPAR-138214 All High Importance When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run. 8.6.0.6 FlashCopy
SVAPAR-139247 All High Importance Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition. 8.6.0.6 Thin Provisioning
SVAPAR-144000 All High Importance A high number of abort commands from an NVMe host in a short time may cause a Fibre Channel port on the storage to go offline, leading to degraded hosts. 8.6.0.6 Hosts
SVAPAR-144036 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 High Importance Replacement of an industry standard NVMe drive may fail until both nodes are warmstarted. 8.6.0.6 Reliability Availability Serviceability
SVAPAR-144068 All High Importance If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot. 8.6.0.6 Snapshots
SVAPAR-144070 All High Importance After changing the system name, the iSCSI IQNs may still contain the old system name. 8.6.0.6 iSCSI
SVAPAR-144272 All High Importance IO processing unnecessarily stalled for several seconds following a node coming online. 8.6.0.6 Performance
SVAPAR-147361 All High Importance If a software upgrade completes at the same time as performance data is being sent to IBM Storage Insights, a single node warmstart may occur. 8.6.0.6 Call Home, System Monitoring
SVAPAR-151975 All High Importance In systems using IP replication, a CPU resource allocation change introduced in the 8.6.0.0 release could cause delays in node to node communication, affecting overall write performance. 8.6.0.6 IP Replication, Performance
SVAPAR-152019 All High Importance A single node assert may occur, potentially leading to the loss of the config node, when running the rmfcmap command with the force flag enabled. This can happen if a vdisk used by both FlashCopy and Remote Copy was previously moved between I/O groups. 8.6.0.6 FlashCopy
SVAPAR-123614 SVC Suggested A 1300 error in the error log when a node comes online, caused by a delay between bringing up the physical FC ports and the virtual FC ports. 8.6.0.6 Hot Spare Node
SVAPAR-128414 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Suggested Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed. 8.6.0.6 Compression, FlashCopy
SVAPAR-138286 All Suggested If a direct-attached controller has NPIV enabled, 1625 errors will incorrectly be logged, indicating a controller misconfiguration. 8.6.0.6 Backend Storage
SVAPAR-138859 FS5000, FS5100, FS5200 Suggested Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out of memory event causing the GUI process to be killed. 8.6.0.6 Support Data Collection
SVAPAR-139943 All Suggested A single node warmstart may occur when a host sends a high number of unexpected Fibre Channel frames. 8.6.0.6 Fibre Channel
SVAPAR-140588 All Suggested A node warmstart may occur due to incorrect processing of NVMe host I/O offload commands. 8.6.0.6 NVMe
SVAPAR-142194 All Suggested GUI volume creation does not honour the preferred node that was selected. 8.6.0.6 Graphical User Interface
SVAPAR-143574 All Suggested It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted. 8.6.0.6 Reliability Availability Serviceability
SVAPAR-144271 SVC Suggested An offline node that is protected by a spare node may take longer than expected to come online. This may result in a temporary loss of Fibre Channel connectivity to the hosts. 8.6.0.6 Hot Spare Node
SVAPAR-151965 All Suggested The time zone in performance XML files is displayed incorrectly for some time zones during daylight savings time. This can impact performance monitoring tools such as Storage Insights. 8.6.0.6 System Monitoring
SVAPAR-156179 All Suggested The supported length of client secret for SSO and MFA configurations is limited to 64 characters. 8.6.0.6 Security
SVAPAR-130438 All HIPER Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership. 8.6.0.5 IP Replication
SVAPAR-115129 All Critical A node can warmstart when its I/O group partner node is removed due to an internal software counter discrepancy. This can lead to temporary loss of access. 8.6.0.5 Data Reduction Pools
SVAPAR-117005 All Critical A system may run an automatic cluster recovery, and warmstart all nodes, if Policy-based Replication is disabled on the partnership before removing the replication policy. 8.6.0.5 Policy-based Replication
SVAPAR-120397 All Critical A node may not shut down cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. 8.6.0.5 Reliability Availability Serviceability
SVAPAR-128912 All Critical A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple I/O groups, and one of the I/O groups is offline. 8.6.0.5 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-130553 All Critical Converting a 3-Site AuxFar volume to HyperSwap results in multiple node asserts. 8.6.0.5 3-Site using HyperSwap or Metro Mirror, HyperSwap
SVAPAR-131228 All Critical A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up. 8.6.0.5 Distributed RAID, Encryption, RAID
SVAPAR-131259 All Critical Removal of the replication policy after the volume group was set to be independent exposed an issue that resulted in the FlashCopy internal state becoming incorrect, causing subsequent FlashCopy actions to fail. 8.6.0.5 FlashCopy, Policy-based Replication
SVAPAR-131648 All Critical Multiple node warmstarts may occur when starting an incremental FlashCopy map that uses a replication target volume as its source, and the change volume is used to keep a consistent image. 8.6.0.5 FlashCopy, Policy-based Replication
SVAPAR-136427 All Critical When deleting multiple older snapshot versions, whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access. 8.6.0.5 FlashCopy
SVAPAR-141094 All Critical On power failure, FS50xx systems with 25Gb RoCE adapters may fail to shut down gracefully, causing loss of cache data. 8.6.0.5 Reliability Availability Serviceability
HU02159 All High Importance A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart. 8.6.0.5 Cache
SVAPAR-111173 All High Importance Loss of access when two drives experience slowness at the same time. 8.6.0.5 RAID
SVAPAR-117457 All High Importance A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. 8.6.0.5 Policy-based Replication
SVAPAR-119799 FS9500, SVC High Importance Inter-node resource queuing on SV3 I/O groups causes high write response time. 8.6.0.5 Performance
SVAPAR-120630 All High Importance An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP and the target volume is deduplicated. 8.6.0.5 Data Reduction Pools
SVAPAR-127845 All High Importance Attempting to create a second I/O group in the two `Caching I/O Group` dropdowns on the `Define Volume Properties` modal of `Create Volumes` results in error `CMMVC8709E the iogroups of cache memory storage are not in the same site as the storage groups`. 8.6.0.5 GUI Fix Procedure, Graphical User Interface
SVAPAR-127869 All High Importance Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk I/O is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk. 8.6.0.5 Quorum
SVAPAR-128914 All High Importance A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create a HyperSwap volume from a VDisk with existing snapshots. 8.6.0.5 HyperSwap
SVAPAR-129318 All High Importance A Storage Virtualize cluster configured without I/O group 0 is unable to send performance metrics. 8.6.0.5 Performance
SVAPAR-131651 All High Importance Policy-based Replication got stuck after both nodes in the I/O group on a target system restarted at the same time. 8.6.0.5 Policy-based Replication
SVAPAR-137265 All High Importance Error when attempting to delete a HyperSwap volume with snapshots. 8.6.0.5 FlashCopy
SVAPAR-141996 All High Importance Policy-based replication may not perform the necessary background synchronization to maintain an up to date copy of data at the DR site. 8.6.0.5 Policy-based Replication
SVAPAR-89331 All High Importance Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed. 8.6.0.5 IP Replication
SVAPAR-111991 All Suggested Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character. 8.6.0.5 IP Replication, Policy-based Replication, vVols
SVAPAR-114145 All Suggested A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node to warmstart. 8.6.0.5 IP Replication
SVAPAR-127835 All Suggested A node may warmstart due to an invalid RDMA receive size of zero. 8.6.0.5 NVMe
SVAPAR-129274 All Suggested When running the 'mkvolumegroup' command, a warmstart of the Config node may occur. 8.6.0.5 FlashCopy, Thin Provisioning
SVAPAR-131212 All Suggested The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field. 8.6.0.5 Policy-based Replication
SVAPAR-131807 All Suggested The orchestrator for Policy-Based Replication is not running, preventing replication from being configured. Attempting to configure replication may cause a single node warmstart. 8.6.0.5 Policy-based Replication
SVAPAR-131865 All Suggested A system may encounter communication issues when being configured with IPv6. 8.6.0.5
SVAPAR-131994 All Suggested When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart. 8.6.0.5 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-132011 All Suggested In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded. 8.6.0.5 Fibre Channel, Reliability Availability Serviceability
SVAPAR-132072 All Suggested A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host. 8.6.0.5 Fibre Channel
SVAPAR-135000 All Suggested A low-probability timing window in memory management code may cause a single-node warmstart at upgrade completion. 8.6.0.5 System Update
SVAPAR-135742 All Suggested A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs. 8.6.0.5 Reliability Availability Serviceability
SVAPAR-136172 All Suggested VMware vCenter reports a disk expansion failure, prior to changing the provisioning policy. 8.6.0.5 vVols
SVAPAR-137241 All Suggested When attempting to create a HyperSwap volume via the GUI, when the preferred site is in the secondary data centre, a CMMVC8709E 'the iogroups of cache memory storage are not in the same site as the storage groups' failure occurs. 8.6.0.5 GUI Fix Procedure, Graphical User Interface
SVAPAR-137244 All Suggested In rare circumstances, an internal issue with the GUI backend sorting algorithm can display the following error: 'An error occurred while loading table data'. 8.6.0.5 GUI Fix Procedure, Graphical User Interface
SVAPAR-137906 All Suggested A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade. 8.6.0.5 FlashCopy, System Update
SVAPAR-138418 All Suggested Snap collections triggered by Storage Insights over cloud callhome time out before they have completed. 8.6.0.5 Reliability Availability Serviceability
SVAPAR-140994 All Suggested Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes. 8.6.0.5 Reliability Availability Serviceability
SVAPAR-141001 All Suggested Unexpected error CMMVC9326E when adding a port to a host or creating a host. 8.6.0.5 Hosts
SVAPAR-141019 All Suggested The GUI may crash when a user group with the 3SiteAdmin role and remote users exist. 8.6.0.5 3-Site using HyperSwap or Metro Mirror, Graphical User Interface
SVAPAR-141559 All Suggested The GUI shows 'error occurred loading table data.' in the volume view after the first login attempt to the GUI. Volumes will be visible inside 'Volumes by Pool'. This occurs if volumes are created with names containing either a number between dashes, numbers after dashes, or other characters after numbers. 8.6.0.5 Graphical User Interface
SVAPAR-116592 All HIPER If a V5000E or a Flashsystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources. (show details) 8.6.0.4 IP Replication
SVAPAR-131567 FS7300, FS9500, SVC HIPER Node goes offline and enters service state when collecting diagnostic data for 100Gb/s adapters. (show details) 8.6.0.4
SVAPAR-132123 All HIPER Vdisks can go offline after a T3 with an expanding DRAID1 array evokes some IO errors and data corruption (show details) 8.6.0.4 RAID
SVAPAR-111444 All Critical Direct attached fibre channel hosts may not log into the NPIV host port due to a timing issue with the Registered State Change Notification (RSCN). (show details) 8.6.0.4 Host Cluster, Hosts
SVAPAR-112939 All Critical A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. (show details) 8.6.0.4 Cache
SVAPAR-115478 FS7300 Critical An issue in the thin-provisioning component may cause a node warmstart during upgrade from pre-8.5.4 to 8.5.4 or later. (show details) 8.6.0.4 Thin Provisioning
SVAPAR-115505 All Critical Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started. (show details) 8.6.0.4 FlashCopy
SVAPAR-120610 All Critical Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted (show details) 8.6.0.4 FlashCopy
SVAPAR-123874 All Critical There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period. (show details) 8.6.0.4 Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-123945 All Critical If a system SSL certificate is installed with the extension CA True it may trigger multiple node warmstarts. (show details) 8.6.0.4 Encryption
SVAPAR-126767 All Critical Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured. (show details) 8.6.0.4 iSCSI
SVAPAR-127836 All Critical Running some Safeguarded Copy commands can cause a cluster recovery in some platforms. (show details) 8.6.0.4 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-128052 All Critical A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. (show details) 8.6.0.4 Hosts, NVMe
SVAPAR-128626 All Critical A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume. (show details) 8.6.0.4 FlashCopy, Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-129298 All Critical Manage disk group went offline during queueing of fibre rings on the overflow list causing the node to assert. (show details) 8.6.0.4 RAID
SVAPAR-93709 FS9500 Critical A problem with NVMe drives may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. (show details) 8.6.0.4 Drives, RAID
SVAPAR-108715 All High Importance The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. (show details) 8.6.0.4 Graphical User Interface
SVAPAR-110743 All High Importance Email becoming stuck in the mail queue caused a delay in the 'upgrade commit was finished' message being sent, therefore causing 3 out of 4 nodes to warmstart, and then rejoin the cluster automatically within less than three minutes. (show details) 8.6.0.4 System Update
SVAPAR-110765 All High Importance In a 3-Site configuration, the Config node can be lost if the 'stopfcmap or 'stopfcconsistgrp ' commands are run with the '-force' parameter (show details) 8.6.0.4 3-Site using HyperSwap or Metro Mirror
SVAPAR-112856 All High Importance Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes. (show details) 8.6.0.4 3-Site using HyperSwap or Metro Mirror, HyperSwap
SVAPAR-115021 All High Importance Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state. (show details) 8.6.0.4 HyperSwap
SVAPAR-127063 All High Importance Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts (show details) 8.6.0.4 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Performance
SVAPAR-127841 All High Importance A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur (show details) 8.6.0.4 FlashCopy
SVAPAR-128228 All High Importance The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x (show details) 8.6.0.4
SVAPAR-130731 All High Importance During installation, a single node assert at the end of the software upgrade process may occur (show details) 8.6.0.4 System Update
SVAPAR-108469 All Suggested A single node warmstart may occur on nodes configure to use a secured IP partnership (show details) 8.6.0.4 IP Replication
SVAPAR-111021 All Suggested Unable to load resource page in GUI if the IO group ID:0 does not have any nodes. (show details) 8.6.0.4 System Monitoring
SVAPAR-111992 All Suggested Unable to configure policy-based Replication using the GUI, if truststore contains blank lines or CRLF line endings (show details) 8.6.0.4 Graphical User Interface, Policy-based Replication
SVAPAR-113792 All Suggested Node assert may occur when outbound IPC message such as nslookup to a DNS server timeouts (show details) 8.6.0.4
SVAPAR-114081 All Suggested The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins. (show details) 8.6.0.4 Reliability Availability Serviceability
SVAPAR-120156 FS5000, FS5100, FS5200, FS7200, FS7300, SVC Suggested An internal process introduced in 8.6.0 to collect iSCSI port statistics can affect host performance (show details) 8.6.0.4 Performance, iSCSI
SVAPAR-120399 All Suggested A host WWPN incorrectly shows as being still logged into the storage when it is not. (show details) 8.6.0.4 Reliability Availability Serviceability
SVAPAR-120495 All Suggested A node using the embedded VASA provider can experience performance degradation, potentially leading to a single node warmstart. (show details) 8.6.0.4
SVAPAR-120639 All Suggested A vulnerability scanner may report that cookies are set without the HttpOnly flag. (show details) 8.6.0.4
SVAPAR-121334 All Suggested Packets of unexpected size received on the ethernet interface can fill the internal buffers, causing a node warmstart to clear the condition (show details) 8.6.0.4 NVMe
SVAPAR-122411 All Suggested A node may assert when a vdisk has been expanded but the rehome process has not been made aware of the possible change in the number of regions it may have to rehome. (show details) 8.6.0.4 Data Reduction Pools
SVAPAR-123644 All Suggested A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared. (show details) 8.6.0.4 Reliability Availability Serviceability
SVAPAR-126742 All Suggested A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. (show details) 8.6.0.4 Compression, Data Reduction Pools
SVAPAR-127844 All Suggested An attempt to assign a snapshot policy may fail with error message CMMVC9893E. (show details) 8.6.0.4 FlashCopy
SVAPAR-127908 All Suggested A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI, although it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI (show details) 8.6.0.4 GUI Fix Procedure, Graphical User Interface, Host Cluster, Hosts, NVMe
SVAPAR-129111 All Suggested In the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address. (show details) 8.6.0.4 Graphical User Interface
SVAPAR-130729 All Suggested When upgrading to 8.5.0, remote users configured with public keys do not fall back to a password prompt if a key is not available. (show details) 8.6.0.4 Security
SVAPAR-107547 All Critical If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. (show details) 8.6.0.3 Reliability Availability Serviceability
SVAPAR-111705 All Critical If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. (show details) 8.6.0.3 FlashCopy
SVAPAR-112107 FS9500, SVC Critical There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. (show details) 8.6.0.3 System Update
SVAPAR-112707 SVC Critical Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash (show details) 8.6.0.3 Reliability Availability Serviceability
SVAPAR-115136 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 Critical Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot. (show details) 8.6.0.3 Drives
IT41447 All High Importance When removing the DNS server configuration, a node may discover unexpected metadata and warmstart (show details) 8.6.0.3 Reliability Availability Serviceability
SVAPAR-110426 All High Importance When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart (show details) 8.6.0.3 Security
SVAPAR-110819 & SVAPAR-113122 All High Importance A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. (show details) 8.6.0.3 Fibre Channel
SVAPAR-111812 All High Importance Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. (show details) 8.6.0.3 Command Line Interface
SVAPAR-112525 All High Importance A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy (show details) 8.6.0.3 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-117768 All High Importance Cloud Callhome may stop working without logging an error (show details) 8.6.0.3 Call Home
SVAPAR-102382 All Suggested Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short wave SFP is installed when in fact a long wave SFP is installed. (show details) 8.6.0.3 System Monitoring
SVAPAR-105955 All Suggested Single node warmstart during link recovery when using a secured IP partnership. (show details) 8.6.0.3 IP Replication
SVAPAR-108551 All Suggested An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. (show details) 8.6.0.3 System Update
SVAPAR-112711 All Suggested IBM Storage Virtualize user interface code will not respond to a malformed HTTP POST with expected HTTP 401 message. (show details) 8.6.0.3 Graphical User Interface
SVAPAR-112712 SVC Suggested The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. (show details) 8.6.0.3 Call Home
SVAPAR-117179 All Suggested Snap data collection does not collect an error log if the superuser password requires a change (show details) 8.6.0.3 Support Data Collection
SVAPAR-117781 All Suggested A single node warmstart may occur during Fabric Device Management Interface (FDMI) discovery if a virtual WWPN is discovered on a different physical port than it was previously. (show details) 8.6.0.3 Hosts
SVAPAR-105861 SVC HIPER A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group (show details) 8.6.0.2 FlashCopy, Safeguarded Copy & Safeguarded Snapshots, Volume Mirroring
SVAPAR-104533 All Critical Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools (show details) 8.6.0.2 Data Reduction Pools
SVAPAR-105430 All Critical When hardware compression is suspended mid-IO to a DRP compressed volume, the IO may hang until an internal timeout is hit and a node warmstarts. (show details) 8.6.0.2 Compression, Data Reduction Pools
SVAPAR-107270 All Critical If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. (show details) 8.6.0.2 Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-107734 All Critical Issuing IO to an incremental fcmap volume that is in a stopped state but has recently been expanded, and that also has a partner fcmap, may cause the nodes to restart. (show details) 8.6.0.2 FlashCopy
SVAPAR-114899 All Critical Out of order snapshot stopping can cause stuck cleaning processes to occur, following Policy-based Replication cycling. This manifests as extremely high CPU utilization on multiple CPU cores, causing excessively high volume response times. (show details) 8.6.0.2 Policy-based Replication
SVAPAR-104159 All High Importance Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart. (show details) 8.6.0.2 Reliability Availability Serviceability
SVAPAR-104250 All High Importance There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition (show details) 8.6.0.2 Hosts, NVMe
SVAPAR-105727 All High Importance An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised (show details) 8.6.0.2 Volume Mirroring
SVAPAR-106874 FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication. (show details) 8.6.0.2 Policy-based Replication
SVAPAR-99997 All High Importance Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup' (show details) 8.6.0.2 FlashCopy
SVAPAR-102271 All Suggested Enable IBM Storage Defender integration for Data Reduction Pools (show details) 8.6.0.2 Interoperability
SVAPAR-106693 FS9500 Suggested Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8 (show details) 8.6.0.2 Support Remote Assist
SVAPAR-107558 All Suggested A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail. (show details) 8.6.0.2 FlashCopy, Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-107595 FS7300, FS9100, FS9200, FS9500, SVC Suggested Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources (show details) 8.6.0.2 Global Mirror, HyperSwap, Metro Mirror, Performance
SVAPAR-107733 All Suggested The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if auth passphrase contains special characters, such as '!' (show details) 8.6.0.2
SVAPAR-109289 All Suggested Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets (show details) 8.6.0.2 Backend Storage
SVAPAR-98576 All Suggested Customers cannot edit certain properties of a FlashCopy mapping via the GUI FlashCopy mappings panel, as the edit modal does not appear. (show details) 8.6.0.2 FlashCopy, Graphical User Interface
SVAPAR-103696 All HIPER When taking a snapshot of a volume that is being replicated to another system using Policy Based Replication, the snapshot may contain data from an earlier point in time than intended (show details) 8.6.0.1 FlashCopy, Policy-based Replication
SVAPAR-94179 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 HIPER Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However, this can sometimes also result in a reboot on the partner node (show details) 8.6.0.1 Reliability Availability Serviceability
HU02585 All Critical An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring (show details) 8.6.0.1 Backend Storage
SVAPAR-100127 All Critical The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. (show details) 8.6.0.1 Graphical User Interface
SVAPAR-100564 All Critical On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. (show details) 8.6.0.1 HyperSwap
SVAPAR-98184 All Critical When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access (show details) 8.6.0.1 FlashCopy, Policy-based Replication
SVAPAR-98612 All Critical Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts (show details) 8.6.0.1 FlashCopy
SVAPAR-98672 All Critical VMWare host crashes on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled (show details) 8.6.0.1 NVMe
SVAPAR-100162 All High Importance Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart occurs (show details) 8.6.0.1 Hosts
SVAPAR-100977 All High Importance When a zone containing NVMe devices is enabled, a node warmstart might occur. (show details) 8.6.0.1 NVMe
SVAPAR-102573 All High Importance On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O (show details) 8.6.0.1 Policy-based Replication
SVAPAR-98497 All High Importance Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure (show details) 8.6.0.1 System Monitoring
SVAPAR-98893 All High Importance If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur (show details) 8.6.0.1 Storage Virtualisation
SVAPAR-99175 All High Importance A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. (show details) 8.6.0.1 Cache
SVAPAR-99354 All High Importance Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot (show details) 8.6.0.1 FlashCopy
SVAPAR-99537 All High Importance If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed (show details) 8.6.0.1 Data Reduction Pools
SVAPAR-99855 FS9500, SVC High Importance After battery firmware is upgraded on SV3 or FS9500 as part of a software upgrade, there is a small probability that the battery may remain permanently offline (show details) 8.6.0.1
SVAPAR-100172 FS9500, SVC Suggested During the enclosure component upgrade, which occurs after the cluster upgrade has committed, a system can experience spurious 'The PSU has indicated DC failure' events (error code 1126 ). The event will automatically fix itself after several seconds and there is no user action required (show details) 8.6.0.1
SVAPAR-100958 All Suggested A single FCM may incorrectly report multiple medium errors for the same LBA (show details) 8.6.0.1 RAID
SVAPAR-110059 All Suggested When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail. (show details) 8.6.0.1 Support Data Collection
SVAPAR-95384 All Suggested In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication (show details) 8.6.0.1 Policy-based Replication
SVAPAR-97502 All Suggested Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings (show details) 8.6.0.1 Policy-based Replication
SVAPAR-98128 All Suggested A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters (show details) 8.6.0.1 System Update
SVAPAR-98611 All Suggested The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host (show details) 8.6.0.1 Interoperability
HU02475 All HIPER Power outage can cause reboots on nodes with 25Gb ethernet adapters, necessitating T3 recovery (show details) 8.6.0.0 Reliability Availability Serviceability
HU02572 All HIPER When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade. (show details) 8.6.0.0 Drives
SVAPAR-90459 All HIPER Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume (show details) 8.6.0.0 FlashCopy
SVAPAR-98567 FS5000 HIPER In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix. (show details) 8.6.0.0 Encryption
HU02420 All Critical During an array copyback, a memory leak may cause the progress to stall and a warmstart of all nodes, resulting in a temporary loss of access (show details) 8.6.0.0 RAID
HU02441 & HU02486 All Critical Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts (show details) 8.6.0.0 Data Reduction Pools, Safeguarded Copy & Safeguarded Snapshots
HU02471 All Critical After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access there can be a data integrity issue (show details) 8.6.0.0 FlashCopy, Global Mirror With Change Volumes
HU02502 All Critical On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access (show details) 8.6.0.0 FlashCopy
HU02506 All Critical On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access. (show details) 8.6.0.0 Hosts
HU02519 & HU02520 All Critical Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession (show details) 8.6.0.0 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
HU02540 All Critical Deleting a HyperSwap volume copy with dependent Flashcopy mappings can trigger repeated node warmstarts (show details) 8.6.0.0 FlashCopy, HyperSwap
HU02541 All Critical In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data. (show details) 8.6.0.0 Data Reduction Pools, Deduplication
HU02546 FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Critical On systems running 8.5.2.1, and with Policy-based replication configured, if you created more than 1PB of replicated volumes then this can lead to a loss of hardened data (show details) 8.6.0.0 Policy-based Replication
HU02551 All Critical When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints (show details) 8.6.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02556 FS9500, SVC Critical In rare circumstances, a FlashSystem 9500 (or SV3) node might be unable to boot, requiring a replacement of the boot drive and TPM (show details) 8.6.0.0 Encryption
HU02561 All Critical If a high number of FC mappings share the same target, the internal array used to track the FC mappings can be mishandled, causing it to overrun. This causes a cluster-wide warmstart (show details) 8.6.0.0 FlashCopy
HU02563 All Critical Improve DIMM slot identification for memory errors (show details) 8.6.0.0 Reliability Availability Serviceability
HU02567 All Critical Due to a low probability timing window, FlashCopy reads can occur indefinitely to an offline Vdisk. This can cause host write delays to flashcopy target volumes that can exceed 6 minutes (show details) 8.6.0.0 FlashCopy
HU02584 All Critical If a HyperSwap volume is created with cache disabled in a Data Reduction Pool (DRP), multiple node warmstarts may occur. (show details) 8.6.0.0 Data Reduction Pools, HyperSwap
HU02586 All Critical When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly (show details) 8.6.0.0 Safeguarded Copy & Safeguarded Snapshots
IT41088 FS5000, FS5100, FS5200 Critical Systems with low memory that have a large number of RAID arrays that are resyncing can cause a system to run out of RAID rebuild control blocks (show details) 8.6.0.0 RAID
IT41173 FS5200 Critical If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare. (show details) 8.6.0.0 Reliability Availability Serviceability
SVAPAR-84116 All Critical The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed (show details) 8.6.0.0 Data Reduction Pools, Deduplication
SVAPAR-86477 All Critical In some situations, ordered processes must be replayed to ensure the continued management of user workloads. Circumstances exist where this processing can fail to be scheduled, so the work remains locked. Software timers that check for this continued activity will detect the stall and force a recovery warmstart (show details) 8.6.0.0 Data Reduction Pools
SVAPAR-87729 All Critical After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts (show details) 8.6.0.0 Call Home
SVAPAR-87846 All Critical Node warmstarts with unusual workload pattern on volumes with Policy-based replication (show details) 8.6.0.0 Policy-based Replication
SVAPAR-88279 All Critical A low probability timing window exists in the Fibre Channel login management code. If there are many logins, and two nodes go offline in a very short time, this may cause other nodes in the cluster to warmstart (show details) 8.6.0.0 Reliability Availability Serviceability
SVAPAR-88887 FS9100, FS9200, FS9500 Critical Loss of access to data after replacing all boot drives in system (show details) 8.6.0.0 Drives, Reliability Availability Serviceability
SVAPAR-89172 All Critical Snapshot volumes created by running the 'addsnapshot' command from the CLI can be slow to come online, which causes the production volumes to incorrectly go offline (show details) 8.6.0.0 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-89692 FS9500, SVC Critical Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected (show details) 8.6.0.0
SVAPAR-89764 All Critical An issue with the background asynchronous deletion of Safeguarded Copy VDisks can cause an unexpected internal state in the FlashCopy component, leading to a single node assert (show details) 8.6.0.0 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-90438 All Critical A conflict of host IO on one node, with array resynchronisation task on the partner node, can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache (show details) 8.6.0.0 Distributed RAID
SVAPAR-91111 All Critical USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software (show details) 8.6.0.0 Encryption
SVAPAR-91860 All Critical If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data (show details) 8.6.0.0 System Update
SVAPAR-92579 All Critical If Volume Group Snapshots are in use on a Policy-Based Replication DR system, a timing window may result in a node warmstart for one or both nodes in the I/O group (show details) 8.6.0.0 Policy-based Replication, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-94956 FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Critical When ISER clustering is configured with a default gateway of 0.0.0.0, the node IPs will not be activated during boot after a reboot or warmstart and the node will remain offline in 550/551 state (show details) 8.6.0.0 HyperSwap
SVAPAR-95349 All Critical Adding a hyperswap volume copy to a clone of a Volume Group Snapshot may cause all nodes to warmstart, causing a loss of access (show details) 8.6.0.0 HyperSwap
HU01782 All High Importance A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC (show details) 8.6.0.0 Drives
HU02271 & SVAPAR-88275 All High Importance A single-node warmstart may occur due to a very low-probability timing window in the thin-provisioning component. This can occur when the partner node has just gone offline, causing a loss of access to data (show details) 8.6.0.0 Thin Provisioning
HU02339 All High Importance Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data (show details) 8.6.0.0 Hosts, Interoperability
HU02464 All High Importance An issue in the processing of NVMe host logouts can cause multiple node warmstarts (show details) 8.6.0.0 Hosts, NVMe
HU02483 All High Importance A T2 recovery may occur after the mkrcrelationship command is run (show details) 8.6.0.0 Command Line Interface, Global Mirror, Global Mirror With Change Volumes
HU02488 All High Importance Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost) (show details) 8.6.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02490 FS9500 High Importance Upon first boot, or subsequent boots of a FS9500 a 1034 error may appear in the event log that states that the CPU PCIe link is degraded (show details) 8.6.0.0 Reliability Availability Serviceability
HU02492 SVC High Importance Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected. (show details) 8.6.0.0 Reliability Availability Serviceability
HU02497 All High Importance A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts (show details) 8.6.0.0 Hosts, Interoperability
HU02507 All High Importance A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts. (show details) 8.6.0.0 Host Cluster, Hosts
HU02511 All High Importance Code version 8.5.0 includes a change in the driver setting for the 25Gb ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms (show details) 8.6.0.0 Host Cluster, Hosts, SCSI Unmap, iSCSI
HU02512 FS5000 High Importance An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts (show details) 8.6.0.0 Hosts
HU02523 All High Importance Host WWPN state incorrectly shows as degraded for a direct-attached host after upgrading to 8.5.0.2 (show details) 8.6.0.0 Host Cluster, Hosts, System Update
HU02525 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss (show details) 8.6.0.0 Hosts, iSCSI
HU02529 All High Importance A single node warmstart may occur due to a rare timing window, when a disconnection occurs between two systems in an IP replication partnership (show details) 8.6.0.0
HU02530 All High Importance Upgrades from 8.4.2 or 8.5 fail to start on some platforms (show details) 8.6.0.0 System Update
HU02534 All High Importance When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes (show details) 8.6.0.0 Reliability Availability Serviceability
HU02538 All High Importance Some systems may suffer a thread locking issue caused by the background copy / cleaning progress for FlashCopy maps (show details) 8.6.0.0 FlashCopy
HU02539 All High Importance If an IP address is moved to a different port on a node, the old routing table entries are not refreshed. Therefore, the IP address may be inaccessible through the new port (show details) 8.6.0.0
HU02545 All High Importance When following the 'removing and replacing a faulty node canister' procedure, the 'satask chbootdrive -replacecanister' command fails to clear the reported 545 error; instead, the replacement reboots into 525 / 522 service state (show details) 8.6.0.0 Drives, Reliability Availability Serviceability
HU02549 All High Importance When upgrading from a lower level, to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade (show details) 8.6.0.0 System Update
HU02555 All High Importance A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured (show details) 8.6.0.0 LDAP
HU02558 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur. (show details) 8.6.0.0 Compression
HU02562 All High Importance A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations (show details) 8.6.0.0
HU02565 All High Importance Node warmstart when generating data compression savings data for 'lsvdiskanalysis' (show details) 8.6.0.0
HU02569 All High Importance Due to a low-probability timing window, when processing I/O from both SCSI and NVMe hosts, a node may warmstart to clear the condition (show details) 8.6.0.0 Host Cluster, Hosts, NVMe, SCSI Unmap, iSCSI
HU02573 All High Importance HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node. (show details) 8.6.0.0 Fibre Channel, Hosts
HU02580 All High Importance If FlashCopy mappings are force stopped, and the targets are in a remote copy relationship, then a node may warmstart (show details) 8.6.0.0 FlashCopy
HU02581 All High Importance Due to a low probability timing window, a node warmstart might occur when I/O is sent to a partner node and before the partner node recognizes that the disk is online (show details) 8.6.0.0 Cache
HU02583 All High Importance FCM drive ports may be excluded after a failed drive firmware download. Depending on the number of drives impacted, this may take the RAID array offline (show details) 8.6.0.0 Drives
HU02589 FS5200, FS7200, FS9100, FS9200, FS9500 High Importance Reducing the expiration date of snapshots can cause volume creation and deletion to stall (show details) 8.6.0.0 FlashCopy, Policy-based Replication, Safeguarded Copy & Safeguarded Snapshots
IT41191 All High Importance If a REST API client authenticates as an LDAP user, a node warmstart can occur (show details) 8.6.0.0 REST API
IT41835 All High Importance A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type (show details) 8.6.0.0 Drives
SVAPAR-82950 FS9500, SVC High Importance If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue (show details) 8.6.0.0 Reliability Availability Serviceability
SVAPAR-83290 FS5000 High Importance An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime. (show details) 8.6.0.0
SVAPAR-84305 All High Importance A node may warmstart when attempting to run 'chsnmpserver -community' command without any additional parameter (show details) 8.6.0.0 System Monitoring
SVAPAR-84331 All High Importance A node may warmstart when the 'lsnvmefabric -remotenqn' command is run (show details) 8.6.0.0 NVMe
SVAPAR-85093 All High Importance Systems that are using Policy-Based Replication may experience node warmstarts, if host I/O consists of large write I/Os with a high queue depth (show details) 8.6.0.0 Policy-based Replication
SVAPAR-85396 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 High Importance Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem (show details) 8.6.0.0 Drives
SVAPAR-86035 All High Importance Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart (show details) 8.6.0.0 Data Reduction Pools
SVAPAR-89780 All High Importance A node may warmstart after running the flashcopy command 'stopfcconsistgrp' due to the flashcopy maps in the consistency group being in an invalid state (show details) 8.6.0.0 FlashCopy
SVAPAR-89951 All High Importance A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode. (show details) 8.6.0.0 Policy-based Replication
SVAPAR-90395 FS9500, SVC High Importance FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources (show details) 8.6.0.0 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-92066 All High Importance Node warmstarts can occur after running the 'lsvdiskfcmapcopies' command if Safeguarded Copy is used (show details) 8.6.0.0 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-92983 All High Importance An issue prevents remote users with an SSH key from connecting to the storage system if BatchMode is enabled (show details) 8.6.0.0 Security
SVAPAR-93054 All High Importance Backend systems on 8.2.1 and later have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays (show details) 8.6.0.0 Backend Storage
SVAPAR-93309 All High Importance A node may briefly go offline after a battery firmware update (show details) 8.6.0.0 System Update
SVAPAR-94686 All High Importance The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data (show details) 8.6.0.0 Graphical User Interface
SVAPAR-99273 All High Importance If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart. (show details) 8.6.0.0
HU02446 All Suggested An invalid alert relating to GMCV freeze time can be displayed (show details) 8.6.0.0 Global Mirror With Change Volumes
HU02453 All Suggested It may not be possible to connect to GUI or CLI without a restart of the Tomcat server (show details) 8.6.0.0 Command Line Interface, Graphical User Interface
HU02462 All Suggested A node can warmstart when a FlashCopy volume is flushing, quiesces and has pinned data (show details) 8.6.0.0 FlashCopy
HU02463 All Suggested LDAP user accounts can become locked out because of multiple failed login attempts (show details) 8.6.0.0 Graphical User Interface, LDAP
HU02468 All Suggested The 'lsvdisk' preferred_node_id filter does not work correctly (show details) 8.6.0.0 Command Line Interface
HU02484 All Suggested The GUI does not allow expansion of DRP thin or compressed volumes (show details) 8.6.0.0 Data Reduction Pools, Graphical User Interface
HU02487 All Suggested Problems expanding the size of a volume using the GUI (show details) 8.6.0.0 Graphical User Interface
HU02491 All Suggested On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur (show details) 8.6.0.0 Global Mirror With Change Volumes
HU02494 All Suggested A system with a DNS server configured, which cannot ping the server, will log information events in the eventlog. In some environments the firewall blocks ping packets but allows DNS lookup, so this APAR disables these events. (show details) 8.6.0.0 Reliability Availability Serviceability
HU02498 All Suggested If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load. (show details) 8.6.0.0 Graphical User Interface
HU02501 All Suggested If an internal I/O timeout occurs in a RAID array, a node warmstart can occur (show details) 8.6.0.0 RAID
HU02503 All Suggested The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI (show details) 8.6.0.0 Graphical User Interface
HU02504 All Suggested The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP (show details) 8.6.0.0 Graphical User Interface
HU02505 All Suggested A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running (show details) 8.6.0.0 Data Reduction Pools
HU02508 All Suggested The mkippartnership cli command does not allow a portset with a space in the name as a parameter. (show details) 8.6.0.0 Command Line Interface
HU02509 All Suggested Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use (show details) 8.6.0.0 Data Reduction Pools
HU02514 All Suggested Firmware upgrade may fail for certain drive types, with the error message CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file (show details) 8.6.0.0 Drives
HU02515 FS9500 Suggested Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected (show details) 8.6.0.0 Drives
HU02528 All Suggested When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values (show details) 8.6.0.0 Reliability Availability Serviceability
HU02544 FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Suggested On systems running 8.5.2.1, if you are not logged in as superuser and you try to create a partnership for policy-based replication, or enable policy-based replication on an existing partnership, then this can trigger a single node warmstart. (show details) 8.6.0.0 Policy-based Replication
HU02553 FS9500, SVC Suggested Remote copy relationships may not correctly display the name of the vdisk on the remote cluster (show details) 8.6.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02559 All Suggested A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information (show details) 8.6.0.0 Graphical User Interface
HU02568 All Suggested Unable to create a remote copy relationship with 'mkrcrelationship' when the Aux volume ID is greater than 10,000 and one of the partnered systems is limited to 10,000 volumes, either due to the limits of the platform (hardware) or the installed software version (show details) 8.6.0.0 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU02579 All Suggested The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable (show details) 8.6.0.0 Graphical User Interface, iSCSI
HU02592 All Suggested In some scenarios DRP can request RAID to attempt a read by reconstructing data from other strips. In certain cases this can result in a node warmstart (show details) 8.6.0.0 Data Reduction Pools, RAID
HU02594 All Suggested Initiating a drive firmware update via the management user interface for one drive class can cause all drives to be updated (show details) 8.6.0.0 Drives, System Update
HU02600 All Suggested Single node warmstart caused by a rare race condition triggered by multiple aborts and I/O issues (show details) 8.6.0.0
SVAPAR-84099 All Suggested An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart (show details) 8.6.0.0 Hosts, NVMe
SVAPAR-85640 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 Suggested If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing (show details) 8.6.0.0 Host Cluster
SVAPAR-86182 All Suggested A node may warmstart if there is an encryption key error that prevents a distributed RAID array from being created (show details) 8.6.0.0 Distributed RAID, Encryption
SVAPAR-89296 All Suggested Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade (show details) 8.6.0.0 EasyTier
SVAPAR-89781 All Suggested The 'lsportstats' command does not work via the REST API until code level 8.5.4.0 (show details) 8.6.0.0
SVAPAR-93442 All Suggested A user ID may not have the authority to submit a command in some LDAP environments (show details) 8.6.0.0
SVAPAR-93987 All Suggested A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets (show details) 8.6.0.0 FlashCopy
SVAPAR-94682 All Suggested SMTP fails if the email server's domain name is longer than 40 characters (show details) 8.6.0.0
SVAPAR-94703 All Suggested The estimated compression savings value shown in the GUI for a single volume is incorrect. The total savings for all volumes in the system will be shown (show details) 8.6.0.0 Graphical User Interface
SVAPAR-94902 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 Suggested When attempting to enable local port masking for a specific subset of control enclosure based clusters, this may fail with the following message: 'The specified port mask cannot be applied because insufficient paths would exist for node communication' (show details) 8.6.0.0
SVAPAR-96656 All Suggested VMware hosts may experience errors creating snapshots, due to an issue in the VASA Provider (show details) 8.6.0.0

4. Useful Links

Description Link
Product Documentation
Update Matrices, including detailed build versions
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
IBM Storage Virtualize Policy-based replication and High Availability Compatibility Cross Reference
Storage Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning