Release Note for systems built with IBM Storage Virtualize


This is the release note for the 8.7.0 release and details the issues resolved between 8.7.0.0 and 8.7.0.4. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 9 April 2025.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Concurrent Compatibility and Code Cross Reference, linked in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.7.0.0 release. This includes features that were first introduced in the Non-LTS releases 8.6.1, 8.6.2 and 8.6.3.

Automatic update of candidate FlashCore Modules

From 8.7.0.0, the software update package for FlashSystems that support NVMe FlashCore Modules (FCM) includes a drive firmware patch package. This bundled patch is installed automatically when updating the system software unless a newer version of the patch is already installed on the system.

The NVMe drive firmware in the patch is used to automatically update any candidate FlashCore Modules in the system before they are used in the array. For example, when adding additional capacity to a system by adding new drives, or when installing a replacement drive.

The bundled NVMe drive firmware patch package cannot currently be used to manually update other drives in the system. To update drives that are already in use in an array, follow the standard drive update procedure to obtain the appropriate firmware package and instructions for installation.
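
For reference, a minimal way to check which drives are candidates and which firmware level they are running is sketched below, assuming CLI access (the drive ID is a placeholder, and output field names may vary slightly between releases):

  lsdrive -filtervalue use=candidate   # list drives in the candidate state, not yet in an array
  lsdrive 12                           # detailed view of drive 12, including its firmware_level

Drives that are already members of an array are updated separately, following the standard drive update procedure (typically via the applydrivesoftware command).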

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Each entry below lists the restriction details, followed by the release in which the restriction was introduced.

Upgrade to 8.7.0.4 is not currently supported on FS5015 and FS5035, due to SVAPAR-161016.

This restriction will be lifted in a future PTF.

8.7.0.4

This release includes updated battery firmware that improves both short-term and long-term battery reliability. After the firmware update is complete, there is a small chance that one or more batteries will log an error to indicate they need to be replaced. This error does not cause the battery to go offline, and it does not affect the operation of the system. Open a support ticket for battery replacement if you see this error.

8.7.0.4

Ethernet clustering is not supported if the cluster contains three or more I/O groups.

8.7.0.2

8.7.0 is the final release that supports the following Remote Copy features:

  • Metro Mirror
  • Global Mirror (and Global Mirror with Change Volumes)
  • HyperSwap
  • Volume mobility

Policy-Based Replication and Policy-Based High Availability provide replication functionality in all future LTS and non-LTS releases.

8.7.0.0

Systems using VMware Virtual Volumes (vVols) may require reconfiguration before updating to 8.6.1.0 or later.

Refer to Updating to Storage Virtualize 8.6.1 or later using VMware Virtual Volumes (vVols)

8.6.1.0

All IO groups must be configured with FC target port mode set to 'enabled' before upgrading to 8.6.1.0 or later.

Enabling the FC target port mode configures multiple WWPNs per port (using the Fibre Channel NPIV technology), and separates host traffic from all other traffic onto different WWPNs. A sketch of the relevant CLI commands is shown after this entry.

Important: Changing this setting is likely to require changes to zoning, and rescanning LUNs in all applications.

The 8.6 product documentation Enabling NPIV on an existing system contains details about how to make the changes.

Note: For IBM i hosts, an alternative procedure is required and is documented here: IBM i Systems attached to FlashSystem or SVC must be converted to use FlashSystem NPIV before upgrading to 8.7 or higher

8.6.1.0
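
As an illustration only, the CLI sequence for enabling NPIV typically looks like the sketch below; the I/O group ID is a placeholder, and the documented procedure (including the required zoning changes between steps) takes precedence:

  lsiogrp 0                                  # check the current fctargetportmode for I/O group 0
  chiogrp -fctargetportmode transitional 0   # allow host I/O on both physical and virtual (NPIV) WWPNs
  lstargetportfc                             # list the virtual WWPNs that now need to be zoned to hosts
  chiogrp -fctargetportmode enabled 0        # complete the change once hosts are using the virtual WWPNs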

8.6.1.0 removes support for the CIM protocol. Applications that connect using the CIM protocol should be upgraded to use a supported interface, such as the REST interface (a minimal example follows this entry).

IBM recommends that any product teams currently using CIM protocol comment on this Idea and IBM will contact you with more details about how to use a supported interface.

8.6.1.0
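
For applications moving off CIM, a minimal REST interface sketch is shown below; the address, credentials and port are placeholders, and the REST API documentation for your release is the authoritative reference:

  # authenticate and obtain a token
  curl -k -X POST https://<system_ip>:7443/rest/auth \
       -H 'X-Auth-Username: <user>' -H 'X-Auth-Password: <password>'

  # run a command (for example lssystem) using the returned token
  curl -k -X POST https://<system_ip>:7443/rest/lssystem \
       -H 'X-Auth-Token: <token>'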

Systems using the 3-site orchestrator cannot upgrade to 8.6.1.0 or later.

8.6.1.0

Spectrum Virtualize for Public Cloud is not supported on 8.6.1.0 or later.

8.6.1.0

There is an existing limit of approximately 780 entries on the number of files that can be returned by the CLI. In many configurations this limit is of no concern. However, due to a problem with hot-spare node I/O stats files, 8-node clusters with many hardware upgrades or multiple spare nodes may have up to 900 I/O stats files. As a consequence, the data collector for Storage Insights and Spectrum Control cannot list or download the required set of performance statistics data. The result is many gaps in the performance data, leading to errors with the performance monitoring tools and a lack of performance history.

The workaround is to remove the files associated with spare nodes or previous/updated hardware using the cleardumps command (or to clear the entire iostats directory with cleardumps), as shown in the sketch after this entry.

This is a known issue that will be lifted in a future PTF. The fix can be tracked using APAR HU02403.

8.4.0.0
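
As a sketch of that workaround (the node name is a placeholder; cleardumps deletes the selected files, so only use it for I/O stats data you no longer need):

  cleardumps -prefix /dumps/iostats          # clear the I/O stats directory on the configuration node
  cleardumps -prefix /dumps/iostats node2    # clear the I/O stats directory on a specific node
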
If an update stalls or fails, contact IBM Support for further assistance.

n/a
The following restrictions were valid in previous releases, but have now been lifted:

iSER hosts were not supported due to SVAPAR-148236.

This has been resolved in 8.7.0.4.

8.7.0.2

Configurations with short distance partnerships using RDMA, or Ethernet clustering, are not supported on 8.7.0.

This has been resolved in 8.7.0.2.

8.7.0.0

Due to SVAPAR-138214, if Veeam 12.1 is used with Storage Virtualize 8.5.1 or later, and the Veeam user is in an ownership group, this might cause node warmstarts.

This has been resolved in 8.7.0.1.

8.5.1.0

Each ethernet port can only have a single management IP address. Attempting to add a second management IP address to a port may cause multiple node warmstarts.

This issue has been resolved in 8.7.0.1 under SVAPAR-136256.

8.7.0.0

Due to a known issue that occurred when a cluster outage took place while a DRAID1 array was expanding, expansion of DRAID1 arrays was not supported on 8.4.0 and higher.

This restriction has now been lifted in 8.7.0.0.

8.4.0.0


3. Issues Resolved

This release contains all of the fixes included in the 8.6.0.0 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier | Link for additional Information | Resolved in
CVE-2025-0159 7184182 8.7.0.3
CVE-2025-0160 7184182 8.7.0.3
CVE-2024-6387 7183471 8.7.0.3
CVE-2024-6409 7183471 8.7.0.3
CVE-2023-2975 7183471 8.7.0.3
CVE-2023-3446 7183471 8.7.0.3
CVE-2023-3817 7183471 8.7.0.3
CVE-2023-5678 7183471 8.7.0.3
CVE-2022-2795 7183474 8.7.0.3
CVE-2022-3094 7183474 8.7.0.3
CVE-2022-3736 7183474 8.7.0.3
CVE-2022-3924 7183474 8.7.0.3
CVE-2023-4408 7183474 8.7.0.3
CVE-2023-5517 7183474 8.7.0.3
CVE-2023-5679 7183474 8.7.0.3
CVE-2023-6516 7183474 8.7.0.3
CVE-2023-50387 7183474 8.7.0.3
CVE-2023-50868 7183474 8.7.0.3
CVE-2023-6240 7183475 8.7.0.3
CVE-2024-4032 7183477 8.7.0.3
CVE-2024-28182 7183481 8.7.0.3
CVE-2023-6356 7183482 8.7.0.3
CVE-2023-6535 7183482 8.7.0.3
CVE-2023-6536 7183482 8.7.0.3
CVE-2023-5178 7183482 8.7.0.3
CVE-2023-45871 7183482 8.7.0.3
CVE-2023-48795 7154643 8.7.0.0
CVE-2023-44487 7156535 8.7.0.0
CVE-2023-1667 7156535 8.7.0.0
CVE-2023-2283 7156535 8.7.0.0
CVE-2024-20952 7156536 8.7.0.0
CVE-2024-20918 7156536 8.7.0.0
CVE-2024-20921 7156536 8.7.0.0
CVE-2024-20919 7156536 8.7.0.0
CVE-2024-20926 7156536 8.7.0.0
CVE-2024-20945 7156536 8.7.0.0
CVE-2023-33850 7156536 8.7.0.0
CVE-2024-23672 7156538 8.7.0.0
CVE-2024-24549 7156538 8.7.0.0
CVE-2023-44487 7156539 8.7.0.0
CVE-2024-25710 7156539 8.7.0.0
CVE-2024-26308 7156539 8.7.0.0
CVE-2024-29025 7156484 8.7.0.0

3.2 APARs Resolved

APAR | Affected Products | Severity | Description | Resolved in | Feature Tags
SVAPAR-156849 All HIPER Removing a replication policy from a partition operating in single location mode following a failover of Policy-based High Availability may leave an incorrect residual state on the partition that may impact future potential T3 recoveries (show details) 8.7.0.4 Policy-based High availability
SVAPAR-148236 All Critical iSER hosts are unable to access volumes on systems running 8.7.0.2 (show details) 8.7.0.4 Hosts
SVAPAR-155568 FS9500, SVC Critical On FS9500 or SV3 systems, batteries may prematurely hit end of life and go offline. (show details) 8.7.0.4 Reliability Availability Serviceability
SVAPAR-157007 All Critical On heavily loaded systems, a dual node warmstart may occur after an upgrade to 8.7.3.0, 8.7.0.3, or 8.6.0.6 due to an internal memory allocation issue causing brief loss of access to the data. (show details) 8.7.0.4 System Update
SVAPAR-157593 All Critical Mapping an HA volume to a SAN Volume Controller or FlashSystem is not supported. This may cause loss of access to data on the system presenting the HA volume. (show details) 8.7.0.4 Hosts, HyperSwap, Policy-based High availability, Storage Virtualisation
SVAPAR-138832 All High Importance Nodes using IP replication with compression may experience multiple node warmstarts due to a timing window in error recovery. (show details) 8.7.0.4 IP Replication
SVAPAR-146576 All High Importance When a quorum device is disconnected from a system twice in a short period, multiple node warmstarts can occur. 8.7.0.4 Policy-based High availability
SVAPAR-152379 All High Importance When using policy-based high availability with multiple partitions, where partitions are replicating in both directions, it is possible to see a single node warmstart on each system. This is due to a deadlock condition related to I/O forwarding, triggered by a large write or unmap spike at the non-preferred site for the partition. (show details) 8.7.0.4 Policy-based High availability
SVAPAR-156345 All High Importance A node warmstart on a system using policy-based replication has a low probability of causing another node to also warmstart. (show details) 8.7.0.4 Policy-based Replication
SVAPAR-156522 All High Importance Compare and Write (CAW) I/O requests might be rejected with a SCSI Busy status after a failure to create volumes in the second location of a PBHA partition (show details) 8.7.0.4 Policy-based High availability
SVAPAR-156660 All High Importance Compare and Write (CAW) I/O requests sent to volumes configured with policy-based high availability may get stuck in a timing window after a node comes online, causing a single-node warmstart. (show details) 8.7.0.4 Policy-based High availability
SVAPAR-143621 All Suggested REST API returns HTTP status 502 after a timeout of 30 seconds instead of 180 seconds (show details) 8.7.0.4 REST API
SVAPAR-155824 FS5200 Suggested FS5200 and FS5300 systems with iWARP adapters and 8.7.0.3 or 8.7.3.0 software may experience an out-of-memory condition. (show details) 8.7.0.4 Reliability Availability Serviceability
SVAPAR-156586 All Suggested Cloud callhome stops working after downloading software directly to the system, or upgrading to 8.7.0.2 or later. (show details) 8.7.0.4 Call Home
SVAPAR-134589 FS9500 HIPER A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline. (show details) 8.7.0.3 Drives
SVAPAR-140781 All Critical Successful login attempts to the configuration node via SSH are not communicated to the remote syslog server. Service assistant and GUI logins are correctly reported. (show details) 8.7.0.3 Security
SVAPAR-142287 All Critical Loss of access to data when running certain snapshot commands at the exact time that a Volume Group Snapshot is stopping (show details) 8.7.0.3 Snapshots
SVAPAR-143890 All Critical If a HyperSwap volume is expanded shortly after disabling 3-site replication, the expandvolume command may fail to complete. This will lead to a loss of configuration access. (show details) 8.7.0.3 3-Site using HyperSwap or Metro Mirror, FlashCopy, HyperSwap
SVAPAR-147646 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Critical Node goes offline when a non-fatal PCIe error on the fibre channel adapter is encountered. It's possible for this to occur on both nodes simultaneously. (show details) 8.7.0.3 Fibre Channel
SVAPAR-147870 All Critical Occasionally, deleting a thin-clone volume that is deduplicated may result in a single node warmstart and a 1340 event, causing a pool to temporarily go offline. (show details) 8.7.0.3 Data Reduction Pools, Deduplication, Snapshots
SVAPAR-147906 SVC Critical All nodes may warmstart in a SAN Volume Controller cluster consisting of SV3 nodes under heavy load, if a reset occurs on a Fibre Channel adapter used for local node to node communication. (show details) 8.7.0.3 Inter-node messaging
SVAPAR-147978 All Critical A system running 8.7.0 and using policy-based replication may experience additional warmstarts during the recovery from a single node warmstart (show details) 8.7.0.3 Policy-based Replication
SVAPAR-148504 All Critical On a system using asynchronous policy-based replication, a timing window during volume creation may cause node warmstarts on the recovery system. (show details) 8.7.0.3 Policy-based Replication
SVAPAR-149983 All Critical During an upgrade from 8.5.0.10 or higher to 8.6.0 or higher, a medium error on a quorum disk may cause a node warmstart. If the partner node is offline at the same time, this may cause loss of access. (show details) 8.7.0.3 System Update
SVAPAR-151639 All Critical If Two-Person Integrity is in use, multiple node warmstarts may occur when removing a user with remote authentication and an SSH key. (show details) 8.7.0.3 LDAP
SVAPAR-131999 All High Importance Single node warmstart when an NVMe host disconnects from the storage (show details) 8.7.0.3 NVMe
SVAPAR-136677 All High Importance An unresponsive DNS server may cause a single node warmstart and the email process to get stuck. (show details) 8.7.0.3 System Monitoring
SVAPAR-142081 All High Importance If an error occurs during creation of a replication policy, multiple node warmstarts may occur, causing a temporary loss of access to data. (show details) 8.7.0.3 Policy-based Replication
SVAPAR-144000 All High Importance A high number of abort commands from an NVMe host in a short time may cause a Fibre Channel port on the storage to go offline, leading to degraded hosts. (show details) 8.7.0.3 Hosts
SVAPAR-144036 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 High Importance Replacement of an industry standard NVMe drive may fail until both nodes are warmstarted. (show details) 8.7.0.3 Reliability Availability Serviceability
SVAPAR-144070 All High Importance After changing the system name, the iSCSI IQNs may still contain the old system name. (show details) 8.7.0.3 iSCSI
SVAPAR-144272 All High Importance IO processing unnecessarily stalled for several seconds following a node coming online (show details) 8.7.0.3 Performance
SVAPAR-146591 All High Importance Single node asserts may occur in systems using policy-based high availability if the active quorum application is restarted. (show details) 8.7.0.3 Policy-based High availability
SVAPAR-147361 All High Importance If a software upgrade completes at the same time as performance data is being sent to IBM Storage Insights, a single node warmstart may occur. (show details) 8.7.0.3 Call Home, System Monitoring
SVAPAR-148032 All High Importance When using policy-based high availability, a specific, unusual sequence of configuration commands can cause a node warmstart and prevent configuration commands from completing. (show details) 8.7.0.3 Policy-based High availability
SVAPAR-148466 All High Importance During upgrade from 8.6.1 or earlier, to 8.6.2 or later, the lsvdiskfcmappings command can result in a node warmstart if ownership groups are in use. This may result in an outage if the partner node is offline. (show details) 8.7.0.3 System Update
SVAPAR-148495 All High Importance Multiple node warm-starts on systems running v8.7 due to a small timing window when starting a snapshot if the source volume does not have any other snapshots. This issue is more likely to occur on systems using policy-based replication with no user-created snapshots. (show details) 8.7.0.3 Policy-based Replication, Snapshots
SVAPAR-150832 All High Importance Upgrading a FlashSystem 5200 to a FlashSystem 5300 may fail if policy-based replication is enabled. The first node to upgrade may assert repeatedly, preventing a concurrent upgrade. (show details) 8.7.0.3 Reliability Availability Serviceability
SVAPAR-152019 All High Importance A single node assert may occur, potentially leading to the loss of the config node, when running the rmfcmap command with the force flag enabled. This can happen if a vdisk used by both FlashCopy and Remote Copy was previously moved between I/O groups. (show details) 8.7.0.3 FlashCopy
SVAPAR-153236 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance Upgrade from 8.5.x or 8.6.x to 8.7.0 may cause a single node warmstart on systems with USB encryption enabled. This can cause the upgrade to stall and require manual intervention to complete the upgrade - however during the warmstart the partner handles I/O, so there is no loss of access. (show details) 8.7.0.3 Encryption
SVAPAR-153246 All High Importance A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. (show details) 8.7.0.3 Policy-based Replication
SVAPAR-153269 All High Importance A node warmstart may occur due to a stalled FlashCopy mapping, when policy-based replication is used with FlashCopy or snapshots (show details) 8.7.0.3 FlashCopy
SVAPAR-156332 All High Importance Using the GUI to create a clone or thin-clone from a snapshot may fail with a CMMVC1243E error, if the snapshot is in an HA partition. (show details) 8.7.0.3 Policy-based High availability, Snapshots
SVAPAR-123614 SVC Suggested 1300 Error in the error log when a node comes online, caused by a delay between bringing up the physical FC ports and the virtual FC ports (show details) 8.7.0.3 Hot Spare Node
SVAPAR-139118 All Suggested When logged into GUI as a user that is a member of the FlashCopy Administrator group, the GUI does not allow flashcopies to be created and options are greyed out. (show details) 8.7.0.3 Graphical User Interface
SVAPAR-139943 All Suggested A single node warmstart may occur when a host sends a high number of unexpected Fibre Channel frames. (show details) 8.7.0.3 Fibre Channel
SVAPAR-140588 All Suggested A node warmstart may occur due to incorrect processing of NVMe host I/O offload commands (show details) 8.7.0.3 NVMe
SVAPAR-140892 FS7300, FS9200 Suggested Excessive numbers of informational battery reconditioning events may be logged. (show details) 8.7.0.3 Reliability Availability Serviceability
SVAPAR-142193 All Suggested If an IP Replication partnership only has link2 configured, then the GUI Partnership shows type Fibre Channel for the IPV4 connection. (show details) 8.7.0.3 IP Replication
SVAPAR-142194 All Suggested GUI volume creation does not honour the preferred node that was selected. (show details) 8.7.0.3 Graphical User Interface
SVAPAR-144271 SVC Suggested An offline node that is protected by a spare node may take longer than expected to come online. This may result in a temporary loss of Fibre Channel connectivity to the hosts (show details) 8.7.0.3 Hot Spare Node
SVAPAR-145976 FS7300 Suggested On FlashSystem 7300, fan speeds can vary at 3 second intervals even at a constant temperature 8.7.0.3 Reliability Availability Serviceability
SVAPAR-146064 All Suggested Systems using asynchronous policy-based replication may incorrectly log events indicating the recovery point objective (RPO) has been exceeded. (show details) 8.7.0.3 Policy-based Replication
SVAPAR-148643 All Suggested Changing the management IP address on 8.7.0 software does not update the console_IP field in lspartnership and lssystem output. This can cause the management GUI and Storage Insights to display the wrong IP address. (show details) 8.7.0.3 Graphical User Interface
SVAPAR-151101 All Suggested Unable to create new volumes in a volume group with a Policy-based High Availability replication policy using the GUI. The error returned is "The selected volume group is a recovery copy and no new volumes can be created in the group." (show details) 8.7.0.3 Policy-based High availability
SVAPAR-151965 All Suggested The time zone in performance XML files is displayed incorrectly for some timezones during daylight savings time. This can impact performance monitoring tools such as Storage Insights. (show details) 8.7.0.3 System Monitoring
SVAPAR-152076 All Suggested The GUI may notify the user about all new releases, even if the system is configured to notify only for Long-Term Support releases. (show details) 8.7.0.3 Graphical User Interface
SVAPAR-152880 All Suggested The service IP may not be available after a node reboot, if a timeout occurs when the system tries to bring up the IP address. (show details) 8.7.0.3 No Specific Feature
SVAPAR-152912 All Suggested The finderr CLI command may produce no output on 8.7.0, 8.7.1 or 8.7.2. (show details) 8.7.0.3 Command Line Interface
SVAPAR-156179 All Suggested The supported length of client secret for SSO and MFA configurations is limited to 64 characters. (show details) 8.7.0.3 Security
SVAPAR-159795 All Critical On systems using iSER clustering, an issue in the iSER driver could cause simultaneous node warmstarts followed by kernel panics, due to a timing window during disconnect/reconnect. (show details) 8.7.0.2 iSCSI
SVAPAR-140926 All High Importance When a cluster partnership is removed, a timing window can result in an I/O timeout and a node warmstart. (show details) 8.7.0.2 Policy-based Replication
SVAPAR-145355 All High Importance On FS5045 with policy-based high availability or replication, an out-of-memory issue may cause frequent 4110 events. (show details) 8.7.0.2 Policy-based High availability, Policy-based Replication
SVAPAR-146522 All High Importance FlashCopy background copy and cleaning may get stuck after a node restarts. This can also affect Global Mirror with Change Volumes, volume group snapshots, and policy-based replication (show details) 8.7.0.2 FlashCopy, Global Mirror With Change Volumes, Policy-based Replication, Snapshots
SVAPAR-148364 All High Importance Systems whose certificate uses a SHA1 signature may experience node warmstarts. LDAP authentication using servers returning an OCSP response signed with SHA1 may fail. (show details) 8.7.0.2 LDAP, Security
SVAPAR-89331 All High Importance Systems running 8.5.2 or higher using IP replication with compression may have low replication bandwidth and high latency due to an issue with the way the data is compressed. (show details) 8.7.0.2 IP Replication
SVAPAR-138286 All Suggested If a direct-attached controller has NPIV enabled, 1625 errors will incorrectly be logged, indicating a controller misconfiguration. (show details) 8.7.0.2 Backend Storage
SVAPAR-148287 All Suggested On FS9500, FS5xxx or SV3 systems running 8.7.x software, it is not possible to enable USB encryption using the GUI, because the system does not correctly report how many USB devices the key has been written to. The command-line interface is not affected by this issue. (show details) 8.7.0.2 Graphical User Interface
SVAPAR-136256 All HIPER Each ethernet port can only have a single management IP address. Attempting to add a second management IP to the same port may cause multiple node warmstarts and a loss of access to data. (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-140080 All HIPER Tier 2 warmstarts ending with nodes in service state while processing a long list of expired snapshots. (show details) 8.7.0.1 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-131228 All Critical A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up. (show details) 8.7.0.1 Distributed RAID, Encryption, RAID
SVAPAR-135022 All Critical When using Policy Based High Availability, a storage partition can become suspended due to a disagreement in the internal quorum race state between two systems, causing a loss of access to data. (show details) 8.7.0.1 Policy-based Replication
SVAPAR-136427 All Critical When deleting multiple older snapshot versions, whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access. (show details) 8.7.0.1 FlashCopy
SVAPAR-137485 FS5000 Critical Reseating a FlashSystem 50xx node canister at 8.7.0.0 may cause the partner node to reboot, causing temporary loss of access to data. (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-140079 All Critical The internal scheduler is blocked after requesting more flashcopy bitmap memory. This will cause the creation of new snapshots and removal of expired snapshots to fail. (show details) 8.7.0.1 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-141098 All Critical High peak latency causing access loss after recovering from SVAPAR-140079 and SVAPAR-140080. (show details) 8.7.0.1 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-141112 All Critical When using policy-based high availability and volume group snapshots, it is possible for an I/O timeout condition to trigger node warmstarts. This can happen if a system is disconnected for an extended period, and is then brought back online after a large amount of host I/O to the HA volumes. (show details) 8.7.0.1 Policy-based Replication
SVAPAR-141920 All Critical Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery causing brief loss of access to data. (show details) 8.7.0.1 FlashCopy
SVAPAR-142040 All Critical A timing window related to logging of capacity warnings may cause multiple node warmstarts on a system with low free physical capacity on an FCM array. (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-142045 All Critical A system which was previously running pre-8.6.0 software, and is now using policy-based high availability, may experience multiple node warmstarts when a PBHA failover is requested by the user. (show details) 8.7.0.1 Policy-based Replication
SVAPAR-143480 All Critical When using asynchronous policy based replication on low bandwidth links with snapshot clone/restore, an undetected data corruption may occur. This issue only affects 8.7.0.0. (show details) 8.7.0.1 Policy-based Replication
SVAPAR-148049 SVC Critical A config node may warmstart during the failback process of the online_spare node to the spare node after executing 'swapnode -failback' command, resulting in a loss of access. (show details) 8.7.0.1 Hot Spare Node
SVAPAR-111173 All High Importance Loss of access when two drives experience slowness at the same time (show details) 8.7.0.1 RAID
SVAPAR-137512 All High Importance A single-node warmstart may occur during a shrink operation on a thin-provisioned volume. This is caused by a timing window in the cache component. (show details) 8.7.0.1 Cache
SVAPAR-138214 All High Importance When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run. (show details) 8.7.0.1 FlashCopy
SVAPAR-139247 All High Importance Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition. (show details) 8.7.0.1 Thin Provisioning
SVAPAR-139260 All High Importance Heavy write workloads to thin-provisioned volumes may result in poor performance on thin-provisioned volumes, due to a lack of destage resource. (show details) 8.7.0.1 Thin Provisioning
SVAPAR-141684 All High Importance Prevent drive firmware upgrade with both '-force' and '-all' parameters, to avoid multiple drives going offline due to lack of redundancy. (show details) 8.7.0.1 Drives
SVAPAR-144068 All High Importance If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot. (show details) 8.7.0.1 Snapshots
SVAPAR-135742 All Suggested A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs. (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-137906 All Suggested A node warmstart may occur due to a timeout caused by FlashCopy bitmap cleaning, leading to a stalled software upgrade. (show details) 8.7.0.1 FlashCopy, System Update
SVAPAR-138418 All Suggested Snap collections triggered by Storage Insights over cloud callhome time out before they have completed (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-138859 FS5000, FS5100, FS5200 Suggested Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out of memory event causing the GUI process to be killed. (show details) 8.7.0.1 Support Data Collection
SVAPAR-139205 All Suggested A node warmstart may occur due to a race condition between Fibre Channel adapter I/O processing and a link reset. (show details) 8.7.0.1 Fibre Channel
SVAPAR-140994 All Suggested Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes. (show details) 8.7.0.1 Reliability Availability Serviceability
SVAPAR-141019 All Suggested The GUI crashes when a user group with the 3SiteAdmin role and remote users both exist (show details) 8.7.0.1 3-Site using HyperSwap or Metro Mirror, Graphical User Interface
SVAPAR-141467 All Suggested SNMPv3 traps may not be processed properly by the SNMP server configured in the system. (show details) 8.7.0.1 System Monitoring
SVAPAR-141559 All Suggested The GUI shows 'error occurred loading table data' in the volume view after the first login attempt. Volumes are still visible in the 'Volumes by Pool' view. This is triggered when volumes are created with names containing a number between dashes, numbers after dashes, or other characters after numbers (show details) 8.7.0.1 Graphical User Interface
SVAPAR-141876 All Suggested The GUI does not offer the option to create GM or GMCV relationships, even after remote_copy compatibility mode has been enabled. (show details) 8.7.0.1 Global Mirror, Global Mirror With Change Volumes, Graphical User Interface
SVAPAR-141937 All Suggested In a Policy-based high availability configuration, when a SCSI Compare and Write command is sent to the non-Active Management System, and communication is lost between the systems while it is being processed, a node warmstart may occur. (show details) 8.7.0.1 Policy-based Replication
SVAPAR-142093 All Suggested The 'Upload support package' option is missing from the Support Package GUI (show details) 8.7.0.1 Graphical User Interface
SVAPAR-105861 SVC HIPER A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group (show details) 8.7.0.0 FlashCopy, Safeguarded Copy & Safeguarded Snapshots, Volume Mirroring
SVAPAR-116592 All HIPER If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a system that is not a V5000E or FS5000, it may repeatedly warmstart due to a lack of compression resources. (show details) 8.7.0.0 IP Replication
SVAPAR-117738 All HIPER The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-130438 All HIPER Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership. (show details) 8.7.0.0 IP Replication
SVAPAR-131567 FS7300, FS9500, SVC HIPER Node goes offline and enters service state when collecting diagnostic data for 100Gb/s adapters. (show details) 8.7.0.0
SVAPAR-94179 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 HIPER Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However it is possible for this to sometimes result in a reboot on the partner node (show details) 8.7.0.0 Reliability Availability Serviceability
HU02585 All Critical An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring (show details) 8.7.0.0 Backend Storage
SVAPAR-100127 All Critical The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-100564 All Critical On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. (show details) 8.7.0.0 HyperSwap
SVAPAR-100871 FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Critical Removing an NVMe host followed by running the 'lsnvmefabric' command causes a recurring single node warmstart (show details) 8.7.0.0 NVMe
SVAPAR-104533 All Critical Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools (show details) 8.7.0.0 Data Reduction Pools
SVAPAR-105430 All Critical When hardware compression is suspended mid IO to a DRP compressed volume, it may cause the IO to hang until an internal timeout is hit and a node warmstarts. (show details) 8.7.0.0 Compression, Data Reduction Pools
SVAPAR-107270 All Critical If an upgrade from a level below 8.6.x, to 8.6.0 or 8.6.1 commits, whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. (show details) 8.7.0.0 Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-107547 All Critical If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-107734 All Critical Issuing IO to an incremental fcmap volume that is in a stopped state, but has recently been expanded, and also has a partner fcmap, may cause the nodes to restart. (show details) 8.7.0.0 FlashCopy
SVAPAR-110735 All Critical Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy. (show details) 8.7.0.0 FlashCopy, Policy-based Replication
SVAPAR-111257 All Critical If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive. (show details) 8.7.0.0 Drives
SVAPAR-111705 All Critical If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. (show details) 8.7.0.0 FlashCopy
SVAPAR-111994 All Critical Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline. (show details) 8.7.0.0 Compression, Data Reduction Pools, Deduplication
SVAPAR-112007 All Critical Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts. (show details) 8.7.0.0 Command Line Interface
SVAPAR-112107 FS9500, SVC Critical There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. (show details) 8.7.0.0 System Update
SVAPAR-112707 SVC Critical Marking error 3015 as fixed on a SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-112939 All Critical A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. (show details) 8.7.0.0 Cache
SVAPAR-115505 All Critical Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started. (show details) 8.7.0.0 FlashCopy
SVAPAR-120391 All Critical Removing an incremental Flashcopy mapping from a consistency group, after there was a previous error when starting the Flashcopy consistency group that caused a node warmstart, may trigger additional node asserts. (show details) 8.7.0.0 FlashCopy
SVAPAR-120397 All Critical A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-120610 All Critical Loss of access to data when changing the properties of a FlashCopy Map while the map is being deleted (show details) 8.7.0.0 FlashCopy
SVAPAR-123874 All Critical There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period. (show details) 8.7.0.0 Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-123945 All Critical If a system SSL certificate is installed with the extension CA True it may trigger multiple node warmstarts. (show details) 8.7.0.0 Encryption
SVAPAR-125416 All Critical If the vdisk with ID 0 is deleted and then recreated, and is added to a volume group with an HA replication policy, its internal state may become invalid. If a node warmstart or upgrade occurs in this state, this may trigger multiple node warmstarts and loss of access. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-126737 All Critical If a user that does not have SecurityAdmin role runs the command 'rmmdiskgrp -force' on a pool with mirrored VDisks, a T2 recovery may occur. (show details) 8.7.0.0
SVAPAR-126767 All Critical Upgrading to 8.6.0 when iSER clustering is configured, may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured. (show details) 8.7.0.0 iSCSI
SVAPAR-127833 All Critical Temperature warning is reported against the incorrect Secondary Expander Module (SEM) (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-127836 All Critical Running some Safeguarded Copy commands can cause a cluster recovery in some platforms. (show details) 8.7.0.0 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-128052 All Critical A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. (show details) 8.7.0.0 Hosts, NVMe
SVAPAR-128401 FS5000 Critical Upgrade to 8.6.3 may cause loss of access to iSCSI hosts, on FlashSystem 5015 and FlashSystem 5035 systems with a 4-port 10Gb ethernet adapter. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-128626 All Critical A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume. (show details) 8.7.0.0 FlashCopy, Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-128912 All Critical A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple I/O groups, and one of the I/O groups is offline. (show details) 8.7.0.0 FlashCopy, Safeguarded Copy & Safeguarded Snapshots
SVAPAR-128913 All Critical Multiple node asserts after a VDisk copy in a data reduction pool was removed while an IO group is offline and a T2 recovery occurred (show details) 8.7.0.0 Data Reduction Pools
SVAPAR-129298 All Critical A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert. (show details) 8.7.0.0 RAID
SVAPAR-130553 All Critical Converting a 3-Site AuxFar volume to HyperSwap results in multiple node asserts (show details) 8.7.0.0 3-Site using HyperSwap or Metro Mirror, HyperSwap
SVAPAR-131259 All Critical Removal of the replication policy after the volume group was set to be independent exposed an issue that resulted in the FlashCopy internal state becoming incorrect, meaning subsequent FlashCopy actions failed incorrectly. (show details) 8.7.0.0 FlashCopy, Policy-based Replication
SVAPAR-131648 All Critical Multiple node warmstarts may occur when starting an incremental FlashCopy map that uses a replication target volume as its source, and the change volume is used to keep a consistent image. (show details) 8.7.0.0 FlashCopy, Policy-based Replication
SVAPAR-132027 All Critical An incorrect 'acknowledge' status for an initiator SCSI command is sent from the SCSI target side when no sense data was actually transferred. This may cause a node to warmstart. (show details) 8.7.0.0
SVAPAR-133392 All Critical In rare situations involving multiple concurrent snapshot restore operations, an undetected data corruption may occur. (show details) 8.7.0.0
SVAPAR-133442 All Critical When using asynchronous policy based replication in DR test mode, if the DR volume group is put into production use (the volume group is made independent), an undetected data corruption may occur. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-98184 All Critical When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access (show details) 8.7.0.0 FlashCopy, Policy-based Replication
SVAPAR-98612 All Critical Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts (show details) 8.7.0.0 FlashCopy
HU02159 All High Importance A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart (show details) 8.7.0.0 Cache
SVAPAR-100162 All High Importance Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart occurs (show details) 8.7.0.0 Hosts
SVAPAR-100977 All High Importance When a zone containing NVMe devices is enabled, a node warmstart might occur. (show details) 8.7.0.0 NVMe
SVAPAR-102573 All High Importance On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O (show details) 8.7.0.0 Policy-based Replication
SVAPAR-104159 All High Importance Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-104250 All High Importance There is an issue whereby NVMe CaW (Compare and Write) commands can incorrectly go into an invalid state, thereby causing the node to assert to clear the bad condition (show details) 8.7.0.0 Hosts, NVMe
SVAPAR-105727 All High Importance An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised (show details) 8.7.0.0 Volume Mirroring
SVAPAR-106874 FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC High Importance A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-107815 All High Importance There is an issue between 3-Site, whilst adding snapshots on the auxfar site, that causes the node to warmstart (show details) 8.7.0.0 3-Site using HyperSwap or Metro Mirror
SVAPAR-107866 & SVAPAR-110742 All High Importance A system is unable to send email to the email server because the password contains a hash '#' character. (show details) 8.7.0.0
SVAPAR-108715 All High Importance The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-108831 FS9500, SVC High Importance FS9500 and SV3 nodes may not boot with the minimum configuration consisting of at least 2 DIMMS. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-109385 All High Importance When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status resulting in an outage (show details) 8.7.0.0
SVAPAR-110426 All High Importance When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart (show details) 8.7.0.0 Security
SVAPAR-110743 All High Importance Email becoming stuck in the mail queue caused a delay in the 'upgrade commit was finished' message being sent, therefore causing 3 out of 4 nodes to warmstart, and then rejoin the cluster automatically within less than three minutes. (show details) 8.7.0.0 System Update
SVAPAR-110765 All High Importance In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter (show details) 8.7.0.0 3-Site using HyperSwap or Metro Mirror
SVAPAR-110819 & SVAPAR-113122 All High Importance A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process. (show details) 8.7.0.0 Fibre Channel
SVAPAR-111812 All High Importance Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. (show details) 8.7.0.0 Command Line Interface
SVAPAR-111996 FS9500, SVC High Importance After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade. (show details) 8.7.0.0 Reliability Availability Serviceability, System Update
SVAPAR-112119 All High Importance Volumes can go offline due to out of space issues. This can cause the node to warmstart. (show details) 8.7.0.0
SVAPAR-112203 All High Importance A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-112525 All High Importance A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy (show details) 8.7.0.0 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror
SVAPAR-112856 All High Importance Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes. (show details) 8.7.0.0 3-Site using HyperSwap or Metro Mirror, HyperSwap
SVAPAR-115021 All High Importance Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state. (show details) 8.7.0.0 HyperSwap
SVAPAR-115520 All High Importance An unexpected sequence of NVMe host IO commands may trigger a node warmstart. (show details) 8.7.0.0 Hosts, NVMe
SVAPAR-117457 All High Importance A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-117768 All High Importance Cloud Callhome may stop working without logging an error (show details) 8.7.0.0 Call Home
SVAPAR-119799 FS9500, SVC High Importance Inter-node resource queuing on SV3 I/O groups, causes high write response time. (show details) 8.7.0.0 Performance
SVAPAR-120599 All High Importance On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart. (show details) 8.7.0.0 Hosts
SVAPAR-120616 All High Importance After mapping a volume to an NVMe host, a customer is unable to map the same vdisk to a second NVMe host using the GUI; however, it is possible using the CLI. (show details) 8.7.0.0 Hosts
SVAPAR-120630 All High Importance An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP, and the target volume is dedup. (show details) 8.7.0.0 Data Reduction Pools
SVAPAR-120631 All High Importance When a user deletes a vdisk, and if 'chfcmap' is run afterwards against the same vdisk ID, a system recovery may occur. (show details) 8.7.0.0 FlashCopy
SVAPAR-127063 All High Importance Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts (show details) 8.7.0.0 Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Performance
SVAPAR-127825 All High Importance Due to an issue with the Fibre Channel adapter firmware the node may warmstart (show details) 8.7.0.0 Fibre Channel
SVAPAR-127841 All High Importance A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur (show details) 8.7.0.0 FlashCopy
SVAPAR-127845 All High Importance Attempting to create a second I/O group, in the two 'Caching I/O Group' dropdowns on the 'Define Volume Properties' modal of 'Create Volumes', results in error 'CMMVC8709E the iogroups of cache memory storage are not in the same site as the storage groups'. (show details) 8.7.0.0 GUI Fix Procedure, Graphical User Interface
SVAPAR-127869 All High Importance Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk I/O is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk. (show details) 8.7.0.0 Quorum
SVAPAR-127871 All High Importance When performing a manual upgrade of the AUX cluster from 8.1.1.2 to 8.2.1.12, 'lsupdate' incorrectly reports that the code level is 7.7.1.5 (show details) 8.7.0.0 System Update
SVAPAR-128914 All High Importance A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create a HyperSwap volume from a VDisk with existing snapshots (show details) 8.7.0.0 HyperSwap
SVAPAR-129318 All High Importance A Storage Virtualize cluster configured without I/O group 0 is unable to send performance metrics (show details) 8.7.0.0 Performance
SVAPAR-131233 SVC High Importance In an SVC stretched-cluster configuration with multiple I/O groups and policy-based replication, an attempt to create a new volume may fail due to an incorrect automatic I/O group assignment. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-131651 All High Importance Policy-based Replication got stuck after both nodes in the I/O group on a target system restarted at the same time (show details) 8.7.0.0 Policy-based Replication
SVAPAR-132013 All High Importance On a Hyperswap system, the preferred site node can lease expire if the remote site nodes suffered a warmstart. (show details) 8.7.0.0 HyperSwap, Quorum
SVAPAR-137096 All High Importance An issue with the TPM on FS50xx may cause a chsystemcert command to fail. (show details) 8.7.0.0 Command Line Interface
SVAPAR-141996 All High Importance Policy-based replication may not perform the necessary background synchronization to maintain an up to date copy of data at the DR site. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-142191 All High Importance When a child pool contains thin-provisioned volumes, running out of space in the child pool may cause volumes outside the child pool to be taken offline. (show details) 8.7.0.0 Thin Provisioning
SVAPAR-98497 All High Importance Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure (show details) 8.7.0.0 System Monitoring
SVAPAR-98893 All High Importance If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur (show details) 8.7.0.0 Storage Virtualisation
SVAPAR-99175 All High Importance A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once. (show details) 8.7.0.0 Cache
SVAPAR-99354 All High Importance Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot (show details) 8.7.0.0 FlashCopy
SVAPAR-99537 All High Importance If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed (show details) 8.7.0.0 Data Reduction Pools
SVAPAR-99997 All High Importance Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup' (show details) 8.7.0.0 FlashCopy
HU01222 All Suggested FlashCopy entries in the eventlog always have an object ID of 0, rather then show the correct object ID (show details) 8.7.0.0 FlashCopy
SVAPAR-100924 FS9500, SVC Suggested After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated. (show details) 8.7.0.0
SVAPAR-100958 All Suggested A single FCM may incorrectly report multiple medium errors for the same LBA (show details) 8.7.0.0 RAID
SVAPAR-102271 All Suggested Enable IBM Storage Defender integration for Data Reduction Pools (show details) 8.7.0.0 Interoperability
SVAPAR-102382 All Suggested Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short wave SFP is installed when in fact a long wave SFP is installed. (show details) 8.7.0.0 System Monitoring
SVAPAR-106693 FS9500 Suggested Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8 (show details) 8.7.0.0 Support Remote Assist
SVAPAR-107558 All Suggested A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail. (show details) 8.7.0.0 FlashCopy, Global Mirror With Change Volumes, Policy-based Replication
SVAPAR-107733 All Suggested The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if auth passphrase contains special characters, such as '!' (show details) 8.7.0.0
SVAPAR-107852 All Suggested A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations. (show details) 8.7.0.0 IP Quorum
SVAPAR-108469 All Suggested A single node warmstart may occur on nodes configured to use a secured IP partnership (show details) 8.7.0.0 IP Replication
SVAPAR-108476 All Suggested Remote users with public SSH keys configured cannot failback to password authentication. (show details) 8.7.0.0 Security
SVAPAR-108551 All Suggested An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded. (show details) 8.7.0.0 System Update
SVAPAR-109289 All Suggested Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets (show details) 8.7.0.0 Backend Storage
SVAPAR-110059 All Suggested When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail. (show details) 8.7.0.0 Support Data Collection
SVAPAR-110309 All Suggested When a volume group is assigned to an ownership group and has a snapshot policy associated, running the 'lsvolumegroupsnapshotpolicy' or 'lsvolumegrouppopulation' command while logged in as an ownership group user will cause a Config node to warmstart. (show details) 8.7.0.0 FlashCopy
SVAPAR-110745 All Suggested Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-110749 All Suggested When configuring volumes using the wizard, the underlying command called is 'mkvolume' rather than the previous 'mkvdisk' command. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not (show details) 8.7.0.0
SVAPAR-111021 All Suggested Unable to load the resource page in the GUI if IO group ID 0 does not have any nodes. (show details) 8.7.0.0 System Monitoring
SVAPAR-111187 All Suggested There is an issue if the browser language is set to French, that can cause the SNMP server creation wizard not to be displayed. (show details) 8.7.0.0 System Monitoring
SVAPAR-111239 All Suggested In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert (show details) 8.7.0.0 Global Mirror With Change Volumes
SVAPAR-111989 All Suggested Downloading software with a Fix ID longer than 64 characters fails with an error (show details) 8.7.0.0 Support Remote Assist
SVAPAR-111991 All Suggested Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character (show details) 8.7.0.0 IP Replication, Policy-based Replication, vVols
SVAPAR-111992 All Suggested Unable to configure Policy-based Replication using the GUI if the truststore contains blank lines or CRLF line endings (show details) 8.7.0.0 Graphical User Interface, Policy-based Replication
SVAPAR-112243 All Suggested Prior to 8.4.0, NTP was used for time synchronization; in later releases this was changed to 'chronyd'. When upgrading from a lower level to 8.4.0 or higher, systems may experience compatibility issues. (show details) 8.7.0.0
SVAPAR-112711 All Suggested IBM Storage Virtualize user interface code does not respond to a malformed HTTP POST with the expected HTTP 401 message. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-112712 SVC Suggested The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. (show details) 8.7.0.0 Call Home
SVAPAR-113792 All Suggested A node assert may occur when an outbound IPC message, such as an nslookup request to a DNS server, times out (show details) 8.7.0.0
SVAPAR-114081 All Suggested The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-114086 SVC Suggested Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware. (show details) 8.7.0.0 Volume Mirroring
SVAPAR-114145 All Suggested A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node to warmstart. (show details) 8.7.0.0 IP Replication
SVAPAR-116265 All Suggested When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-117663 All Suggested The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-120156 FS5000, FS5100, FS5200, FS7200, FS7300, SVC Suggested An internal process introduced in 8.6.0 to collect iSCSI port statistics can cause host performance to be affected (show details) 8.7.0.0 Performance, iSCSI
SVAPAR-120359 All Suggested Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication (show details) 8.7.0.0 FlashCopy, Policy-based Replication
SVAPAR-120399 All Suggested A host WWPN incorrectly shows as being still logged into the storage when it is not. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-120495 All Suggested A node using the embedded VASA provider can experience performance degradation, potentially leading to a single node warmstart. (show details) 8.7.0.0
SVAPAR-120639 All Suggested A vulnerability scanner may report that cookies were set without the HttpOnly flag. (show details) 8.7.0.0
SVAPAR-120732 All Suggested Unable to expand a vdisk from the GUI, because the constant values for the compressed and regular pool volume disk maximum capacity were incorrect in the constant file. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-120925 All Suggested A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool. (show details) 8.7.0.0 Thin Provisioning
SVAPAR-121334 All Suggested Packets of an unexpected size are received on the Ethernet interface. This causes the internal buffers to become full, causing a node to warmstart to clear the condition (show details) 8.7.0.0 NVMe
SVAPAR-122411 All Suggested A node may assert when a vdisk has been expanded and the rehome process has not been made aware of the possible change in the number of regions it may have to rehome. (show details) 8.7.0.0 Data Reduction Pools
SVAPAR-123644 All Suggested A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-126742 All Suggested A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix. (show details) 8.7.0.0 Compression, Data Reduction Pools
SVAPAR-127835 All Suggested A node may warmstart due to invalid RDMA receive size of zero. (show details) 8.7.0.0 NVMe
SVAPAR-127844 All Suggested A snapshot policy cannot be assigned, and error message CMMVC9893E is displayed. (show details) 8.7.0.0 FlashCopy
SVAPAR-128010 FS7300, FS9500 Suggested A node warmstart can sometimes occur due to a timeout on certain Fibre Channel adapters (show details) 8.7.0.0 Fibre Channel
SVAPAR-128414 FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC Suggested Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed. (show details) 8.7.0.0 Compression, FlashCopy
SVAPAR-129111 All Suggested In the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-130646 All Suggested False positive Recovery Point Objective (RPO) exceeded events (52004) are reported for volume groups configured with Policy-Based Replication (show details) 8.7.0.0 Policy-based Replication
SVAPAR-131212 All Suggested The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field (show details) 8.7.0.0 Policy-based Replication
SVAPAR-131250 All Suggested The system may not correctly balance fibre channel workload over paths to a back end controller. (show details) 8.7.0.0 Backend Storage
SVAPAR-131807 All Suggested The orchestrator for Policy-Based Replication is not running, preventing replication from being configured. Attempting to configure replication may cause a single node warmstart. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-131865 All Suggested A system may encounter communication issues when being configured with IPv6. (show details) 8.7.0.0
SVAPAR-131993 All Suggested The IPv6 GUI field has been extended to accommodate the full length of the IPv6 address. (show details) 8.7.0.0 Graphical User Interface
SVAPAR-131994 All Suggested When implementing Safeguarded Copy, the associated child pool may run out of space, which can cause multiple Safeguarded Copies to go offline. This can cause the node to warmstart. (show details) 8.7.0.0 Safeguarded Copy & Safeguarded Snapshots
SVAPAR-132001 All Suggested Unexpected lease expiries may occur when half of the nodes in the system start up, one after another in a short time. (show details) 8.7.0.0
SVAPAR-132003 All Suggested A node may warmstart when an internal process to collect information from Ethernet ports takes longer than expected. (show details) 8.7.0.0 IP Replication, iSCSI
SVAPAR-132011 All Suggested In rare situations, a host's WWPN may show incorrectly as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded. (show details) 8.7.0.0 Fibre Channel, Reliability Availability Serviceability
SVAPAR-132062 All Suggested vVols are reported as inaccessible due to a 30 minute timeout if the VASA provider is unavailable (show details) 8.7.0.0 vVols
SVAPAR-132072 All Suggested A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host. (show details) 8.7.0.0 Fibre Channel
SVAPAR-143574 All Suggested It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted. (show details) 8.7.0.0 Reliability Availability Serviceability
SVAPAR-89271 All Suggested Policy-based Replication is not achieving the link_bandwidth_mbits configured on the partnership if only a single volume group is replicating in an I/O group, or workload is not balanced equally between volume groups owned by both nodes. (show details) 8.7.0.0 Policy-based Replication
SVAPAR-95384 All Suggested In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication (show details) 8.7.0.0 Policy-based Replication
SVAPAR-96777 All Suggested Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO (show details) 8.7.0.0 Policy-based Replication
SVAPAR-96952 All Suggested A single node warmstart may occur when updating the login counts associated with a backend controller. (show details) 8.7.0.0 Backend Storage
SVAPAR-97502 All Suggested Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings (show details) 8.7.0.0 Policy-based Replication
SVAPAR-98128 All Suggested A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters (show details) 8.7.0.0 System Update
SVAPAR-98576 All Suggested Customers cannot edit certain properties of a FlashCopy mapping via the GUI FlashCopy mappings panel, because the edit dialog does not appear. (show details) 8.7.0.0 FlashCopy, Graphical User Interface
SVAPAR-98611 All Suggested The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host (show details) 8.7.0.0 Interoperability

4. Useful Links

Description Link
Product Documentation
Concurrent Compatibility and Code Cross Reference
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Configuration Limits
IBM Storage Virtualize Policy-based replication and High Availability Compatibility Cross Reference
Storage Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning
IBM Storage Virtualize IP quorum application requirements