Release Note for systems built with IBM Storage Virtualize


This is the release note for the 8.6.1 release and details the issues resolved in 8.6.1.0. This document will be updated with additional information whenever a PTF is released.

Note: This release is a Non-Long Term Support (Non-LTS) release. Non-LTS code levels will not receive regular fix packs. Fixes for issues introduced in a Non-LTS release will be delivered in a subsequent Non-LTS or LTS release. If issues are encountered, the only resolution is likely to be an upgrade to a later LTS or Non-LTS release.

This document was last updated on 28 October 2024.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.6.1 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.

Systems with RoCE adapters cannot have an MTU greater than 1500 on 8.6.0.0 or later. The workaround is to reduce the MTU to 1500.

This restriction has now been lifted in 8.6.1.0: systems with RoCE adapters using the iSCSI protocol can set the MTU to 9000 (see the sketch below). Systems using the NVMe/TCP or NVMe/RDMA protocols remain restricted to an MTU of 1500.

Introduced: 8.6.0.0
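
A minimal CLI sketch for raising the iSCSI MTU after upgrading to 8.6.1.0 (the node names and port ID are placeholders; verify the 'cfgportip' syntax for your release before running it):

  lsportip                           # confirm the current MTU on each Ethernet port
  cfgportip -node node1 -mtu 9000 1  # set an MTU of 9000 on port 1 of node1 (iSCSI only)
  cfgportip -node node2 -mtu 9000 1  # repeat for the partner node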

Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8. Attempting to enable RSA will fail with: CMMVC8292E The command failed because the feature is not supported on this platform.

Customers should contact IBM Support for an ifix.

Introduced: 8.6.0.0

Upgrade to 8.5.1 or higher is not currently supported on systems with Data Reduction Pools, to avoid SVAPAR-105430.

On a small proportion of systems, this issue can cause a node warmstart when specific data patterns are written to compressed volumes. This restriction will be lifted in a later PTF.

Introduced: 8.5.1.0

VMware Virtual Volumes (vVols) are not supported using IBM Spectrum Connect on 8.6.1 or later.

Systems using Spectrum Connect must migrate to the embedded VASA provider on version 8.6.0 before upgrading to 8.6.1 or later.

Introduced: 8.6.1.0

Systems using VMware Virtual Volumes (vVols) may require reconfiguration before updating to 8.6.1 or later.

Refer to 'Updating to Storage Virtualize 8.6.1 or later using VMware Virtual Volumes (vVols)'.

Introduced: 8.6.1.0

All I/O groups must be configured with FC target port mode set to 'enabled' before upgrading to 8.6.1.0 or later.

Enabling the FC target port mode configures multiple WWPNs per port (using Fibre Channel NPIV technology), and separates host traffic from all other traffic onto different WWPNs.

Important: Changing this setting is likely to require changes to zoning, and rescanning of LUNs in all applications.

The product documentation contains topics called 'Enabling NPIV on an existing system' and 'Enabling NPIV on a new system', which contain details about how to make the changes; a brief CLI sketch follows below.

Introduced: 8.6.1.0
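
A minimal CLI sketch of the NPIV enablement flow, shown for I/O group 0 (the transitional step and the required zoning changes are covered fully in the documentation topics named above):

  lsiogrp 0                                 # check the current fctargetportmode
  chiogrp -fctargetportmode transitional 0  # present NPIV target ports alongside existing ports
  chiogrp -fctargetportmode enabled 0       # complete the change once host paths are confirmed

Repeat for each I/O group, verifying host paths at each step before proceeding.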

IBM i is not supported as a host operating system when connected using directly attached Fibre Channel to FlashSystem or SAN Volume Controller systems running 8.6.1.0 or later versions.

IBM i is supported as a host operating system when connected to FlashSystem or SAN Volume Controller systems via a Fibre Channel switch.

Introduced: 8.6.1.0

8.6.1 removes support for the CIM protocol. Applications that connect using the CIM protocol should be upgraded to use a supported interface, such as the REST interface (see the sketch below).

IBM recommends that any product teams currently using the CIM protocol comment on this Idea, and IBM will contact you with more details about how to use a supported interface.

Introduced: 8.6.1.0
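
As an illustration, a hedged sketch of using the REST interface in place of CIM (the system IP, credentials and token are placeholders, and endpoint paths may vary by release; consult the REST API documentation for your code level):

  # authenticate to obtain a token
  curl -k -X POST https://<system_ip>:7443/rest/auth \
       -H 'X-Auth-Username: <user>' -H 'X-Auth-Password: <password>'

  # run a command, for example lssystem, using the returned token
  curl -k -X POST https://<system_ip>:7443/rest/lssystem \
       -H 'X-Auth-Token: <token>'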

Mapping more than 512 volumes to a single host might fail with CMMVC5876E after upgrading to 8.6.1.0, even if the configuration is supported.

Warmstarting any node in the system will resolve the issue, allowing the supported number of volume mappings per host object to be created (see the sketch below).

Introduced: 8.6.1.0
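
A node warmstart can be requested from the service CLI; a minimal sketch, assuming the node's panel name has been identified (warmstart a single node only, and confirm it rejoins the cluster):

  sainfo lsservicenodes                    # identify the panel name of a node
  satask stopnode -warmstart <panel_name>  # request a warmstart of that node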

8.6.1 introduces support for Policy-based High Availability, using two storage systems. Asynchronous Policy-based Replication is not supported in this release; systems requiring this feature should use 8.6.0.

Introduced: 8.6.1.0

Systems using the 3-site orchestrator cannot upgrade to 8.7.0.0.

Introduced: 8.6.1.0

The following restrictions were valid but have now been lifted.


3. Issues Resolved

This release contains all of the fixes included in the 8.6.0.0 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in

3.2 APARs Resolved

SVAPAR-94179
Affected Products: FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500
Severity: HIPER
Description: Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However, it is possible for this to sometimes result in a reboot on the partner node.
Symptom: Loss of Access to Data
Environment: All FlashSystems and V7000 Gen3, but not SVC
Trigger: Node hardware fault
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Reliability Availability Serviceability

HU02585
Affected Products: All
Severity: Critical
Description: An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery.
Symptom: Multiple Node Warmstarts
Environment: None
Trigger: An unstable connection between the Storage Virtualize system and an external virtualized storage system can cause objects to be discovered out of order, resulting in a cluster recovery
Workaround: Stabilise the SAN fabric by replacing any failing hardware, such as a faulty SFP
Resolved in: 8.6.1.0
Feature Tags: Backend Storage

SVAPAR-100127
Affected Products: All
Severity: Critical
Description: The Service Assistant GUI node rescue option incorrectly performs the node rescue on the local node, instead of the node selected in the GUI.
Symptom: Single Node Warmstart
Environment: Any cluster running 8.5.0.0 code or above
Trigger: This problem can happen if the user is on the Service Assistant GUI of one node but selects another node for node rescue. The node rescue will be performed on the local node, not the node selected
Workaround: Use the CLI command 'satask rescuenode -force <node-panel-id>' to select the correct node for the node rescue, or log on to the Service GUI of the node that requires the rescue, if it is accessible, so that the node in need is the local node
Resolved in: 8.6.1.0
Feature Tags: Graphical User Interface

SVAPAR-100564
Affected Products: All
Severity: Critical
Description: On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has HyperSwap volumes mapped to it.
Symptom: Multiple Node Warmstarts
Environment: HyperSwap cluster on 8.6.0.0 with HyperSwap volumes mapped to one or more hosts
Trigger: Attempting to remove the site ID from a host that has HyperSwap volumes mapped to it
Workaround: Convert all the mapped HyperSwap volumes to basic volumes, then remove the site ID
Resolved in: 8.6.1.0
Feature Tags: HyperSwap

SVAPAR-98184
Affected Products: All
Severity: Critical
Description: When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access.
Symptom: Loss of Access to Data
Environment: Systems using Volume Group Snapshot clones and Policy-based Replication
Trigger: Changing an affected Policy-based Replication volume group to independent access
Workaround: Wait for the clone to complete before adding the volumes to a replication policy
Resolved in: 8.6.1.0
Feature Tags: FlashCopy, Policy-based Replication

SVAPAR-98612
Affected Products: All
Severity: Critical
Description: Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts.
Symptom: Loss of Access to Data
Environment: Systems using volume group snapshots
Trigger: Using an invalid I/O group value when creating a volume group snapshot
Workaround: Make sure that you specify the correct I/O group value
Resolved in: 8.6.1.0
Feature Tags: FlashCopy

SVAPAR-100162
Affected Products: All
Severity: High Importance
Description: Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode page; if the storage receives it, a warmstart occurs.
Symptom: Single Node Warmstart
Environment: Any cluster running 8.4.0.0 or higher
Trigger: A host uses 'mode select page 7'
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Hosts

SVAPAR-100977
Affected Products: All
Severity: High Importance
Description: When a zone containing NVMe devices is enabled, a node warmstart might occur.
Symptom: Single Node Warmstart
Environment: Any system running 8.5.0.5
Trigger: Enabling a zone with a host that has approximately 1,000 vdisks mapped
Workaround: Make sure that the created zone does not contain NVMe devices
Resolved in: 8.6.1.0
Feature Tags: NVMe

SVAPAR-102573
Affected Products: All
Severity: High Importance
Description: On systems using Policy-based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O.
Symptom: Performance
Environment: Systems using Policy-based Replication and Volume Group Snapshots
Trigger: Snapshot mappings with a low cleaning workload
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Policy-based Replication

SVAPAR-98497
Affected Products: All
Severity: High Importance
Description: Excessive SSH logging may cause the configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure.
Symptom: Configuration
Environment: Any system that is being monitored by an external monitoring system
Trigger: Customers using external monitoring systems, such as Zabbix, that use SSH to log in multiple times a second may be affected
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: System Monitoring

SVAPAR-98893
Affected Products: All
Severity: High Importance
Description: If an external storage controller has over-provisioned storage (for example, a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur.
Symptom: Single Node Warmstart
Environment: Systems running 8.6.0.0 only, with an external controller that has over-provisioned storage
Trigger: None
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Storage Virtualisation

SVAPAR-99537
Affected Products: All
Severity: High Importance
Description: If a HyperSwap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed.
Symptom: Configuration
Environment: Systems with DRP child pools and FCM storage
Trigger: Creating a change volume in a DRP child pool when the parent pool contains FCMs
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Data Reduction Pools

SVAPAR-99997
Affected Products: All
Severity: High Importance
Description: Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup'.
Symptom: Configuration
Environment: Systems using Volume Group Snapshots
Trigger: Creating a volume group from a snapshot whose index is greater than 255
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: FlashCopy

SVAPAR-100958
Affected Products: All
Severity: Suggested
Description: A single FCM may incorrectly report multiple medium errors for the same LBA.
Symptom: Performance
Environment: Predominantly FCM2, but could also affect other FCM generations
Trigger: None
Workaround: After the problem is detected, manually fail the FCM, format it, and then insert it back into the array (see the sketch below). After the copyback has completed, ensure that all FCMs are updated to the recommended firmware level
Resolved in: 8.6.1.0
Feature Tags: RAID

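One possible CLI sequence for this workaround, assuming the drive ID of the affected FCM is known (a sketch only; confirm the exact procedure with IBM Support before acting on a production array):

  lsdrive                            # identify the drive ID of the affected FCM
  chdrive -use failed <drive_id>     # fail the FCM out of the array
  chdrive -use candidate <drive_id>  # make it a candidate so that it can be formatted
  chdrive -task format <drive_id>    # format the drive before it is returned to the array
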
SVAPAR-102271
Affected Products: All
Severity: Suggested
Description: Enable IBM Storage Defender integration for Data Reduction Pools.
Symptom: None
Environment: None
Trigger: None
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Interoperability

SVAPAR-95384
Affected Products: All
Severity: Suggested
Description: In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using Policy-based Replication.
Symptom: Single Node Warmstart
Environment: Any system configured with Policy-based Replication
Trigger: Running the 'mkvolume' command
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Policy-based Replication

SVAPAR-96777
Affected Products: All
Severity: Suggested
Description: Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO.
Symptom: Loss of Redundancy
Environment: Any system running Policy-based Replication
Trigger: Journal purge with Policy-based Replication, e.g. a link issue or performance issue
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Policy-based Replication

SVAPAR-97502
Affected Products: All
Severity: Suggested
Description: Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings.
Symptom: None
Environment: This issue can only be triggered when using Policy-based Replication with standard pools. The issue does not occur in DRP environments
Trigger: Systems that use Policy-based Replication within a standard pool while running 8.5.2.0 - 8.6.0.0
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Policy-based Replication

SVAPAR-98128
Affected Products: All
Severity: Suggested
Description: A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb Ethernet adapters.
Symptom: Single Node Warmstart
Environment: SA2 nodes with 25Gb Ethernet adapters
Trigger: Upgrading to 8.6.0.0
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: System Update

SVAPAR-98576
Affected Products: All
Severity: Suggested
Description: Customers cannot edit certain properties of a FlashCopy mapping via the GUI FlashCopy mappings panel, because the edit modal does not appear.
Symptom: Configuration
Environment: None
Trigger: None
Workaround: Use the CLI instead
Resolved in: 8.6.1.0
Feature Tags: FlashCopy, Graphical User Interface

SVAPAR-98611
Affected Products: All
Severity: Suggested
Description: The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host.
Symptom: Loss of Access to Data
Environment: AIX hosts
Trigger: Trying to access an unmapped VDisk from an AIX host
Workaround: None
Resolved in: 8.6.1.0
Feature Tags: Interoperability

4. Useful Links

Product Documentation
Update Matrices, including detailed build version numbers
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Storage Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning