Release Note for systems built with IBM Storage Virtualize


This is the release note for the 8.6.3 release and details the issues resolved in 8.6.3.0. This document will be updated with additional information whenever a PTF is released.

Note: This release is a Non-Long Term Support (Non-LTS) release. Non-LTS code levels will not receive regular fix packs. Fixes for issues introduced in a Non-LTS release will be delivered in a subsequent Non-LTS or LTS release. If issues are encountered, the only resolution is likely to be an upgrade to a later LTS or Non-LTS release.

This document was last updated on 28 October 2024.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    3.1 Security Issues Resolved
    3.2 APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.6.3.0 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.

Support for Java 7 has now been removed. The IP quorum application now requires Java version 8 or higher (a quick version check is sketched below).

Introduced in 8.6.3.0
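
For example, the Java version can be confirmed on the host running the IP quorum application. A minimal sketch, assuming a Linux host with Java on the PATH:

  # Check the installed Java version; the reported major version
  # must be 8 (shown as 1.8 on older builds) or higher
  java -version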

The 3-site orchestrator is not compatible with the new SSH security level 4. This will be resolved in a future release of the orchestrator.

Introduced in 8.6.2.1

Due to a known issue, array expansion of DRAID1 arrays is not supported on this code level.

Introduced in 8.6.2.0

VMware Virtual Volumes (vVols) are not supported using IBM Spectrum Connect on 8.6.1 or later.

Systems using Spectrum Connect must migrate to use the embedded VASA provider on version 8.6.0 before upgrading to 8.6.1 or later.

Introduced in 8.6.1.0

Systems using VMware Virtual Volumes (vVols) may require reconfiguration before updating to 8.6.1 or later.

Refer to 'Updating to Storage Virtualize 8.6.1 or later using VMware Virtual Volumes (vVols)'.

Introduced in 8.6.1.0

All IO groups must be configured with FC target port mode set to 'enabled' before upgrading to 8.6.1.0 or later.

Enabling the FC target port mode configures multiple WWPNs per port (using Fibre Channel NPIV technology), and separates host traffic from all other traffic onto different WWPNs.

Important: Changing this setting is likely to require changes to zoning, and rescanning LUNs in all applications.

The product documentation contains topics called 'Enabling NPIV on an existing system' and 'Enabling NPIV on a new system', which contain details about how to make the changes (a CLI sketch follows below).

Introduced in 8.6.1.0
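
The following is a minimal CLI sketch of the procedure, assuming IO group 0; the documentation topics above are the authoritative reference. The 'transitional' mode exposes the new host WWPNs alongside the existing ones, so zoning can be updated before the final switch:

  lsiogrp 0                                  # check the current fctargetportmode
  chiogrp -fctargetportmode transitional 0   # expose host WWPNs alongside existing ones
  # ...update switch zoning and rescan LUNs on all hosts...
  chiogrp -fctargetportmode enabled 0        # complete the change to 'enabled'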

8.6.1 removes support for the CIM protocol. Applications that connect using the CIM protocol should be upgraded to use a supported interface, such as the REST interface.

IBM recommends that any product teams currently using the CIM protocol comment on this Idea, and IBM will contact you with more details about how to use a supported interface (a REST example follows below).

Introduced in 8.6.1.0
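
As an illustration, a script that previously queried the system over CIM could use the REST API instead. A minimal sketch, assuming the default REST port 7443 and placeholder system address and credentials:

  # Authenticate; the response contains an authentication token
  curl -k -X POST https://<system_ip>:7443/rest/auth \
       -H 'X-Auth-Username: <user>' -H 'X-Auth-Password: <password>'

  # Run a command (for example lssystem) using the returned token
  curl -k -X POST https://<system_ip>:7443/rest/lssystem \
       -H 'X-Auth-Token: <token>'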

Systems using the 3-site orchestrator cannot upgrade to 8.7.0.0.

Introduced in 8.6.1.0

The following restrictions were valid in previous releases, but have now been lifted:

Removing a vVol child pool was not supported.

This restriction has been lifted in 8.6.3.0

Introduced in 8.6.2.0

IBM i was not supported as a host operating system when connected using directly attached Fibre Channel.

This restriction has now been lifted in 8.6.3.0.

Introduced in 8.6.1.0

3. Issues Resolved

This release contains all of the fixes included in the 8.6.0.0, 8.6.1.0 and 8.6.2.0 releases, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier    Link for additional information    Resolved in
CVE-2023-48795    7154643                            8.6.3.0

3.2 APARs Resolved

SVAPAR-117738 (All, HIPER): The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive.
Symptom: Loss of Access to Data
Environment: Systems running 8.6.2
Trigger: None
Workaround: Reboot the node to bring it online.
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)

SVAPAR-112707 (SVC, Critical): Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details, refer to this Flash.
Symptom: Loss of Access to Data
Environment: Systems containing 214x-SV3 nodes that have been downgraded from 8.6 to 8.5
Trigger: Marking the 3015 error as fixed
Workaround: Do not attempt to repair the 3015 error; contact IBM Support
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)

SVAPAR-112939 (All, Critical): A loss of disk access on one pool may cause IO to hang on a different pool, due to a cache messaging hang.
Symptom: Loss of Access to Data
Environment: Systems with multiple storage pools.
Trigger: Loss of disk access to one pool.
Workaround: None
Resolved in: 8.6.3.0 (Cache)

SVAPAR-115505 (All, Critical): Expanding a volume in a FlashCopy map, and then creating a dependent incremental forward and reverse FlashCopy map, may cause a dual node warmstart when the incremental map is started.
Symptom: Loss of Access to Data
Environment: Systems using incremental reverse FlashCopy mappings.
Trigger: Expanding a volume in a FlashCopy map, then creating and starting a dependent incremental forward and reverse FlashCopy map.
Workaround: None
Resolved in: 8.6.3.0 (FlashCopy)

SVAPAR-120391 (All, Critical): Removing an incremental FlashCopy mapping from a consistency group, after a previous error when starting the consistency group caused a node warmstart, may trigger additional node asserts.
Symptom: Multiple Node Warmstarts
Environment: Systems using incremental copy consistency groups.
Trigger: Removing an incremental FlashCopy mapping from a consistency group after a previous error when starting the FlashCopy consistency group caused a node warmstart.
Workaround: None
Resolved in: 8.6.3.0 (FlashCopy)

SVAPAR-120397 (All, Critical): A node may not shut down cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery.
Symptom: Loss of Access to Data
Environment: Systems with 25Gb Ethernet adapters.
Trigger: Loss of power to the system.
Workaround: None
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)

SVAPAR-141094 (All, Critical): On power failure, FS50xx systems with 25Gb RoCE adapters may fail to shut down gracefully, causing loss of cache data.
Symptom: Loss of Access to Data
Environment: FS50xx systems with 25Gb RoCE adapters
Trigger: Power failure
Workaround: None
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)

SVAPAR-109385 (All, High Importance): When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage.
Symptom: Loss of Access to Data
Environment: Any FlashSystem
Trigger: This can occur during upgrade; however, this aspect is to be confirmed.
Workaround: Remove the failing node, then reboot the asserting node.
Resolved in: 8.6.3.0

SVAPAR-111812 (All, High Importance): Systems with 8.6.0 or later software may fail to complete lsvdisk commands if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
Symptom: Configuration
Environment: Systems with 8.6.0 or later software.
Trigger: Unusual use of nested svcinfo commands on the CLI.
Workaround: Avoid nested svcinfo commands (see the illustrative example below).
Resolved in: 8.6.3.0 (Command Line Interface)
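
An illustrative example of the pattern to avoid (the exact commands are hypothetical; the point is feeding one lsvdisk invocation into another within a single SSH session):

  # Problematic: nested/piped lsvdisk calls in one SSH session
  lsvdisk -nohdr | while read id rest; do lsvdisk $id; done

  # Safer: capture the listing once, then query individual volumes in separate sessions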

SVAPAR-112856 (All, High Importance): Conversion of HyperSwap volumes to 3-site consistency groups will increase the write response time of the HyperSwap volumes.
Symptom: Performance
Environment: Any system running HyperSwap and 3-Site
Trigger: Conversion of HyperSwap volumes to 3-site consistency groups
Workaround: Manually increase the rsize of the HyperSwap change volumes before conversion to 3-site consistency groups (a CLI sketch follows below).
Resolved in: 8.6.3.0 (3-Site using HyperSwap or Metro Mirror, HyperSwap)
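
A sketch of the workaround, assuming a thin-provisioned change volume named 'vdisk0_cv' and a 20GB increase; 'expandvdisksize -rsize' grows the real capacity without changing the volume's virtual size:

  expandvdisksize -rsize 20 -unit gb vdisk0_cv   # increase real capacity by 20GB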

SVAPAR-115021 (All, High Importance): Software validation checks can trigger a T2 recovery when attempting to move a HyperSwap vdisk into or out of the nocachingiogrp state.
Symptom: Loss of Access to Data
Environment: Any system that is configured for HyperSwap
Trigger: Invoking the 'movevdisk' command with the '-nocachingiogrp' flag in a HyperSwap environment
Workaround: None
Resolved in: 8.6.3.0 (HyperSwap)

SVAPAR-117457 (All, High Importance): A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
Symptom: Multiple Node Warmstarts
Environment: Any system that uses Policy-based Replication
Trigger: None
Workaround: None
Resolved in: 8.6.3.0 (Policy-based Replication)

SVAPAR-117768 (All, High Importance): Cloud Call Home may stop working without logging an error.
Symptom: Configuration
Environment: Systems running 8.6.0 or higher that send data to Storage Insights without using the data collector are most likely to hit this issue.
Trigger: None
Workaround: Cloud Call Home can be disabled and then re-enabled to restart it if it has failed (see the sketch below).
Resolved in: 8.6.3.0 (Call Home)
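
A sketch of the workaround, assuming the 'lscloudcallhome' and 'chcloudcallhome' commands are available on the installed code level:

  lscloudcallhome              # check the current Cloud Call Home status
  chcloudcallhome -disable     # disable Cloud Call Home
  chcloudcallhome -enable      # re-enable it to restart the function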

SVAPAR-120599 (All, High Importance): On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart.
Symptom: Multiple Node Warmstarts
Environment: Systems running 8.6.2.0
Trigger: Very high I/O workload
Workaround: None
Resolved in: 8.6.3.0 (Hosts)

SVAPAR-120616 (All, High Importance): After mapping a volume to an NVMe host, it is not possible to map the same vdisk to a second NVMe host using the GUI; however, it is possible using the CLI.
Symptom: None
Environment: Any system where the same vdisk is mapped to different NVMe hosts via the GUI can hit this issue.
Trigger: Mapping the same vdisk to different NVMe hosts via the GUI.
Workaround: Use the CLI
Resolved in: 8.6.3.0 (Hosts)

SVAPAR-120630 (All, High Importance): An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in a Data Reduction Pool, while FlashCopy is running at a high copy rate within the pool and the target volume is deduplicated.
Symptom: Offline Volumes
Environment: Any system running FlashCopy with a deduplicated target volume in a Data Reduction Pool.
Trigger: None
Workaround: None
Resolved in: 8.6.3.0 (Data Reduction Pools)

SVAPAR-120631 (All, High Importance): When a user deletes a vdisk, and 'chfcmap' is then run against the same vdisk ID, a system recovery may occur.
Symptom: Loss of Access to Data
Environment: Any system configured with FlashCopy
Trigger: Running the 'chfcmap' command against a deleting vdisk.
Workaround: Do not run 'chfcmap' against a deleting vdisk ID.
Resolved in: 8.6.3.0 (FlashCopy)

HU01222 (All, Suggested): FlashCopy entries in the event log always have an object ID of 0, rather than showing the correct object ID.
Symptom: None
Environment: Any system configured with FlashCopy groups
Trigger: None
Workaround: Use the 'Info' event nearest to the 'config' event to determine which FlashCopy group was stopped.
Resolved in: 8.6.3.0 (FlashCopy)

SVAPAR-112712 (SVC, Suggested): The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
Symptom: None
Environment: SVC clusters that have been upgraded from CG8 hardware.
Trigger: Upgrading the SVC cluster
Workaround: None
Resolved in: 8.6.3.0 (Call Home)

SVAPAR-113792 (All, Suggested): A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out.
Symptom: Single Node Warmstart
Environment: Any system running 8.6.0.x or higher
Trigger: None
Workaround: None
Resolved in: 8.6.3.0

SVAPAR-114086 (SVC, Suggested): Incorrect IO group memory policing for volume mirroring in the GUI on SVC SV3 hardware.
Symptom: Configuration
Environment: 2145-SV3 hardware
Trigger: Attempting to increase the volume mirroring memory allocation in the GUI.
Workaround: Perform the action via the CLI instead.
Resolved in: 8.6.3.0 (Volume Mirroring)

SVAPAR-116265 (All, Suggested): When upgrading memory on a node, it may repeatedly reboot if it was not removed from the cluster before being shut down and having additional memory added.
Symptom: Multiple Node Warmstarts
Environment: GEN3 or newer node hardware.
Trigger: Not removing the node from the cluster before shutting it down and adding additional memory.
Workaround: Remove the node from the cluster before shutting it down and adding additional memory (a CLI outline follows below).
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)
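
A hedged outline of the workaround on SVC-style hardware (node IDs and panel names are placeholders; FlashSystem canisters use the equivalent rmnodecanister/addnodecanister commands):

  rmnode <node_id>             # remove the node from the cluster first
  # ...power the node down, add the memory, power it back on...
  lsnodecandidate              # find the candidate node's panel name
  addnode -panelname <panel_name> -iogrp <iogrp_id>   # re-add the node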

SVAPAR-117663 (All, Suggested): The last backup time for a Safeguarded volume group within the Volume Groups view does not display the correct time.
Symptom: None
Environment: None
Trigger: None
Workaround: None
Resolved in: 8.6.3.0 (Graphical User Interface)

SVAPAR-120359 (All, Suggested): Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication.
Symptom: Single Node Warmstart
Environment: Systems using FlashCopy maps on volumes configured for Policy-based Replication
Trigger: The single node warmstart has a low risk of occurring if policy-based replication runs in cycling mode.
Workaround: Make volume groups with replication policies independent, or stop the partnership
Resolved in: 8.6.3.0 (FlashCopy, Policy-based Replication)

SVAPAR-120399 (All, Suggested): A host WWPN incorrectly shows as still being logged in to the storage when it is not.
Symptom: Configuration
Environment: Systems using Fibre Channel host connections.
Trigger: Disabling or removing a host Fibre Channel connection.
Workaround: None
Resolved in: 8.6.3.0 (Reliability Availability Serviceability)

SVAPAR-120495 (All, Suggested): A node can experience performance degradation when using the embedded VASA provider, potentially leading to a single node warmstart.
Symptom: Single Node Warmstart
Environment: Systems running with the embedded VASA provider.
Trigger: None
Workaround: None
Resolved in: 8.6.3.0

SVAPAR-120610 (All, Suggested): Excessive 'chfcmap' commands can result in multiple node warmstarts.
Symptom: Multiple Node Warmstarts
Environment: Any system configured with FlashCopy.
Trigger: Running excessive 'chfcmap' commands
Workaround: None
Resolved in: 8.6.3.0 (FlashCopy)

SVAPAR-120639 (All, Suggested): A vulnerability scanner may report that cookies are set without the HttpOnly flag.
Symptom: Configuration
Environment: On port 442, the secure flag and the HttpOnly flag are not set on the SSL cookie.
Trigger: None
Workaround: None
Resolved in: 8.6.3.0

SVAPAR-120732 (All, Suggested): Unable to expand a vdisk from the GUI, because the constant values for the maximum capacity of compressed and regular pool volumes were incorrect in the constants file.
Symptom: Configuration
Environment: IBM FlashSystem
Trigger: None
Workaround: Perform the action via the CLI
Resolved in: 8.6.3.0 (Graphical User Interface)

SVAPAR-120925 (All, Suggested): A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool.
Symptom: Single Node Warmstart
Environment: Systems with thin provisioned volumes in a traditional pool.
Trigger: None
Workaround: None
Resolved in: 8.6.3.0 (Thin Provisioning)

4. Useful Links

  • Product Documentation
  • Update Matrices, including detailed build version numbers
  • Support Information pages providing links to the following information:
    • Interoperability information
    • Product documentation
    • Limitations and restrictions, including maximum configuration limits
  • Storage Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
  • Software Upgrade Test Utility
  • Software Upgrade Planning