Release Note for systems built with IBM Storage Virtualize


This is the release note for the 8.6.3 release and details the issues resolved in 8.6.3.0. This document will be updated with additional information whenever a PTF is released.

Note: This release is a Non-Long Term Support (Non-LTS) release. Non-LTS code levels will not receive regular fix packs. Fixes for issues introduced in a Non-LTS release will be delivered in a subsequent Non-LTS or LTS release. If issues are encountered, the only resolution is likely to be an upgrade to a later LTS or Non-LTS release.

This document was last updated on 28 October 2024.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs Resolved
  4. Useful Links

Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.


1. New Features

The following new features have been introduced in the 8.6.3.0 release:

2. Known Issues and Restrictions

Note: For clarity, the terms "node" and "canister" are used interchangeably.
Each entry below is followed by the release in which the issue or restriction was introduced.

Support for Java 7 has now been removed. The IP quorum application now requires Java version 8 or higher.

Introduced in 8.6.3.0
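
For example, the Java runtime on the quorum server can be verified before restarting the IP quorum application. This is a minimal sketch, assuming the application jar is named ip_quorum.jar as downloaded from the system; adjust names and paths to your deployment:

  # Confirm the installed Java version reports 1.8 (Java 8) or higher
  java -version

  # Start the IP quorum application using the verified runtime
  # (ip_quorum.jar is assumed to be the jar downloaded from the system)
  java -jar ip_quorum.jar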

The 3-site orchestrator is not compatible with the new SSH security level 4. This will be resolved in a future release of the orchestrator.

Introduced in 8.6.2.1
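
Until a compatible orchestrator release is available, affected systems should remain at a lower SSH security level. The following is a hedged sketch, assuming the lssecurity and chsecurity syntax on your code level; verify the exact commands against the product documentation before use:

  # Display the current security settings, including the SSH protocol level
  lssecurity

  # Keep the SSH security level at 3 or lower while the 3-site orchestrator is in use
  chsecurity -sshprotocol 3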

Due to a known issue, array expansion of DRAID1 arrays is not supported on this code level.

Introduced in 8.6.2.0

VMware Virtual Volumes (vVols) are not supported using IBM Spectrum Connect on 8.6.1 or later.

Systems using Spectrum Connect must migrate to use the embedded VASA provider on version 8.6.0 before upgrading to 8.6.1 or later.

Introduced in 8.6.1.0

Systems using VMware Virtual Volumes (vVols) may require reconfiguration before updating to 8.6.1 or later.

Refer to 'Updating to Storage Virtualize 8.6.1 or later using VMware Virtual Volumes (vVols)'.

Introduced in 8.6.1.0

All IO groups must be configured with FC target port mode set to 'enabled' before upgrading to 8.6.1.0 or later.

Enabling the FC target port mode configures multiple WWPNs per port (using Fibre Channel NPIV technology), and separates host traffic from all other traffic onto different WWPNs.

Important: Changing this setting is likely to require changes to zoning, and rescanning LUNs in all applications.

The product documentation contains topics called 'Enabling NPIV on an existing system' and 'Enabling NPIV on a new system', which contain details about how to make the changes.

Introduced in 8.6.1.0
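
As an illustrative sketch (assuming the lsiogrp and chiogrp syntax described in the NPIV topics; verify the exact steps in the product documentation), the FC target port mode can be checked and enabled per I/O group:

  # Check the current FC target port mode for I/O group 0 (fctargetportmode field)
  lsiogrp 0

  # Move I/O group 0 into the transitional state, update host zoning to the
  # new target WWPNs, then complete the change by enabling the mode
  chiogrp -fctargetportmode transitional 0
  chiogrp -fctargetportmode enabled 0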

8.6.1 removes support for the CIM protocol. Applications that connect using the CIM protocol should be upgraded to use a supported interface, such as the REST interface.

IBM recommends that any product teams currently using the CIM protocol comment on this Idea, and IBM will contact them with more details about how to use a supported interface.

Introduced in 8.6.1.0
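
For example, a client that previously queried the system over CIM could instead authenticate against the REST API and run commands over HTTPS. This is a minimal sketch, assuming the default REST endpoint on port 7443; system_ip, superuser and password are placeholder values:

  # Authenticate to obtain a token (returned in the JSON response)
  curl -k -X POST https://system_ip:7443/rest/auth \
       -H 'X-Auth-Username: superuser' -H 'X-Auth-Password: password'

  # Use the returned token to run a command, for example lsvdisk
  curl -k -X POST https://system_ip:7443/rest/lsvdisk \
       -H 'X-Auth-Token: <token_from_auth_response>'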

Systems using the 3-site orchestrator cannot upgrade to 8.7.0.0.

Introduced in 8.6.1.0

The following restrictions were previously valid, but have now been lifted:

Removing a vVol child pool was not supported.

This restriction has been lifted in 8.6.3.0.

Introduced in 8.6.2.0

IBM i was not supported as a host operating system when connected using directly attached Fibre Channel.

This restriction has now been lifted in 8.6.3.0.

Introduced in 8.6.1.0

3. Issues Resolved

This release contains all of the fixes included in the 8.6.0.0, 8.6.1.0 and 8.6.2.0 releases, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs or both. Consult both tables below to understand the complete set of fixes included in the release.

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier     Link for additional Information     Resolved in
CVE-2023-48795     7154643                             8.6.3.0

3.2 APARs Resolved

Each entry lists the APAR number, affected products, severity, description, the release in which it was resolved, and feature tags.
SVAPAR-117738 All HIPER The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-112707 SVC Critical Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-112939 All Critical A loss of disk access on one pool may cause IO to hang on a different pool, due to a cache messaging hang. 8.6.3.0 Cache
SVAPAR-115505 All Critical Expanding a volume in a FlashCopy map, and then creating a dependent incremental forward and reverse FlashCopy map, may cause a dual node warmstart when the incremental map is started. 8.6.3.0 FlashCopy
SVAPAR-120391 All Critical Removing an incremental FlashCopy mapping from a consistency group, after a previous error when starting the FlashCopy consistency group caused a node warmstart, may trigger additional node asserts. 8.6.3.0 FlashCopy
SVAPAR-120397 All Critical A node may not shut down cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-141094 All Critical On power failure, FS50xx systems with 25Gb RoCE adapters may fail to shut down gracefully, causing loss of cache data. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-109385 All High Importance When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage. 8.6.3.0
SVAPAR-111812 All High Importance Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes. 8.6.3.0 Command Line Interface
SVAPAR-112856 All High Importance Conversion of HyperSwap volumes to 3-site consistency groups will increase the write response time of the HyperSwap volumes. 8.6.3.0 3-Site using HyperSwap or Metro Mirror, HyperSwap
SVAPAR-115021 All High Importance Software validation checks can trigger a T2 recovery when attempting to move a HyperSwap vdisk into and out of the nocachingiogrp state. 8.6.3.0 HyperSwap
SVAPAR-117457 All High Importance A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes. 8.6.3.0 Policy-based Replication
SVAPAR-117768 All High Importance Cloud Call Home may stop working without logging an error. 8.6.3.0 Call Home
SVAPAR-120599 All High Importance On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart. 8.6.3.0 Hosts
SVAPAR-120616 All High Importance After mapping a volume to an NVMe host, a customer is unable to map the same vdisk to a second NVMe host using the GUI, although it is possible using the CLI. 8.6.3.0 Hosts
SVAPAR-120630 All High Importance An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, whilst FlashCopy is running at a high copy rate within DRP and the target volume is deduplicated. 8.6.3.0 Data Reduction Pools
SVAPAR-120631 All High Importance If a user deletes a vdisk, and 'chfcmap' is then run against the same vdisk ID, a system recovery may occur. 8.6.3.0 FlashCopy
HU01222 All Suggested FlashCopy entries in the eventlog always have an object ID of 0, rather than showing the correct object ID. 8.6.3.0 FlashCopy
SVAPAR-112712 SVC Suggested The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above. 8.6.3.0 Call Home
SVAPAR-113792 All Suggested A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out. 8.6.3.0
SVAPAR-114086 SVC Suggested Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware. 8.6.3.0 Volume Mirroring
SVAPAR-116265 All Suggested When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-117663 All Suggested The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time. 8.6.3.0 Graphical User Interface
SVAPAR-120359 All Suggested A single node warmstart may occur when using FlashCopy maps on volumes configured for Policy-based Replication. 8.6.3.0 FlashCopy, Policy-based Replication
SVAPAR-120399 All Suggested A host WWPN incorrectly shows as still being logged into the storage when it is not. 8.6.3.0 Reliability Availability Serviceability
SVAPAR-120495 All Suggested A node can experience performance degradation when using the embedded VASA provider, potentially leading to a single node warmstart. 8.6.3.0
SVAPAR-120610 All Suggested Excessive 'chfcmap' commands can result in multiple node warmstarts. 8.6.3.0 FlashCopy
SVAPAR-120639 All Suggested The vulnerability scanner claims cookies were set without the HttpOnly flag. 8.6.3.0
SVAPAR-120732 All Suggested Unable to expand a vdisk from the GUI, because the maximum capacity constants for compressed and regular pool volumes were incorrect in the constants file. 8.6.3.0 Graphical User Interface
SVAPAR-120925 All Suggested A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool. 8.6.3.0 Thin Provisioning

4. Useful Links

Product Documentation
Update Matrices, including detailed build version numbers
Support Information pages providing links to the following information:
  • Interoperability information
  • Product documentation
  • Limitations and restrictions, including maximum configuration limits
Storage Virtualize Family of Products Inter-System Metro Mirror and Global Mirror Compatibility Cross Reference
Software Upgrade Test Utility
Software Upgrade Planning