SVAPAR-117738 |
All |
HIPER |
The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive.
Symptom |
Loss of Access to Data |
Environment |
Systems running 8.6.2 |
Trigger |
None |
Workaround |
Reboot the node to bring it online. |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-112707 |
SVC |
Critical |
Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details, refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems containing 214x-SV3 nodes that have been downgraded from 8.6 to 8.5 |
Trigger |
Marking the 3015 error as fixed |
Workaround |
Do not attempt to repair the 3015 error; contact IBM Support |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-112939 |
All |
Critical |
A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang.
Symptom |
Loss of Access to Data |
Environment |
System with multiple storage pools. |
Trigger |
Loss of disk access to one pool. |
Workaround |
None |
|
8.6.3.0 |
Cache |
SVAPAR-115505 |
All |
Critical |
Expanding a volume in a FlashCopy map and then creating a dependent incremental forward and reverse FlashCopy map may cause a dual node warmstart when the incremental map is started.
Symptom |
Loss of Access to Data |
Environment |
Systems using incremental reverse FlashCopy mappings. |
Trigger |
Expanding a volume in a FlashCopy map and then creating and starting a dependent incremental forward and reverse FlashCopy map. |
Workaround |
None |
|
8.6.3.0 |
FlashCopy |
SVAPAR-120391 |
All |
Critical |
Removing an incremental FlashCopy mapping from a consistency group, after a previous error when starting the FlashCopy consistency group caused a node warmstart, may trigger additional node asserts.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using incremental copy consistency groups. |
Trigger |
Removing an incremental FlashCopy mapping from a consistency group after a previous error when starting the FlashCopy consistency group caused a node warmstart. |
Workaround |
None |
|
8.6.3.0 |
FlashCopy |
SVAPAR-120397 |
All |
Critical |
A node may not shut down cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery.
Symptom |
Loss of Access to Data |
Environment |
Systems with 25Gb Ethernet adapters. |
Trigger |
Loss of power to the system. |
Workaround |
None |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-141094 |
All |
Critical |
On power failure, FS50xx systems with 25Gb RoCE adapters may fail to shut down gracefully, causing loss of cache data.
Symptom |
Loss of Access to Data |
Environment |
FS50xx systems with 25Gb RoCE adapters |
Trigger |
Power failure |
Workaround |
None |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-109385 |
All |
High Importance |
When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage.
Symptom |
Loss of Access to Data |
Environment |
Any FlashSystem |
Trigger |
This can occur during upgrade; however, this aspect is to be confirmed |
Workaround |
Remove the failing node and then reboot the asserting node |
|
8.6.3.0 |
|
SVAPAR-111812 |
All |
High Importance |
Systems with 8.6.0 or later software may fail to complete lsvdisk commands, if a single SSH session runs multiple lsvdisk commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
Symptom |
Configuration |
Environment |
Systems with 8.6.0 or later software. |
Trigger |
Unusual use of nested svcinfo commands on the CLI. |
Workaround |
Avoid nested svcinfo commands. |
|
8.6.3.0 |
Command Line Interface |
SVAPAR-112856 |
All |
High Importance |
Conversion of HyperSwap volumes to 3-site consistency groups will increase the write response time of the HyperSwap volumes.
Symptom |
Performance |
Environment |
Any system running HyperSwap and 3-Site |
Trigger |
Conversion of HyperSwap volumes to 3-site consistency groups |
Workaround |
Manually increase the rsize of HyperSwap change volumes before conversion to 3-site consistency groups |
|
8.6.3.0 |
3-Site using HyperSwap or Metro Mirror, HyperSwap |
SVAPAR-115021 |
All |
High Importance |
Software validation checks can trigger a T2 recovery when attempting to move a HyperSwap vdisk into and out of the nocachingiogrp state.
Symptom |
Loss of Access to Data |
Environment |
Any system that is configured for HyperSwap |
Trigger |
Invoking the 'movevdisk' command with the '-nocachingiogrp' flag in a HyperSwap environment |
Workaround |
None |
|
8.6.3.0 |
HyperSwap |
SVAPAR-117457 |
All |
High Importance |
A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
Symptom |
Multiple Node Warmstarts |
Environment |
Any system that uses Policy-based Replication |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
Policy-based Replication |
SVAPAR-117768 |
All |
High Importance |
Cloud Call Home may stop working without logging an error
Symptom |
Configuration |
Environment |
Systems running 8.6.0 or higher that send data to Storage Insights without using the data collector are most likely to hit this issue |
Trigger |
None |
Workaround |
Cloud Call Home can be disabled and then re-enabled to restart it if it has failed. |
|
8.6.3.0 |
Call Home |
SVAPAR-120599 |
All |
High Importance |
On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running 8.6.2.0 |
Trigger |
Very high I/O workload |
Workaround |
None |
|
8.6.3.0 |
Hosts |
SVAPAR-120616 |
All |
High Importance |
After mapping a volume to an NVMe host, a customer is unable to map the same vdisk to a second NVMe host using the GUI; however, it is possible using the CLI.
Symptom |
None |
Environment |
Any system where the same vdisks are mapped to different NVMe hosts via the GUI can hit this issue. |
Trigger |
Mapping the same vdisk to different NVMe hosts via the GUI. |
Workaround |
Use the CLI |
|
8.6.3.0 |
Hosts |
SVAPAR-120630 |
All |
High Importance |
An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in a Data Reduction Pool (DRP), while FlashCopy is running at a high copy rate within the DRP and the target volume is deduplicated.
Symptom |
Offline Volumes |
Environment |
Any system running FlashCopy, with a deduplicated target volume in DRP. |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
Data Reduction Pools |
SVAPAR-120631 |
All |
High Importance |
When a user deletes a vdisk and 'chfcmap' is run afterwards against the same vdisk ID, a system recovery may occur.
Symptom |
Loss of Access to Data |
Environment |
Any system configured with FlashCopy |
Trigger |
Running the 'chfcmap' command against a deleting vdisk. |
Workaround |
Do not run 'chfcmap' against a deleting vdisk ID. |
|
8.6.3.0 |
FlashCopy |
HU01222 |
All |
Suggested |
FlashCopy entries in the event log always have an object ID of 0, rather than showing the correct object ID
Symptom |
None |
Environment |
Any system configured with FlashCopy groups |
Trigger |
None |
Workaround |
Use the 'Info' event nearest to the 'config' event to determine which fcgrp was stopped. |
|
8.6.3.0 |
FlashCopy |
SVAPAR-112712 |
SVC |
Suggested |
The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
Symptom |
None |
Environment |
SVC cluster that has been upgraded from CG8 hardware. |
Trigger |
Upgrading SVC cluster |
Workaround |
None |
|
8.6.3.0 |
Call Home |
SVAPAR-113792 |
All |
Suggested |
A node assert may occur when an outbound IPC message, such as an nslookup to a DNS server, times out
Symptom |
Single Node Warmstart |
Environment |
Any system running 8.6.0.x or higher |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
|
SVAPAR-114086 |
SVC |
Suggested |
Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware.
Symptom |
Configuration |
Environment |
2145-SV3 hardware |
Trigger |
Attempting to increase volume mirroring memory allocation in the GUI. |
Workaround |
Perform the action via the CLI instead. |
|
8.6.3.0 |
Volume Mirroring |
SVAPAR-116265 |
All |
Suggested |
When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory.
Symptom |
Multiple Node Warmstarts |
Environment |
GEN3 or newer node hardware. |
Trigger |
Not removing the node from the cluster before shutting it down and adding additional memory. |
Workaround |
Remove the node from the cluster before shutting it down and adding additional memory. |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-117663 |
All |
Suggested |
The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time.
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
Graphical User Interface |
SVAPAR-120359 |
All |
Suggested |
Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy maps on volumes configured for Policy-based Replication |
Trigger |
The single node warmstart has a low risk of occurring if policy-based replication runs in cycling mode. |
Workaround |
Make volume groups with replication policies independent, or stop the partnership |
|
8.6.3.0 |
FlashCopy, Policy-based Replication |
SVAPAR-120399 |
All |
Suggested |
A host WWPN incorrectly shows as being still logged into the storage when it is not.
Symptom |
Configuration |
Environment |
Systems using Fibre Channel host connections. |
Trigger |
Disabling or removing a host fibre channel connection. |
Workaround |
None |
|
8.6.3.0 |
Reliability Availability Serviceability |
SVAPAR-120495 |
All |
Suggested |
A node using the embedded VASA provider can experience performance degradation, potentially leading to a single node warmstart.
Symptom |
Single Node Warmstart |
Environment |
Systems running with the embedded VASA provider. |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
|
SVAPAR-120610 |
All |
Suggested |
Excessive 'chfcmap' commands can result in multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Any system configured with FlashCopy. |
Trigger |
Performing excessive 'chfcmap' commands |
Workaround |
None |
|
8.6.3.0 |
FlashCopy |
SVAPAR-120639 |
All |
Suggested |
A vulnerability scanner may report that cookies are set without the HttpOnly flag.
Symptom |
Configuration |
Environment |
On port 442, the secure flag is not set on the SSL cookie, and the HttpOnly flag is not set on the cookie. |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
|
SVAPAR-120732 |
All |
Suggested |
Unable to expand a vdisk from the GUI, because the constant values for the maximum capacity of compressed and regular pool volumes were incorrect in the constants file.
Symptom |
Configuration |
Environment |
IBM FlashSystem |
Trigger |
None |
Workaround |
Perform the action via the CLI |
|
8.6.3.0 |
Graphical User Interface |
SVAPAR-120925 |
All |
Suggested |
A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool.
Symptom |
Single Node Warmstart |
Environment |
Systems with thin provisioned volumes in a traditional pool. |
Trigger |
None |
Workaround |
None |
|
8.6.3.0 |
Thin Provisioning |