SVAPAR-116592 |
All |
HIPER
|
If a V5000E or FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of those partnerships is with a system other than a V5000E or FS5000, the system may repeatedly warmstart due to a lack of compression resources.
Symptom |
Multiple Node Warmstarts |
Environment |
V5000E or Flashsystem 5000 configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000. Note a Flashsystem 5200 is not included in FS5000 here. |
Trigger |
Configuring a V5000E or Flashsystem 5000 with multiple compressed IP partnerships, and one or more of the partnerships is with a non V5000E or FS5000. |
Workaround |
Turn off compression for the partnership with the non-V5000E or FS5000 system (see the example command after this entry). |
|
8.5.0.12 |
IP Replication |
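A minimal sketch of the workaround above, assuming this code level's 'chpartnership' accepts the '-compressed' parameter for IP partnerships; 'remote_system' is a placeholder name:

  lspartnership                                # identify the partnership with the non-V5000E/FS5000 system
  chpartnership -compressed no remote_system   # turn off compression for that partnership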
SVAPAR-132123 |
All |
HIPER
|
Vdisks can go offline, with IO errors and data corruption, after a T3 recovery is performed while a DRAID1 array is expanding
Symptom |
Offline Volumes |
Environment |
Any cluster running on 8.5.0.0 or 8.6.x.x code and above. |
Trigger |
Performing a T3 disaster recovery while a DRAID1 array is expanding can leave vdisks offline, with data corruption and/or IO errors. |
Workaround |
None |
|
8.5.0.12 |
RAID |
HU02585 |
All |
Critical
|
An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery occurring
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
An unstable connection between the Storage Virtualize system and an external virtualized storage system can cause objects to be discovered out of order, resulting in a cluster recovery |
Workaround |
Stabilise the SAN fabric by replacing any failing hardware, such as a faulty SFP |
|
8.5.0.12 |
Backend Storage |
SVAPAR-115136 |
FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, V7000 |
Critical
|
Failure of an NVMe drive has a small probability of triggering a PCIe credit timeout in a node canister, causing the node to reboot.
Symptom |
Single Node Warmstart |
Environment |
Systems with NVMe drives |
Trigger |
Drive failure |
Workaround |
None |
|
8.5.0.12 |
Drives |
SVAPAR-128052 |
All |
Critical
|
A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter.
Symptom |
Multiple Node Warmstarts |
Environment |
Any host that uses the NVME protocol |
Trigger |
Removing a host by using the '-force' parameter |
Workaround |
Do not use the '-force' parameter |
|
8.5.0.12 |
Hosts, NVMe |
SVAPAR-128379 |
All |
Critical
|
When collecting the debug data from a 16Gb or 32Gb Fibre Channel adapter, node warmstarts may occur, due to the firmware dump file exceeding the maximum size.
Symptom |
Multiple Node Warmstarts |
Environment |
Can occur on any system configured with either 16Gb or 32Gb Fibre Channel adapters |
Trigger |
Collecting 16Gb or 32Gb FC adapter livedump data |
Workaround |
None |
|
8.5.0.12 |
Reliability Availability Serviceability |
SVAPAR-88887 |
FS9100, FS9200, FS9500 |
Critical
|
Loss of access to data after replacing all boot drives in a system
Symptom |
Loss of Access to Data |
Environment |
Any FS9xxx cluster |
Trigger |
On a FS9xxx system, if the canisters are swapped, and at some point later, all the boot drives in a canister are replaced |
Workaround |
None |
|
8.5.0.12 |
Drives, Reliability Availability Serviceability |
HU02219 |
All |
High Importance
|
Certain tier 1 flash drives report 'SCSI check condition: Aborted command' events
Symptom |
None |
Environment |
Drives with firmware levels starting 'MS' |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
Drives |
SVAPAR-104250 |
All |
High Importance
|
NVMe Compare and Write (CaW) commands can incorrectly enter an invalid state, causing the node to assert in order to clear the bad condition
Symptom |
Single Node Warmstart |
Environment |
Any host running NVMe |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
Hosts, NVMe |
SVAPAR-108715 |
All |
High Importance
|
The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
Perform Service Assistant GUI actions using the Service Assistant CLI instead. Alternatively, in the Service Assistant GUI, select a node that you are not on to perform a Service Assistant action; when you submit the command, the action will be performed on the local node instead. |
|
8.5.0.12 |
Graphical User Interface |
SVAPAR-110765 |
All |
High Importance
|
In a 3-Site configuration, the Config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
Symptom |
Single Node Warmstart |
Environment |
Any cluster with 3-Site configuration |
Trigger |
Running 'stopfcmap' or 'stopfcconsistgrp' with '-force' on FlashCopy maps whose vdisks are in a 3-site configuration. |
Workaround |
Do not use the '-force' parameter when running either the 'stopfcmap' or 'stopfcconsistgrp' commands (see the example after this entry). |
|
8.5.0.12 |
3-Site using HyperSwap or Metro Mirror |
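An illustration of the workaround above, using the commands named in this entry without '-force'; the map and consistency group names are placeholders:

  stopfcmap fcmap0             # stop the FlashCopy mapping without '-force'
  stopfcconsistgrp fccstgrp0   # stop the FlashCopy consistency group without '-force'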
SVAPAR-111996 |
FS9500, SVC |
High Importance
|
After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade.
Symptom |
Loss of Redundancy |
Environment |
FS9500 or SV3 only |
Trigger |
Upgrading to 8.5.0.8+ or 8.6.0.0+ |
Workaround |
Contact IBM Support to obtain a utility that fixes this in the field. |
|
8.5.0.12 |
Reliability Availability Serviceability, System Update |
SVAPAR-127063 |
All |
High Importance
|
Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts
Symptom |
Performance |
Environment |
Systems with multiple IO groups using Remote Copy |
Trigger |
Restarting a node |
Workaround |
Warmstart any node that is affected by the issue |
|
8.5.0.12 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Performance |
SVAPAR-127841 |
All |
High Importance
|
A slow I/O resource leak may occur when FlashCopy is in use and the system is under high workload. This may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Any system configured with FlashCopy |
Trigger |
Many FlashCopy activities occurring when the system is experiencing high workloads |
Workaround |
None |
|
8.5.0.12 |
FlashCopy |
SVAPAR-128228 |
All |
High Importance
|
The NTP daemon may not synchronise after upgrading from 8.3.x to 8.5.x
Symptom |
None |
Environment |
Any system that has upgraded from 8.3.x to 8.5.x |
Trigger |
Upgrading from 8.3.x to 8.5.x |
Workaround |
None |
|
8.5.0.12 |
|
SVAPAR-93054 |
All |
High Importance
|
Backend systems on 8.2.1 and beyond have an issue that causes capacity information updates to stop after a T2 or T3 is performed. This affects all backend systems with FCM arrays
Symptom |
Configuration |
Environment |
Backend system with FCM arrays on 8.2.1 and beyond |
Trigger |
Systems that have performed either a T2 or T3 recovery |
Workaround |
Upgrade the backend system as the upgrade restarts the capacity update process |
|
8.5.0.12 |
Backend Storage |
SVAPAR-93309 |
All |
High Importance
|
A node may briefly go offline after a battery firmware update
Symptom |
Single Node Warmstart |
Environment |
All Storage Virtualize based systems |
Trigger |
Battery firmware update |
Workaround |
None |
|
8.5.0.12 |
System Update |
SVAPAR-99537 |
All |
High Importance
|
If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
Symptom |
Configuration |
Environment |
Systems with DRP child pools and FCM storage |
Trigger |
Creating a change volume in a DRP child pool when the parent pool contains FCMs |
Workaround |
None |
|
8.5.0.12 |
Data Reduction Pools |
HU02462 |
All |
Suggested
|
A node can warmstart when a FlashCopy volume is flushing, quiesces, and has pinned data
Symptom |
Single Node Warmstart |
Environment |
Any system configured to use FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
FlashCopy |
HU02591 |
All |
Suggested
|
Multiple node asserts can occur when running commands with the 'preferred node' filter during an upgrade to 8.5.0.0 and above.
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
Running commands with the 'preferred node' filter during an upgrade from pre 8.5 release to 8.5 release. |
Workaround |
Avoid using commands with the 'preferred node' filter during the upgrade. |
|
8.5.0.12 |
Inter-node messaging |
SVAPAR-111021 |
All |
Suggested
|
Unable to load the resources page in the GUI if IO group ID 0 does not have any nodes.
Symptom |
None |
Environment |
Any systems that have no nodes in IO group ID:0. |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
System Monitoring |
SVAPAR-120399 |
All |
Suggested
|
A host WWPN incorrectly shows as being still logged into the storage when it is not.
Symptom |
Configuration |
Environment |
Systems using Fibre Channel host connections. |
Trigger |
Disabling or removing a host fibre channel connection. |
Workaround |
None |
|
8.5.0.12 |
Reliability Availability Serviceability |
SVAPAR-120610 |
All |
Suggested
|
Excessive 'chfcmap' commands can result in multiple node warmstarts occurring
Symptom |
Multiple Node Warmstarts |
Environment |
Any systems configured with flashcopy. |
Trigger |
Performing excessive 'chfcmap' commands |
Workaround |
None |
|
8.5.0.12 |
FlashCopy |
SVAPAR-120639 |
All |
Suggested
|
A vulnerability scanner may report that cookies were set without the HttpOnly flag.
Symptom |
Configuration |
Environment |
On port 442, the secure flag is not set on the SSL cookie, and the HttpOnly flag is not set on the cookie. |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
|
SVAPAR-122411 |
All |
Suggested
|
A node may assert when a vdisk has been expanded but the rehome process has not been made aware of the possible change in the number of regions it may have to rehome.
Symptom |
Single Node Warmstart |
Environment |
Any system configured with Data Reduction Pools |
Trigger |
Any command that will expand the size of a vdisk such as 'expandvdisksize'. |
Workaround |
None |
|
8.5.0.12 |
Data Reduction Pools |
SVAPAR-123644 |
All |
Suggested
|
A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared.
Symptom |
Configuration |
Environment |
Systems with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
Reliability Availability Serviceability |
SVAPAR-126742 |
All |
Suggested
|
A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix.
Symptom |
Configuration |
Environment |
Systems using DRP compression |
Trigger |
None |
Workaround |
None |
|
8.5.0.12 |
Compression, Data Reduction Pools |
SVAPAR-127908 |
All |
Suggested
|
A volume mapped to an NVMe host cannot be mapped to another NVMe host via the GUI, although it is possible via the CLI. In addition, when a host is removed from a host cluster, it is not possible to add it back using the GUI
Symptom |
None |
Environment |
Any volume mapped to an NVMe host |
Trigger |
Any volume that has been mapped to an NVMe host, or any host that is deleted from a host cluster and then added back to the same host cluster using the GUI. |
Workaround |
None |
|
8.5.0.12 |
GUI Fix Procedure, Graphical User Interface, Host Cluster, Hosts, NVMe |
SVAPAR-85640 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 |
Suggested
|
If new nodes/iogroups are added to an SVC cluster that is virtualizing a clustered SpecV system, an attempt to add the SVC node host objects to a host cluster on the backend SpecV system will fail with CLI error code CMMVC8278E due to incorrect policing
Symptom |
Configuration |
Environment |
Clustered SpecV system virtualized behind an SVC cluster or other SpecV cluster |
Trigger |
Running the 'addhostclustermember' command |
Workaround |
None |
|
8.5.0.12 |
Host Cluster |
SVAPAR-85658 |
All |
Suggested
|
When replacing a boot drive, the new drive needs to be synchronized with the existing drive. The command to do this appears to run and does not return an error, but the new drive does not actually get synchronized.
Symptom |
Loss of Redundancy |
Environment |
Flashsystem with dual boot drives in the node canister |
Trigger |
Replacing one of the boot drives. |
Workaround |
Try running the DMP from the GUI. |
|
8.5.0.12 |
Reliability Availability Serviceability |
SVAPAR-98611 |
All |
Suggested
|
The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
Symptom |
Loss of Access to Data |
Environment |
AIX hosts |
Trigger |
Trying to access an unmapped VDisk from an AIX host |
Workaround |
None |
|
8.5.0.12 |
Interoperability |
SVAPAR-107547 |
All |
Critical
|
If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur.
Symptom |
Single Node Warmstart |
Environment |
Systems with Fibre Channel adapters |
Trigger |
Switch zoning change with more than 64 logins to a single storage system port. |
Workaround |
Reduce the number of logins to a single storage system port |
|
8.5.0.11 |
Reliability Availability Serviceability |
SVAPAR-107734 |
All |
Critical
|
Issuing IO to an incremental fcmap volume that is in a stopped state, has recently been expanded, and has a partner fcmap may cause the nodes to restart.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems configured with Incremental flashcopy and reverse fcmap |
Trigger |
Resizing volumes in an incremental partnered fcmaps |
Workaround |
Ensure that both incremental partnered fcmaps are deleted, and then re-create a new pair if you need to resize the volumes. |
|
8.5.0.11 |
FlashCopy |
SVAPAR-112107 |
FS9500, SVC |
Critical
|
An issue with PSU firmware upgrades in FS9500 and SV3 systems can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download it. It is a very rare timing window that can be triggered if two PSUs are reseated close together in time during the firmware upgrade process.
Symptom |
Loss of Access to Data |
Environment |
FS9500 or SV3 |
Trigger |
Two PSUs are reseated close in time during the firmware upgrade process |
Workaround |
None |
|
8.5.0.11 |
System Update |
SVAPAR-112707 |
SVC |
Critical
|
Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems containing 214x-SV3 nodes that have been downgraded from 8.6 to 8.5 |
Trigger |
Marking 3015 error as fixed |
Workaround |
Do not attempt to repair the 3015 error, contact IBM support |
|
8.5.0.11 |
Reliability Availability Serviceability |
SVAPAR-110234 |
FS5000, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
High Importance
|
A single node warmstart can occur due to fibre channel adapter resource contention during 'chpartnership -stop' or 'mkfcpartnership' actions
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
'chpartnership -stop', mkfcpartnership |
Workaround |
None |
|
8.5.0.11 |
|
SVAPAR-112525 |
All |
High Importance
|
A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
Symptom |
Single Node Warmstart |
Environment |
Systems using Hyperswap, Metro Mirror, Global Mirror, or Global Mirror with Change Volumes |
Trigger |
None |
Workaround |
None |
|
8.5.0.11 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror |
SVAPAR-117318 |
All |
High Importance
|
A faulty SFP in a 32Gb Fibre Channel adapter may cause a single node warmstart, instead of reporting the port as failed.
Symptom |
Single Node Warmstart |
Environment |
Systems with 32Gb Fibre Channel adapters |
Trigger |
Faulty SFP |
Workaround |
None |
|
8.5.0.11 |
Reliability Availability Serviceability |
SVAPAR-108551 |
All |
Suggested
|
An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
Symptom |
Configuration |
Environment |
None |
Trigger |
Start a code upgrade via GUI |
Workaround |
Upgrade via the CLI. The other option is to log out from the GUI, then log back in to re-authenticate, then go back to the Update System view. On the Test and Update modal, select the test utility and update image files that are already on the system from the previous upload (without selecting to upload them again) |
|
8.5.0.11 |
System Update |
SVAPAR-112711 |
All |
Suggested
|
The IBM Storage Virtualize user interface does not respond to a malformed HTTP POST with the expected HTTP 401 message.
Symptom |
None |
Environment |
IBM Storage Virtualize GUI |
Trigger |
Malformed HTTP POST |
Workaround |
None |
|
8.5.0.11 |
Graphical User Interface |
SVAPAR-117179 |
All |
Suggested
|
Snap data collection does not collect an error log if the superuser password requires a change
Symptom |
None |
Environment |
None |
Trigger |
Collecting a snap after the superuser password has expired. |
Workaround |
Change the superuser password |
|
8.5.0.11 |
Support Data Collection |
SVAPAR-100127 |
All |
Critical
|
The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI.
Symptom |
Single Node Warmstart |
Environment |
Any cluster running on 8.5.0.0 code and above. |
Trigger |
This problem can happen if the user is on the Service Assistant GUI of one node but selects another node for node rescue. The node rescue will be performed on the local node they are on, and not on the node selected |
Workaround |
Use the CLI 'satask rescuenode -force <node-panel-id>' command to select the correct node to perform the node rescue on, or log onto the Service GUI of the node that requires the node rescue, if it is accessible; that way the node in need will be the local node |
|
8.5.0.10 |
Graphical User Interface |
SVAPAR-104533 |
All |
Critical
|
Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
Multiple node asserts followed by a system T3 recovery |
Workaround |
None |
|
8.5.0.10 |
Data Reduction Pools |
SVAPAR-91860 |
All |
Critical
|
If an upgrade is started with the pause flag and then aborted, the pause flag may not be cleared. This can trigger the system to encounter an unexpected code path on the next upgrade, thereby causing a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
All SpecV Systems |
Trigger |
Starting an upgrade with the pause flag and then aborting it |
Workaround |
None |
|
8.5.0.10 |
System Update |
HU02539 |
All |
High Importance
|
If an IP address is moved to a different port on a node, the old routing table entries do not get refreshed. Therefore, the IP address may be inaccessible through the new port
Symptom |
None |
Environment |
All |
Trigger |
Moving an IP address to a different port on a node |
Workaround |
Either reboot the node or assign an IP address that has not been used on the node since it was last rebooted |
|
8.5.0.10 |
|
HU02573 |
All |
High Importance
|
HBA firmware can cause a port to appear to be flapping. The port will not work again until the HBA is restarted by rebooting the node.
Symptom |
Loss of Redundancy |
Environment |
Systems with Fibre Channel adapters |
Trigger |
Affects systems with high utilization, and possibly bursty IO |
Workaround |
Rebooting the node will reset the buffer, thereby allowing the port to login again |
|
8.5.0.10 |
Fibre Channel, Hosts |
SVAPAR-100162 |
All |
High Importance
|
Some host vendors, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode. If the storage receives this mode level, a warmstart occurs
Symptom |
Single Node Warmstart |
Environment |
Any cluster running on 8.4.0.0 or higher |
Trigger |
If a host uses 'mode select page 7' |
Workaround |
None |
|
8.5.0.10 |
Hosts |
SVAPAR-100977 |
All |
High Importance
|
When a zone containing NVMe devices is enabled, a node warmstart might occur.
Symptom |
Single Node Warmstart |
Environment |
Any system running 8.5.0.5 |
Trigger |
Enabling a zone with a host that has approximately 1,000 vdisks mapped |
Workaround |
Make sure that the created zone does not contain NVMe devices |
|
8.5.0.10 |
NVMe |
SVAPAR-105727 |
All |
High Importance
|
An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
Symptom |
Multiple Node Warmstarts |
Environment |
Any system running volume mirroring with either a large number of volumes or high syncrate |
Trigger |
Upgrading from 8.5.0.5 or below to 8.5.0.6 or above with heavy volume mirroring workload |
Workaround |
Disable mirroring, or reduce the syncrate to a low value during the upgrade process (see the example after this entry) |
|
8.5.0.10 |
Volume Mirroring |
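A hedged sketch of the workaround above, using 'chvdisk -syncrate' to lower the mirror synchronisation rate for a mirrored volume before the upgrade; 'vdisk0' is a placeholder, and the value can be raised again once the upgrade completes:

  chvdisk -syncrate 10 vdisk0   # reduce the mirror sync rate to a low value for the duration of the upgrade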
SVAPAR-94686 |
All |
High Importance
|
The GUI can become slow and unresponsive due to a steady stream of configuration updates such as 'svcinfo' queries for the latest configuration data
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.5.0.10 |
Graphical User Interface |
SVAPAR-99175 |
All |
High Importance
|
A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once.
Symptom |
Single Node Warmstart |
Environment |
Any cluster on code below 8.6.1 |
Trigger |
Can happen when IO in cache is being processed |
Workaround |
None |
|
8.5.0.10 |
Cache |
SVAPAR-99273 |
All |
High Importance
|
If a SAN switch's Fabric Controller issues an abort (ABTS) command, and then issues an RSCN command before the abort has completed, this unexpected switch behaviour can trigger a single-node warmstart.
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
An unexpected sequence of Fibre Channel frames received from Fabric Controller |
Workaround |
None |
|
8.5.0.10 |
|
HU02456 |
FS5100, FS5200, FS7200, FS9200, V7000 |
Suggested
|
Unseating an NVMe drive after automanage failure can cause a node to warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.1.x onwards |
Trigger |
Unseating of an NVMe drive after automanage failure |
Workaround |
None as this issue is difficult to encounter |
|
8.5.0.10 |
Drives |
SVAPAR-100958 |
All |
Suggested
|
A single FCM may incorrectly report multiple medium errors for the same LBA
Symptom |
Performance |
Environment |
Predominantly FCM2, but could also affect other FCM generations |
Trigger |
None |
Workaround |
After the problem is detected, manually fail the FCM, format it, and then insert it back into the array. After the copyback has completed, ensure that all FCMs are updated to the recommended firmware level |
|
8.5.0.10 |
RAID |
SVAPAR-107595 |
FS7300, FS9100, FS9200, FS9500, SVC |
Suggested
|
Improve maximum throughput for Global Mirror, Metro Mirror and Hyperswap by providing more inter-node messaging resources
Symptom |
Performance |
Environment |
Systems running Global Mirror, Metro Mirror or Hyperswap |
Trigger |
High Global Mirror, Metro Mirror or Hyperswap workload |
Workaround |
None |
|
8.5.0.10 |
Global Mirror, HyperSwap, Metro Mirror, Performance |
SVAPAR-109289 |
All |
Suggested
|
Buffer overflow may occur when handling the maximum length of 55 characters for either Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets
Symptom |
Configuration |
Environment |
Systems that use MFA or SSO |
Trigger |
Using a client secret with > 55 characters |
Workaround |
Use fewer than 55 characters for the client secret |
|
8.5.0.10 |
Backend Storage |
SVAPAR-98576 |
All |
Suggested
|
Customers cannot edit certain properties of a flashcopy mapping via the GUI flashcopy mappings panel as the edit modal does not appear.
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
Use the CLI instead |
|
8.5.0.10 |
FlashCopy, Graphical User Interface |
SVAPAR-94179 |
FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, V7000 |
HIPER
|
Faulty hardware within or connected to the CPU can result in a reboot on the affected node. However, it is possible for this to sometimes result in a reboot on the partner node
Symptom |
Loss of Access to Data |
Environment |
All Flashsystems and V7000 Gen3, but not SVC |
Trigger |
Node hardware fault |
Workaround |
None |
|
8.5.0.9 |
Reliability Availability Serviceability |
SVAPAR-98567 |
FS5000 |
HIPER
|
In FS50xx nodes, the TPM may become unresponsive after a number of weeks' runtime. This can lead to encryption or mdisk group CLI commands failing, or in some cases node warmstarts. This issue was partially addressed by SVAPAR-83290, but is fully resolved by this second fix.
Symptom |
Loss of Access to Data |
Environment |
FS50xx platforms running either V8.4.0,V8.4.1,V8.4.2,V8.5.0,V8.5.1,V8.5.2,V8.5.3 |
Trigger |
Enabling encryption, or creating an encrypted pool |
Workaround |
Reboot each node in turn. Wait 30 minutes between the two nodes in an I/O group, to allow hosts to failover. Check there are no volumes dependent on the second node before proceeding with the reboot (see the example after this entry). After all nodes have been rebooted, retry the configuration action, which should now complete successfully |
|
8.5.0.9 |
Encryption |
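A sketch of the dependency check mentioned in the workaround above, assuming 'lsdependentvdisks' supports the '-node' filter on this code level; 'node2' is a placeholder:

  lsdependentvdisks -node node2   # list volumes that would go offline if node2 were rebooted; proceed only if the output is empty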
SVAPAR-98672 |
All |
Critical
|
VMware host crashes can occur on servers connected using NVMe over Fibre Channel with the host_unmap setting disabled
Symptom |
Loss of Access to Data |
Environment |
ESXi hosts connected to a system using NVME over FC protocol |
Trigger |
VM is sending unmap command with type deallocate |
Workaround |
Enabling host unmap will resolve the VMware instability. However, enabling unmap can cause performance issues for systems with enterprise or nearline drives |
|
8.5.0.9 |
NVMe |
SVAPAR-98971 |
All |
Suggested
|
The GUI may show repeated invalid pop-ups stating configuration node failover has occurred
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
Clearing browser cookies will resolve the issue (specifically the configNodeWWNN cookie) |
|
8.5.0.9 |
Graphical User Interface |
SVAPAR-89694 |
All |
HIPER
|
Kernel panics might occur on a subset of Spectrum Virtualize Hardware Platforms with a 10G Ethernet adapter running 8.4.0.10, 8.5.0.7 and 8.5.3.1 when taking a snap. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
V7000 Gen2 & Gen2+, SVC nodes (DH8 & SV1) and FS50xx systems with 10G Ethernet adapters are affected. Nodes with different node hardware, e.g. FS5200 are not affected even if they have a 10G Ethernet adapter installed |
Trigger |
Taking a snap, livedump or when a node warmstarts |
Workaround |
Do not take snaps or livedumps |
|
8.5.0.8 |
|
HU02586 |
All |
Critical
|
When deleting a safeguarded copy volume which is related to a restore operation and another related volume is offline, the system may warmstart repeatedly
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running safeguarded copy |
Trigger |
Deletion of a safeguarded volume while a restore is in operation |
Workaround |
Bring any related offline volumes back online |
|
8.5.0.8 |
Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-84116 |
All |
Critical
|
The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed
Symptom |
Data Integrity Loss |
Environment |
Systems running with deduplicated volumes on code levels up to 8.4.0.10, 8.5.0.0 through 8.5.0.7 and 8.5.1 through 8.5.4 are vulnerable to APAR SVAPAR-84116. The fix is available in 8.4.0.11 and 8.5.0.8 |
Trigger |
The background delete processing for deduplicated volumes might not operate correctly if the preferred node for a deduplicated volume is changed while a delete of another volume is in progress. This can result in data loss which will be detected by the cluster when the data is next accessed |
Workaround |
Commands that change the preferred node of a deduplicated volume should not be run while another deduplicated volume is in a deleting state. These commands are: splitvdiskcopy and movevdisk |
|
8.5.0.8 |
Data Reduction Pools, Deduplication |
SVAPAR-87729 |
All |
Critical
|
After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Any system running V8.4.1,V8.4.2,V8.5.0,V8.5.1,V8.5.2,V8.5.3 |
Trigger |
Cloud callhome errors |
Workaround |
None |
|
8.5.0.8 |
Call Home |
SVAPAR-89692 |
FS9500, SVC |
Critical
|
Battery back-up units may reach end of life prematurely on FS9500 / SV3 systems, despite the batteries being in good physical health, which will result in node errors and potentially nodes going offline if both batteries are affected
Symptom |
Loss of Redundancy |
Environment |
FS9500 and SV3 systems are exposed |
Trigger |
There is no trigger. The issue has an increasing likelihood to occur after the batteries are 4-6 months old |
Workaround |
Contact IBM Support to obtain a utility to upgrade the battery firmware |
|
8.5.0.8 |
|
SVAPAR-90438 |
All |
Critical
|
A conflict between host IO on one node and an array resynchronisation task on the partner node can result in some regions of parity inconsistency. This is due to the asynchronous parity update behaviour leaving invalid parity in the RAID internal cache
Symptom |
None |
Environment |
Anything |
Trigger |
This issue can only occur after the RAID array has undergone a re-initialization procedure (such as after a Tier3 recovery) |
Workaround |
None |
|
8.5.0.8 |
Distributed RAID |
HU02565 |
All |
High Importance
|
Node warmstart when generating data compression savings data for 'lsvdiskanalysis'
Symptom |
Single Node Warmstart |
Environment |
All |
Trigger |
None |
Workaround |
None |
|
8.5.0.8 |
|
SVAPAR-82950 |
FS9500, SVC |
High Importance
|
If a FlashSystem 9500 or SV3 node had a USB Flash Drive present at boot, upgrading to either 8.5.0.7 or 8.5.3.0 may cause the node to become unresponsive. Systems already running 8.5.0.7 or 8.5.3.0 are not affected by this issue
Symptom |
Loss of Redundancy |
Environment |
FlashSystem 9500 or SV3 node with a USB Flash Drive present |
Trigger |
Upgrade to 8.5.3.0 |
Workaround |
None |
|
8.5.0.8 |
Reliability Availability Serviceability |
SVAPAR-85980 |
All |
High Importance
|
iSCSI response times may increase on some systems with 25Gb ethernet adapters, after upgrade to 8.4.0.9 or 8.5.x
Symptom |
Performance |
Environment |
Any platform running V8.4 or V8.5 |
Trigger |
None |
Workaround |
None |
|
8.5.0.8 |
Performance, System Update |
SVAPAR-90395 |
FS9500, SVC |
High Importance
|
FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources
Symptom |
Performance |
Environment |
FS9500 or SV3 systems with Remote Copy, typically running Hyperswap, Metro Mirror, Global Mirror and GMCV |
Trigger |
Not enough resources available for Remote Copy |
Workaround |
None |
|
8.5.0.8 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror |
HU02594 |
All |
Suggested
|
Initiating drive firmware update via management user interface for one drive class can prompt all drives to be updated
Symptom |
None |
Environment |
Any system running V8.4.2,V8.5.0,V8.5.1,V8.5.2,V8.5.3 |
Trigger |
None |
Workaround |
None |
|
8.5.0.8 |
Drives, System Update |
SVAPAR-89296 |
All |
Suggested
|
Immediately after upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if it contains non-FCM storage. This issue will automatically resolve on the next upgrade
Symptom |
Performance |
Environment |
Multi-tier pools where tier0_flash contains non-FCM storage |
Trigger |
Upgrading from pre-8.4.0 to 8.4.0 |
Workaround |
Upgrade to any later version of software, or warmstart the config node |
|
8.5.0.8 |
EasyTier |
HU02572 |
All |
HIPER
|
When controllers running specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely during an upgrade.
Symptom |
Data Integrity Loss |
Environment |
Controllers running 8.5.0.0 through 8.5.0.6, 8.5.1, 8.5.2 and 8.5.3 must have SAS storage to be vulnerable to this defect. |
Trigger |
A power cycle or node reboot while the cache is not empty can trigger this defect |
Workaround |
None |
|
8.5.0.7 |
Drives |
HU01782 |
All |
High Importance
|
A node warmstart may occur due to a potentially bad SAS hardware component on the system such as a SAS cable, SAS expander or SAS HIC
Symptom |
Loss of Access to Data |
Environment |
Systems that have a faulty SAS hardware component |
Trigger |
Faulty SAS component |
Workaround |
None |
|
8.5.0.7 |
Drives |
HU02555 |
All |
High Importance
|
A node may warmstart if the system is configured for remote authorization, but no remote authorization service, such as LDAP, has been configured
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.5.x onwards |
Trigger |
Attempted login by a user that is not a locally configured user such as LDAP |
Workaround |
Log in to the node with a known local user (i.e. superuser) then run 'svctask chauthservice -enable no -type ldap' to correct the auth inconsistency |
|
8.5.0.7 |
LDAP |
HU02557 |
All |
High Importance
|
Systems may be unable to upgrade from pre-8.5.0 to 8.5.0 due to a previous node upgrade and certain DRP conditions existing
Symptom |
Single Node Warmstart |
Environment |
A memory upgrade or node upgrade happened in the past, and DRP was in use at the point of the memory upgrade, and the DRP did not have any dedup-enabled volumes at the point of that upgrade |
Trigger |
Certain DRP conditions, combined with a memory upgrade or node upgrade that happened in the past |
Workaround |
None |
|
8.5.0.7 |
Data Reduction Pools, System Update |
SVAPAR-83290 |
FS5000 |
High Importance
|
An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
Symptom |
Single Node Warmstart |
Environment |
FS50xx platforms running either V8.4.0,V8.4.1,V8.4.2,V8.5.0,V8.5.1,V8.5.2,V8.5.3 |
Trigger |
Unresponsive TPM |
Workaround |
Reboot each node in turn. Wait 30 minutes between the two nodes in an I/O group, to allow hosts to failover. Check there are no volumes dependent on the second node before proceeding with the reboot. After all nodes have been rebooted, retry the configuration action, which should now complete successfully. |
|
8.5.0.7 |
|
SVAPAR-84305 |
All |
High Importance
|
A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
Symptom |
Loss of Access to Data |
Environment |
Any platform running V8.4.0 |
Trigger |
None |
Workaround |
Use an additional parameter with the 'chsnmpserver -community' command |
|
8.5.0.7 |
System Monitoring |
SVAPAR-84331 |
All |
High Importance
|
A node may warmstart when the 'lsnvmefabric -remotenqn' command is run
Symptom |
Single Node Warmstart |
Environment |
Any system running NVMe |
Trigger |
The warmstart can occur typically when the 'lsnvmefabric -remotenqn' command is run by a script or orchestration layer such as Redhat Openshift or Kubernetes, combined with the IBM CSI driver. |
Workaround |
None |
|
8.5.0.7 |
NVMe |
SVAPAR-85396 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 |
High Importance
|
Replacement Samsung NVMe drives may show as unsupported, or they may fail during a firmware upgrade as unsupported, due to a VPD read problem
Symptom |
Loss of Redundancy |
Environment |
Systems running V8.5.3 with industry standard NVMe drives |
Trigger |
Drive firmware update or drive replacements |
Workaround |
Manually power cycling the slot of the failed drive often helps |
|
8.5.0.7 |
Drives |
SVAPAR-86035 |
All |
High Importance
|
Whilst completing a request, a DRP pool attempts to allocate additional metadata space, but there is no free space available. This causes the node to warmstart
Symptom |
Single Node Warmstart |
Environment |
Any DRP pool that has run out of metadata space |
Trigger |
Not enough metadata space available |
Workaround |
Add additional space to the DRP pool |
|
8.5.0.7 |
Data Reduction Pools |
HU02553 |
FS9500, SVC |
Suggested
|
Remote copy relationships may not correctly display the name of the vdisk on the remote cluster
Symptom |
None |
Environment |
Any FS9500 or SV3 node running 8.4.2 or later, and using remote copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.7 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02579 |
All |
Suggested
|
The GUI 'Add External iSCSI Storage' wizard does not work with portsets. The ports are shown but are not selectable
Symptom |
None |
Environment |
Any system running V8.4.2,V8.5.0,V8.5.1,V8.5.2, and using IP portsets |
Trigger |
None |
Workaround |
Use the command line to configure the external iSCSI connection |
|
8.5.0.7 |
Graphical User Interface, iSCSI |
SVAPAR-84099 |
All |
Suggested
|
An NVMe codepath exists whereby strict state checking incorrectly decides that a software flag state is invalid, thereby triggering a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems with NVMe hosts |
Trigger |
None |
Workaround |
None |
|
8.5.0.7 |
Hosts, NVMe |
HU02475 |
All |
HIPER
|
Power outage can cause reboots on nodes with 25Gb ethernet adapters, necessitating T3 recovery
Symptom |
Loss of Access to Data |
Environment |
Any node that has a 25Gb ethernet adapter installed |
Trigger |
Power outage occurs, causing both nodes to experience a kernel panic, meaning cluster information is lost |
Workaround |
T3 recovery will be required |
|
8.5.0.6 |
Reliability Availability Serviceability |
HU02420 |
All |
Critical
|
During an array copyback it is possible for a memory leak to result in the progress stalling and a warmstart of all nodes, resulting in a temporary loss of access
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running < 8.4 |
Trigger |
Memory leak that causes resources used for the copyback to become depleted |
Workaround |
None |
|
8.5.0.6 |
RAID |
HU02449 |
All |
Critical
|
Due to a timing issue, it is possible (but very unlikely) that maintenance on a SAS 92F/92G expansion enclosure could cause multiple node warmstarts, leading to a loss of access
Symptom |
Loss of Access to Data |
Environment |
Any system with a 92F/92G expansion enclosure |
Trigger |
Removal of a SAS enclosure |
Workaround |
None |
|
8.5.0.6 |
Backend Storage |
HU02513 |
All |
Critical
|
When upgrading one side of a cluster from 8.4.2 to either 8.5.0 or 8.5.2, while the other side of the cluster is still running 8.4.2, running either the 'mkippartnership' or 'rmippartnership' command from the side that is running 8.5.0 or 8.5.2 can cause an iplink node warmstart
Symptom |
Single Node Warmstart |
Environment |
Nodes with active bridgework connections, will warmstart due to this defect |
Trigger |
One side of the cluster is on 8.4.2 and the other side is on either 8.5.0 or 8.5.2 and partnerships are added or removed |
Workaround |
Upgrade all sides of the cluster to 8.5.2 |
|
8.5.0.6 |
3-Site using HyperSwap or Metro Mirror, Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02519 & HU02520 |
All |
Critical
|
Safeguarded copy source vdisks go offline when their mappings and target vdisks are deleted and then recreated in rapid succession
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2 or later |
Trigger |
target vdisks are deleted then recreated in rapid succession |
Workaround |
There is currently no workaround for the issue. The vdisk will be offline for as long as 5 minutes, but should come back online on its own |
|
8.5.0.6 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots |
HU02540 |
All |
Critical
|
Deleting a HyperSwap volume copy with dependent Flashcopy mappings can trigger repeated node warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems with FlashCopy on HyperSwap volumes |
Trigger |
Deleting a HyperSwap volume copy with dependent Flashcopy mappings |
Workaround |
None |
|
8.5.0.6 |
FlashCopy, HyperSwap |
HU02541 |
All |
Critical
|
In some circumstances, the deduplication replay process on a data reduction pool can become stuck. During this process, IO to the pool is quiesced and must wait for the replay to complete. Because it does not complete, IO to the entire storage pool hangs, which can eventually lead to a loss of access to data.
Symptom |
Single Node Warmstart |
Environment |
Systems running DRP - 8.4.2, 8.5+ |
Trigger |
None |
Workaround |
Warmstart the nodes |
|
8.5.0.6 |
Data Reduction Pools, Deduplication |
HU02542 |
All |
Critical
|
On systems that are running 8.4.2 or 8.5.0, when deleting a Hyperswap volume, or Hyperswap volume copy, that has Safeguarded copy snapshots configured, a T2 recovery can occur causing loss of access to data.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems that are running 8.4.2 or 8.5.0 only |
Trigger |
Deleting a Hyperswap volume, or Hyperswap volume copy, that has Safeguarded copy snapshots configured |
Workaround |
After hitting the T2 recovery, check if the Hyperswap volume (copy) still (partially) exists. If it does, remove it as soon as possible |
|
8.5.0.6 |
HyperSwap, Safeguarded Copy & Safeguarded Snapshots |
HU02551 |
All |
Critical
|
When creating multiple volumes with a high mirroring sync rate, a node warmstart may be triggered due to internal resource constraints
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.5.0, 8.5.1, 8.5.2 |
Trigger |
Creating multiple volumes with a high mirroring sync rate |
Workaround |
Lower the sync rate to 100 when creating multiple volumes |
|
8.5.0.6 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02561 |
All |
Critical
|
If a high number of FC mappings share the same target, the internal array that is used to track the FC mappings is mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
Symptom |
Multiple Node Warmstarts |
Environment |
System running v8.3.1,v8.4.0,v8.4.1,v8.4.2,v8.5.0 |
Trigger |
Cascaded flashcopy mappings and one of the flashcopy target volumes is the source of 255 flashcopy mappings |
Workaround |
None |
|
8.5.0.6 |
FlashCopy |
HU02563 |
All |
Critical
|
Improve DIMM slot identification for memory errors
Symptom |
Single Node Warmstart |
Environment |
System running v8.4.0,v8.4.1,v8.4.2,v8.5.0,v8.5.1,v8.5.2 |
Trigger |
Bad memory module |
Workaround |
Work with IBM Support to replace the DIMM that reported the uncorrectable error |
|
8.5.0.6 |
Reliability Availability Serviceability |
IT41088 |
FS5000, FS5100, FS5200, V5000, V5100 |
Critical
|
On systems with low memory, a large number of RAID arrays resyncing at the same time can cause the system to run out of RAID rebuild control blocks
Symptom |
Loss of Access to Data |
Environment |
Low memory systems such as 5015/5035 |
Trigger |
Systems with 64GB or less of cache, with resync operations spread across multiple RAID arrays |
Workaround |
None |
|
8.5.0.6 |
RAID |
HU02466 |
All |
High Importance
|
An issue in the handling of drive failures can result in multiple node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.3.0 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
RAID |
HU02490 |
FS9500 |
High Importance
|
Upon the first boot, or subsequent boots, of an FS9500, a 1034 error may appear in the event log stating that the CPU PCIe link is degraded
Symptom |
None |
Environment |
FAB4 platforms |
Trigger |
Boot up of the system |
Workaround |
This error can be marked as fixed and is not indicative of a hardware fault in the canister |
|
8.5.0.6 |
Reliability Availability Serviceability |
HU02507 |
All |
High Importance
|
A timing window exists in the code that handles host aborts for an ATS (Atomic Test and Set) command, if the host is NVMe-attached. This can cause repeated node warmstarts.
Symptom |
Single Node Warmstart |
Environment |
Systems running V8.5.0 or V8.5.1 |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Host Cluster, Hosts |
HU02511 |
All |
High Importance
|
Code version 8.5.0 includes a change in the driver setting for the 25Gb ethernet adapter. This change can cause port errors, which in turn can cause iSCSI path loss symptoms
Symptom |
Loss of Access to Data |
Environment |
Systems running V8.5.0 |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Host Cluster, Hosts, SCSI Unmap, iSCSI |
HU02522 |
All |
High Importance
|
When upgrading from 8.4.1 or lower to a level that uses IP portsets (8.4.2 or higher), an issue can occur when the same port ID on each node has a different remote copy use
Symptom |
None |
Environment |
Systems running v8.4.2 or later with IP portsets |
Trigger |
The same port id on each node has a different remote copy use |
Workaround |
Add the IP that wasn't set for remote copy to the replication portset with zero for portcount |
|
8.5.0.6 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02525 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC, V7000 |
High Importance
|
Code versions 8.4.2.x, 8.5.0.0 - 8.5.0.5 and 8.5.1.0 permitted the use of an iSCSI prefix of 0. However, during an upgrade to 8.5.x, this can prevent all iSCSI hosts from re-establishing iSCSI sessions, thereby causing access loss
Symptom |
None |
Environment |
Systems running v8.4.2,v8.5.0,v8.5.1 |
Trigger |
iSCSI prefix of 0 |
Workaround |
Change all hosts with a prefix of 0 before upgrading |
|
8.5.0.6 |
Hosts, iSCSI |
HU02530 |
All |
High Importance
|
Upgrades from 8.4.2 or 8.5 fail to start on some platforms
Symptom |
None |
Environment |
System is running 8.4.2 or 8.5 with > 1 DRP pools. Seen on DH8 nodes but may affect other types |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
System Update |
HU02534 |
All |
High Importance
|
When upgrading from 7.8.1.5 to 8.5.0.4, PowerHA stops working due to SSH configuration changes
Symptom |
None |
Environment |
Systems running 8.5 and higher |
Trigger |
None |
Workaround |
The PowerHA script 'cl_verify_svcpprc_config' can be changed to use the actual username instead of 'admin' |
|
8.5.0.6 |
Reliability Availability Serviceability |
HU02549 |
All |
High Importance
|
When upgrading from a lower level, to 8.5 or higher for the first time, an unexpected node warmstart may occur that can lead to a stalled upgrade
Symptom |
Single Node Warmstart |
Environment |
Systems upgrading from a lower level to 8.5 or higher |
Trigger |
First time upgrade to 8.5 or higher from a lower level |
Workaround |
None |
|
8.5.0.6 |
System Update |
HU02558 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC, V7000 |
High Importance
|
A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, thereby causing a deadlock condition to occur.
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2 and later, v8.3 and later, v8.4 and later, or v8.5 and later |
Trigger |
None |
Workaround |
None. When the node detects the deadlock condition, it warmstarts in order to clear the issue |
|
8.5.0.6 |
Compression |
HU02562 |
All |
High Importance
|
A node can warmstart when a 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands. This is a transient failure caused during DMA operations
Symptom |
Loss of Access to Data |
Environment |
Any system with a 32 Gb Fibre Channel adapter installed and running code level 8.4.0.4 or higher |
Trigger |
A 32 Gb Fibre Channel adapter receives an unexpected asynchronous event via internal mailbox commands |
Workaround |
None |
|
8.5.0.6 |
|
IT41447 |
All |
High Importance
|
When removing the DNS server configuration, a node may discover unexpected metadata and warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.5 or higher |
Trigger |
Removal of DNS server configuration |
Workaround |
None |
|
8.5.0.6 |
Reliability Availability Serviceability |
IT41835 |
All |
High Importance
|
A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
Symptom |
Loss of Access to Data |
Environment |
System that have drives reporting as UNSUPPORTED |
Trigger |
A drive with a tech type as UNSUPPORTED will cause this T2 during drive replacement. |
Workaround |
The system should recover automatically. To prevent the issue in the future, make sure a system-supported drive is used during replacement |
|
8.5.0.6 |
Drives |
HU02320 |
All |
Suggested
|
A battery fails to perform a recondition. This is identified when 'lsenclosurebattery' shows the 'last_recondition_timestamp' as an empty field on the impacted node
Symptom |
None |
Environment |
Any system running v8.3.1 or v8.4.0 |
Trigger |
None |
Workaround |
Set the affected node into Service state, as this will initiate battery reconditioning |
|
8.5.0.6 |
|
HU02372 |
FS9100, SVC, V5000, V5100, V7000 |
Suggested
|
Host SAS port 4 is missing from the GUI view on some systems.
Symptom |
None |
Environment |
Any system that has SAS ports |
Trigger |
None |
Workaround |
Run the command lsportsas to view all ports |
|
8.5.0.6 |
Graphical User Interface |
HU02463 |
All |
Suggested
|
LDAP user accounts can become locked out because of multiple failed login attempts
Symptom |
None |
Environment |
Systems where LDAP accounts use one-time passwords |
Trigger |
None |
Workaround |
Use one or more of the following options: Use the CLI instead of the GUI; with the CLI, after the auth cache expires, do not issue any more commands after experiencing a CMMVC7069E error, and instead log out and log back in with a new LDAP password. Maximise the authentication cache with 'chldap -authcacheminutes <min>' (see the example after this entry); the maximum value is 1440 minutes (24 hours), and setting it to 600 minutes can minimise the probability of hitting the issue. Disable account lock-out on the LDAP server. |
|
8.5.0.6 |
Graphical User Interface, LDAP |
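A small example of the auth-cache option named in the workaround above, using the 600-minute value it suggests:

  chldap -authcacheminutes 600   # lengthen the LDAP authentication cache to reduce repeated login attempts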
HU02468 |
All |
Suggested
|
The 'lsvdisk' preferred_node_id filter does not work correctly
Symptom |
None |
Environment |
Systems running 8.4.2 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Command Line Interface |
HU02474 |
All |
Suggested
|
An SFP failure can cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Reliability Availability Serviceability |
HU02487 |
All |
Suggested
|
Problems expanding the size of a volume using the GUI
Symptom |
None |
Environment |
Systems running 8.2.1 or later |
Trigger |
None |
Workaround |
Use the equivalent command line command instead, such as 'expandvdisksize' (see the example after this entry) |
|
8.5.0.6 |
Graphical User Interface |
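A minimal example of the CLI alternative named in the workaround above; 'vdisk0' is a placeholder, and the size and unit should match the required expansion:

  expandvdisksize -size 100 -unit gb vdisk0   # grow the volume by 100 GB from the command line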
HU02508 |
All |
Suggested
|
The 'mkippartnership' CLI command does not allow a portset with a space in the name as a parameter.
Symptom |
None |
Environment |
Systems running V8.5.0 |
Trigger |
portset containing a space |
Workaround |
Recreate the portset with a name that does not contain a space |
|
8.5.0.6 |
Command Line Interface |
HU02528 |
All |
Suggested
|
When upgrading to 8.5.0 or higher, a situation may occur whereby a variable is not locked at the correct point, resulting in a mismatch. The system code detects this and initiates a warmstart to reset any erroneous values
Symptom |
Single Node Warmstart |
Environment |
Systems that have upgraded to v8.5 |
Trigger |
Can only occur during upgrade to 8.5 or higher |
Workaround |
None |
|
8.5.0.6 |
Reliability Availability Serviceability |
HU02543 |
All |
Suggested
|
After upgrade to 8.5.0, the 'lshost -delim' command shows hosts in an offline state, while 'lshost' shows them online
Symptom |
None |
Environment |
Systems running 8.5 or higher |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Command Line Interface |
HU02559 |
All |
Suggested
|
A GUI resource issue may cause an out-of-memory condition, leading to the CIMOM and GUI becoming unresponsive, or showing incomplete information
Symptom |
None |
Environment |
Systems running v8.3.1,v8.4.0,v8.4.1,v8.4.2,v8.5.0 |
Trigger |
None |
Workaround |
Restart the cimserver service or tomcat service |
|
8.5.0.6 |
Graphical User Interface |
HU02560 |
All |
Suggested
|
When creating a SAS host using the GUI, a portset is incorrectly added. The command fails with CMMVC9777E, as the portset parameter is not supported with the given type of host.
Symptom |
None |
Environment |
Systems running v8.5.0 or later with SAS attached hosts |
Trigger |
None |
Workaround |
None |
|
8.5.0.6 |
Graphical User Interface, Hosts |
HU02564 |
All |
Suggested
|
The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct
Symptom |
None |
Environment |
All |
Trigger |
Performing 'charraymember' command on a degraded DRAID array |
Workaround |
Use the '-immediate' option when running the 'charraymember' command (see the example after this entry) |
|
8.5.0.6 |
Distributed RAID |
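A hedged sketch of the workaround above; the member ID, drive ID, and array name are placeholders, and the exact parameters depend on the exchange being performed:

  charraymember -member 3 -newdrive 12 -immediate mdisk0   # exchange the degraded array member, adding '-immediate' as noted above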
IT42403 |
All |
Suggested
|
A limit is in place to prevent the use of 8TB drives or larger in RAID5 arrays, due to the risk of data loss during an extended rebuild. This limit was intended to be 8 TiB; however, it was implemented as 8 TB. A 7.3 TiB drive has a capacity of 8.02 TB and as a result was incorrectly prevented from use in RAID5
Symptom |
None |
Environment |
System running v8.3.0,v8.3.1,v8.4.0,v8.4.1,v8.4.2,v8.5.0,v8.5.1 |
Trigger |
None |
Workaround |
Upgrade to 8.5.0.6 or request an ifix |
|
8.5.0.6 |
Distributed RAID, Drives, RAID |
SVAPAR-93987 |
All |
Suggested
|
A timeout may cause a single node warmstart, if a FlashCopy configuration change occurs while there are many I/O requests outstanding for a source volume which has multiple FlashCopy targets
Symptom |
Single Node Warmstart |
Environment |
FlashCopy with multiple targets for a single source volume |
Trigger |
Timing window during FlashCopy configuration change |
Workaround |
None |
|
8.5.0.6 |
FlashCopy |
HU02500 |
All |
Critical
|
If a volume in a FlashCopy mapping is deleted, and the deletion fails (for example because the user does not have the correct permissions to delete that volume), node warmstarts can occur, leading to loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.5 with FlashCopy |
Trigger |
Failed deletion of a volume in a FlashCopy mapping |
Workaround |
Do not attempt volume deletion with a user that is not authorized |
|
8.5.0.5 |
FlashCopy |
HU02502 |
All |
Critical
|
On upgrade to v8.4.2 or later with FlashCopy active, a node warmstart can occur, leading to a loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2 or later with FlashCopy |
Trigger |
Timing window during upgrade |
Workaround |
None |
|
8.5.0.5 |
FlashCopy |
IT41173 |
FS5200 |
Critical
|
If the temperature sensor in an FS5200 system fails in a particular way, it is possible for drives to be powered off, causing a loss of access to data. This type of temperature sensor failure is very rare.
(show details)
Symptom |
Loss of Access to Data |
Environment |
FS5200 |
Trigger |
Temperature sensor failure |
Workaround |
None |
|
8.5.0.5 |
Reliability Availability Serviceability |
HU02339 |
All |
High Importance
|
Multiple node warmstarts can occur if a system has direct Fibre Channel connections to an IBM i host, causing loss of access to data
(show details)
Symptom |
None |
Environment |
Systems running v8.4.0.4 and later, v8.4.2 and later, or v8.5 and later |
Trigger |
Direct attachment to IBM i hosts |
Workaround |
None |
|
8.5.0.5 |
Hosts, Interoperability |
HU02464 |
All |
High Importance
|
An issue in the processing of NVMe host logouts can cause multiple node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with NVMe hosts |
Trigger |
None |
Workaround |
None |
|
8.5.0.5 |
Hosts, NVMe |
HU02479 |
All |
High Importance
|
If an NVMe host cancels a large number of I/O requests, multiple node warmstarts might occur
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running 8.4 or later using NVMeFC |
Trigger |
Cancelled I/O requests from an NVMe host |
Workaround |
None |
|
8.5.0.5 |
Hosts |
HU02492 |
SVC |
High Importance
|
Configuration backup can fail after upgrade to v8.5. This only occurs on a very small number of systems that have a particular internal cluster state. If a system is running v8.5 and does not have an informational eventlog entry with error ID 988100 (CRON job failed), then it is not affected.
(show details)
Symptom |
Configuration |
Environment |
Systems that have upgraded to v8.5 |
Trigger |
None |
Workaround |
None |
|
8.5.0.5 |
Reliability Availability Serviceability |
HU02497 |
All |
High Importance
|
A system with direct Fibre Channel connections to a host, or to another Spectrum Virtualize system, might experience multiple node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.4.0.4 and later, v8.4.2 and later, or v8.5 with direct-attached Fibre Channel connections |
Trigger |
Fibre Channel direct-attached hosts |
Workaround |
Connect affected HBAs via a Fibre Channel switch |
|
8.5.0.5 |
Hosts, Interoperability |
HU02512 |
FS5000 |
High Importance
|
An FS5000 system with a Fibre Channel direct-attached host can experience multiple node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.5 |
Trigger |
Fibre Channel direct-attached hosts |
Workaround |
None |
|
8.5.0.5 |
Hosts |
IT41191 |
All |
High Importance
|
If a REST API client authenticates as an LDAP user, a node warmstart can occur
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.5 |
Trigger |
REST API authentication with LDAP user |
Workaround |
Use a locally-authenticated user instead of an LDAP user |
|
8.5.0.5 |
REST API |
HU02484 |
All |
Suggested
|
The GUI does not allow expansion of DRP thin or compressed volumes
(show details)
Symptom |
None |
Environment |
Systems running 8.5 |
Trigger |
None |
Workaround |
Use the expandvdisksize CLI command instead |
|
8.5.0.5 |
Data Reduction Pools, Graphical User Interface |
HU02491 |
All |
Suggested
|
On upgrade from v8.3.x, v8.4.0 or v8.4.1 to v8.5, if the system has Global Mirror with Change Volumes relationships, a single node warmstart can occur
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems upgrading to v8.5 with GMCV |
Trigger |
None |
Workaround |
Stop all GMCV relationships before upgrading |
|
8.5.0.5 |
Global Mirror With Change Volumes |
HU02494 |
All |
Suggested
|
If a system has a DNS server configured, but cannot ping that server, it will log informational events in the event log. In some environments the firewall blocks ping packets but allows DNS lookups, so this APAR disables these events.
(show details)
Symptom |
Configuration |
Environment |
Systems running v8.4.1, v8.4.2 or v8.5 with DNS servers that cannot be pinged. |
Trigger |
Firewall rules that block ping to DNS server |
Workaround |
Change firewall configuration to allow ping to DNS server |
|
8.5.0.5 |
Reliability Availability Serviceability |
HU02498 |
All |
Suggested
|
If a host object with no ports exists on upgrade to v8.5, the GUI volume mapping panel may fail to load.
(show details)
Symptom |
Configuration |
Environment |
Systems running v8.5 |
Trigger |
Upgrade to v8.5, when a host object has no ports specified |
Workaround |
None |
|
8.5.0.5 |
Graphical User Interface |
HU02501 |
All |
Suggested
|
If an internal I/O timeout occurs in a RAID array, a node warmstart can occur
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.5 |
Trigger |
I/O timeout on a RAID array |
Workaround |
None |
|
8.5.0.5 |
RAID |
HU02503 |
All |
Suggested
|
The Date / Time panel can fail to load in the GUI when a timezone set via the CLI is not supported by the GUI
(show details)
Symptom |
None |
Environment |
Systems running 8.5 or later |
Trigger |
Use CLI to set a timezone that is not supported in the GUI |
Workaround |
Configure the time zone via the CLI (see the example below) |
|
8.5.0.5 |
Graphical User Interface |
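A minimal sketch of the CLI workaround above: list the supported timezone IDs, then set one (timezone_id is a placeholder):
    lstimezones
    settimezone -timezone timezone_id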
HU02504 |
All |
Suggested
|
The Date / Time panel can display an incorrect timezone and default to manual time setting rather than NTP
(show details)
Symptom |
None |
Environment |
Systems running 8.5 or later |
Trigger |
None |
Workaround |
Configure the time zone via CLI |
|
8.5.0.5 |
Graphical User Interface |
HU02505 |
All |
Suggested
|
A single node warmstart can occur on v8.5 systems running DRP, due to a low-probability timing window during normal running
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.5 |
Trigger |
None |
Workaround |
None |
|
8.5.0.5 |
Data Reduction Pools |
HU02509 |
All |
Suggested
|
Upgrade to v8.5 can cause a single node warmstart, if nodes previously underwent a memory upgrade while DRP was in use
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.5 |
Trigger |
Upgrade to v8.5, after memory upgrade was performed while DRP was in use. |
Workaround |
None |
|
8.5.0.5 |
Data Reduction Pools |
HU02514 |
All |
Suggested
|
Firmware upgrade may fail for certain drive types, with the error message 'CMMVC6567E The Apply Drive Software task cannot be initiated because no download images were found in the package file'
(show details)
Symptom |
Configuration |
Environment |
Systems running v8.5 |
Trigger |
Drive firmware upgrade |
Workaround |
None |
|
8.5.0.5 |
Drives |
HU02515 |
FS9500 |
Suggested
|
Fan speed on FlashSystem 9500 can be higher than expected, if a high drive temperature is detected
(show details)
Symptom |
None |
Environment |
FlashSystem 9500 running v8.5 |
Trigger |
None |
Workaround |
None |
|
8.5.0.5 |
Drives |
HU02506 |
All |
Critical
|
On a system where NPIV is disabled or in transitional mode, certain hosts may fail to log in after a node warmstart or reboot (for example during an upgrade), leading to loss of access.
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems where NPIV is not enabled. |
Trigger |
Upgrade from pre-8.5 to 8.5 software or later |
Workaround |
None |
|
8.5.0.4 |
Hosts |
HU02441 & HU02486 |
All |
Critical
|
Safeguarded Copy with DRP can cause node warmstarts and mdisk timeouts
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2 or later using Safeguarded Copy with DRP |
Trigger |
None |
Workaround |
None |
|
8.5.0.3 |
Data Reduction Pools, Safeguarded Copy & Safeguarded Snapshots |
HU02488 |
All |
High Importance
|
Remote Copy partnerships disconnect every 15 minutes with error 987301 (Connection to a configured remote cluster has been lost)
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running 8.5.0 in replication partnerships with systems running 8.2.1 or 8.3.0 |
Trigger |
None |
Workaround |
Upgrade partner systems to 8.3.1 or later |
|
8.5.0.3 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02453 |
All |
Suggested
|
It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.2 |
Command Line Interface, Graphical User Interface |
IT40059 |
FS5200, FS7200, FS7300, FS9200, FS9500 |
Suggested
|
Port to node metrics can appear inflated due to an issue in performance statistics aggregation
(show details)
Symptom |
None |
Environment |
Systems running 8.4 or later using Message Passing |
Trigger |
None |
Workaround |
None |
|
8.5.0.2 |
Inter-node messaging, System Monitoring |
HU02261 |
All |
HIPER
|
A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02277 |
All |
HIPER
|
RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems with model MZILS3T8HMLH read intensive SSDs at drive firmware MS24 are particularly susceptible to the data integrity (DI) issue. Other drive types may see multiple failures without DI issue |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02296 |
All |
HIPER
|
The zero page functionality can become corrupt causing a volume to be initialised with non-zero data
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Storage Virtualisation |
HU02310 |
All |
HIPER
|
Where a FlashCopy mapping exists between two volumes in the same Data Reduction Pool and the same I/O group, and the target volume has deduplication enabled, then the target may contain invalid data
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.4 or later using Data Reduction Pools with FlashCopy or GMCV |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools, FlashCopy, Global Mirror With Change Volumes |
HU02312 |
All |
HIPER
|
Changing the preferred node for a volume when it is in a remote copy relationship can result in multiple node warmstarts. For more details refer to this Flash
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.4 or later using remote copy |
Trigger |
Change preferred node for a volume in a remote copy relationship |
Workaround |
Remove any associated remote copy relationship before changing the preferred node of a volume |
|
8.5.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02313 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
HIPER
|
When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later using Flash Core Modules |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Drives |
HU02338 |
All |
HIPER
|
An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02340 |
All |
HIPER
|
High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using IP Replication |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
IP Replication |
HU02384 |
SVC |
HIPER
|
An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
(show details)
Symptom |
Offline Volumes |
Environment |
SVC systems using SV1 model nodes running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Reliability Availability Serviceability |
HU02400 |
All |
HIPER
|
A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Storage Virtualisation |
HU02418 |
All |
HIPER
|
During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Distributed RAID, RAID |
DT112601 |
All |
Critical
|
Deleting a mounted image-mode source volume while migration is ongoing can trigger a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later |
Trigger |
Delete the source volume when migration progress is showing 0% |
Workaround |
Wait for lsmigrate progress to report a non-zero value before issuing a volume delete (see the example below) |
|
8.5.0.0 |
Storage Virtualisation |
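A minimal sketch of the workaround above: check the migration progress from the CLI and only delete the source volume once the progress field reports a non-zero value:
    lsmigrate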
HU02226 |
All |
Critical
|
Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster
(show details)
Symptom |
Offline Volumes |
Environment |
Systems running v8.3.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02282 |
All |
Critical
|
After a code upgrade the config node may exhibit high write response times. In exceptionally rare circumstances an Mdisk group may be taken offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Cache |
HU02295 |
SVC |
Critical
|
When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v8.2.1 or v8.3 with Hot Spare Node |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Update |
HU02309 |
All |
Critical
|
Due to a change in how FlashCopy and remote copy interact, multiple warmstarts may occur with the possibility of lease expiries
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using GMCV |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Global Mirror With Change Volumes |
HU02315 |
All |
Critical
|
Failover for VMware iSER hosts may pause I/O for more than 120 seconds
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems presenting volumes to VMware iSER hosts |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts |
HU02321 |
All |
Critical
|
On systems relying on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in inter-node communication, leading to lease expiries
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using iSER RDMA clustering with iSCSI hosts |
Trigger |
None |
Workaround |
When upgrading, add an alternative, non-iSER medium for node to node communications |
|
8.5.0.0 |
iSCSI |
HU02328 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
Critical
|
Due to an issue with the handling of NVMe registration keys, changing the node WWNN in an active system will cause a lease expiry
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with NVMe drives |
Trigger |
Change a node WWNN in an active system |
Workaround |
None |
|
8.5.0.0 |
NVMe |
HU02342 |
All |
Critical
|
Occasionally, when an offline drive returns to the online state later than its peers in the same RAID array, there can be multiple node warmstarts that send nodes into a service state
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02349 |
All |
Critical
|
Using an incorrect FlashCopy consistency group ID to stop a consistency group will result in a Tier 2 recovery if the incorrect ID is greater than 501
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using FlashCopy |
Trigger |
Stop FlashCopy consistency group using an incorrect id of >501 |
Workaround |
Exercise greater care when stopping FlashCopy consistency groups where the ID is greater than 501 (see the example below) |
|
8.5.0.0 |
FlashCopy |
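A minimal sketch of the workaround above: list the FlashCopy consistency groups to confirm the correct ID before stopping (the group name is illustrative):
    lsfcconsistgrp
    stopfcconsistgrp fccstgrp0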
HU02368 |
All |
Critical
|
When consistency groups from code levels prior to v8.3 are carried through to v8.3 or later, there can be multiple node warmstarts, with the possibility of a loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
HyperSwap |
HU02373 |
All |
Critical
|
An incorrect compression flag in metadata can take a DRP offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using Data Reduction Pools and Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02374 |
SVC, V5000, V7000 |
Critical
|
Hosts with Emulex 16Gbps HBAs may become unable to communicate with a system with 8Gbps Fibre Channel ports, after the host HBA is upgraded to firmware version 12.8.364.11. This does not apply to systems with 16Gb or 32Gb Fibre Channel ports
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with 8Gbps Fibre Channel ports |
Trigger |
Host Emulex 16Gbps HBA is upgraded to firmware version 12.8.364.11 |
Workaround |
Do not upgrade host HBA to firmware version 12.8.364.11 |
|
8.5.0.0 |
Hosts |
HU02378 |
All |
Critical
|
Multiple maximum replication delay events and Remote Copy relationship restarts can cause multiple node warmstarts with the possibility of a loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1 using remote copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02393 |
All |
Critical
|
Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Storage Virtualisation |
HU02397 |
All |
Critical
|
A Data Reduction Pool, with deduplication enabled, can retain some stale state after deletion and recreation. This has no immediate effect. However if later on a node goes offline this condition can cause the pool to be taken offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02401 |
All |
Critical
|
EasyTier can move extents between identical mdisks until one runs out of space
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later using EasyTier |
Trigger |
None |
Workaround |
Disable EasyTier and manually migrate extents between MDisks (see the example below) |
|
8.5.0.0 |
EasyTier |
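A minimal sketch of the first part of the workaround above, assuming Easy Tier is controlled per pool with chmdiskgrp (the pool name is illustrative):
    chmdiskgrp -easytier off pool0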
HU02402 |
All |
Critical
|
The remote support feature may use more memory than expected causing a temporary loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Support Remote Assist |
HU02406 |
All |
Critical
|
An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using NPIV that are connected to Cisco SAN equipment running NX-OS 8.4(2c) or later |
Trigger |
Initiate an NPIV failback operation by, for example, performing an upgrade |
Workaround |
Disable NPIV (which will require any hot spare nodes to be removed first) |
|
8.5.0.0 |
Interoperability |
HU02409 |
All |
Critical
|
If the rmhost command is executed with -force for an MS Windows server, an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Run rmhost -force CLI command for an iSCSI connected MS Windows host |
Workaround |
Do not use -force when removing a host object |
|
8.5.0.0 |
Hosts, iSCSI |
HU02410 |
SVC |
Critical
|
A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v8.1 or later with Hot Spare Nodes |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hot Spare Node |
HU02414 |
All |
Critical
|
Under a specific sequence and timing of circumstances, the garbage collection process can time out and take a pool offline temporarily
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02415 |
All |
Critical
|
An issue in garbage collection IO flow logic can take a pool offline temporarily
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later using DRP |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02421 |
All |
Critical
|
A logic fault in the socket communication sub-system can cause multiple node warmstarts when more than 8 external clients attempt to connect. It is possible for this to lead to a loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2.0 |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Reliability Availability Serviceability |
HU02423 |
All |
Critical
|
Volume copies may be taken offline even though there is sufficient free capacity
(show details)
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02428 |
All |
Critical
|
Issuing a movevdisk CLI command immediately after removing an associated GMCV relationship can trigger a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4 or later using GMCV |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Command Line Interface, Global Mirror With Change Volumes |
HU02429 |
All |
Critical
|
System can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02430 |
All |
Critical
|
Expanding or shrinking the real size of FlashCopy target volumes can cause recurring node warmstarts and may cause nodes to revert to candidate state
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2.0 using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02434 |
All |
Critical
|
An issue in the internal accounting of FlashCopy resources can lead to multiple node warmstarts taking a cluster offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4 or later using large (>5TB) volumes |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02435 |
All |
Critical
|
The removal of deduplicated volumes can cause repeated node warmstarts and the possibility of offline Data Reduction Pools
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4.2.0 using DRP |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02440 |
All |
Critical
|
Using the migrateexts command when both source and target mdisks are unmanaged can trigger a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4 or later |
Trigger |
Issue the migrateexts command when both source and target mdisks are unmanaged |
Workaround |
Ensure that the MDisks are managed before migrating extents (see the example below) |
|
8.5.0.0 |
Command Line Interface, Storage Virtualisation |
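A minimal sketch of the check implied by the workaround above: confirm that both MDisks appear in the managed list before running migrateexts:
    lsmdisk -filtervalue mode=managed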
HU02442 |
All |
Critical
|
Issuing a lspotentialarraysize CLI command with an invalid drive class can trigger a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.4 or later |
Trigger |
Enter lspotentialarraysize CLI command with an invalid drive class parameter |
Workaround |
Before entering an lspotentialarraysize CLI command, ensure that the drive class parameter is valid (see the example below) |
|
8.5.0.0 |
Command Line Interface |
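A minimal sketch of the check suggested above, assuming lsdriveclass is used to list the valid drive class IDs before running lspotentialarraysize:
    lsdriveclass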
HU02455 |
All |
Critical
|
After converting a system from 3-site to 2-site a timing window issue can trigger a cluster tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later in a 3-site topology |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
3-Site using HyperSwap or Metro Mirror |
HU02088 |
All |
High Importance
|
There can be multiple node warmstarts when no mailservers are configured
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
Configure a mailserver |
|
8.5.0.0 |
System Monitoring |
HU02127 |
All |
High Importance
|
32Gbps FC ports will auto-negotiate to 8Gbps, if they are connected to a 16Gbps Cisco switch port
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3 or later with 32Gbps HBAs connecting to 16Gbps Cisco switch ports |
Trigger |
Use auto-negotiate default on switch port |
Workaround |
Manually set the switch port as an F-port operating at 16Gbps |
|
8.5.0.0 |
Performance |
HU02201 & HU02221 |
All |
High Importance
|
Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems with the following drive models:
- ST300MM0009 (300GB) - B5B8
- ST600MM0009 (600GB) - B5B8
- ST900MM0009 (900GB) - B5B8
- ST1200MM0009 (1200GB) - B5B8
- ST1200MM0129 (1800GB) - B5C9
- ST2400MM0129 (2400GB) - B5C9
- ST300MP0006 (300GB) - B6AA
- ST600MP0006 (600GB) - B6AA
- ST900MP0146 (900GB) - B6CB
|
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Drives |
HU02227 |
FS7200, FS9100, FS9200, SVC, V5100, V7000 |
High Importance
|
Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2 or later using compressed volumes |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Compression |
HU02273 |
All |
High Importance
|
When the write I/O workload to a HyperSwap volume site reaches a certain threshold, the system should switch the primary and secondary copies. There are circumstances where this will not happen
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.1 or later using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
HyperSwap |
HU02290 |
All |
High Importance
|
An issue in the virtualization component can divide up I/O resources incorrectly, adversely affecting MDisk queuing times across CPU cores and leading to a performance impact
(show details)
Symptom |
Performance |
Environment |
Systems with large configurations in terms of host logins and mdisks |
Trigger |
None |
Workaround |
Reduce the number of target ports in use |
|
8.5.0.0 |
Storage Virtualisation |
HU02297 |
All |
High Importance
|
Error handling for a failing backend controller can lead to multiple warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems attached to faulty backend controllers |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Backend Storage |
HU02300 |
All |
High Importance
|
Use of Enhanced Callhome in censored mode may lead to adverse performance around 02:00 (2AM)
(show details)
Symptom |
Performance |
Environment |
Systems with Enhanced Callhome enabled in censored mode |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02301 |
SVC |
High Importance
|
iSCSI hosts connected to iWARP 25G adapters may experience adverse performance impacts
(show details)
Symptom |
Performance |
Environment |
SVC model SV1 systems with iWARP 25G adapters |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
iSCSI |
HU02304 |
FS9100, V5100, V7000 |
High Importance
|
Some RAID operations for certain NVMe drives may cause adverse I/O performance
(show details)
Symptom |
Performance |
Environment |
Systems running v8.4 or later using NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02311 |
All |
High Importance
|
An issue in volume copy flushing may lead to higher than expected write cache delays
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Cache |
HU02317 |
All |
High Importance
|
A DRAID expansion can stall shortly after it is initiated
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3.1 or later using DRAID |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Distributed RAID |
HU02319 |
All |
High Importance
|
The GUI can become unresponsive
(show details)
Symptom |
None |
Environment |
Systems running v8.4.0.1, or later, using remote copy without 3-site replication |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02326 |
SVC |
High Importance
|
Delays in passing messages between nodes in an I/O group can adversely impact write performance
(show details)
Symptom |
Performance |
Environment |
SVC systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Performance |
HU02343 |
All |
High Importance
|
For Huawei Dorado V3 Series backend controllers, it is possible that not all available target ports will be utilized. This reduces the potential I/O throughput and can cause high read/write backend queue times on the cluster, impacting front-end latency for hosts
(show details)
Symptom |
Performance |
Environment |
Systems using Huawei Dorado V3 Series backend controllers |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Backend Storage |
HU02345 |
All |
High Importance
|
When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
(show details)
Symptom |
Performance |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
HyperSwap, Metro Mirror |
HU02347 |
All |
High Importance
|
An issue in the handling of boot drive failure can lead to the partner drive also being failed
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Reliability Availability Serviceability |
HU02360 |
All |
High Importance
|
Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02362 |
FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 |
High Importance
|
When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
(show details)
Symptom |
Performance |
Environment |
Systems running v8.2 or later using NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02376 |
All |
High Importance
|
FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2.1 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02388 |
FS5000, V5000 |
High Importance
|
The GUI can hang randomly due to an out-of-memory issue after running any task
(show details)
Symptom |
Loss of Redundancy |
Environment |
Storwize V5000E and FlashSystem 5000 systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02392 |
All |
High Importance
|
Validation in the Upload Support Package feature will reject new case number formats in the PMR field
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
At v8.4.0.0 or later use the CLI command satask supportupload -pmr pmr_number -filename fullpath/filename |
|
8.5.0.0 |
Support Data Collection |
HU02417 |
All |
High Importance
|
Restoring a reverse FlashCopy mapping to a volume that is also the source of an incremental FlashCopy mapping can take longer than expected
(show details)
Symptom |
Performance |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02422 |
All |
High Importance
|
GUI performance can be degraded when displaying large numbers of volumes or other objects
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02438 |
All |
High Importance
|
Certain conditions can provoke a cache behaviour that unbalances workload distribution across CPU cores leading to performance impact
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Cache |
HU02439 |
All |
High Importance
|
An IP partnership between a pre-v8.4.2 system and v8.4.2 or later system may be disconnected because of a keepalive timeout
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running 8.3.1 or earlier in IP partnerships with systems running v8.4.2 or later |
Trigger |
None |
Workaround |
Ensure both systems are running v8.4.2 or later |
|
8.5.0.0 |
IP Replication |
HU02460 |
All |
High Importance
|
Multiple node warmstarts can be triggered by ports on a 32Gb Fibre Channel adapter failing
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.3 or later using 32Gbps HBAs |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts |
IT38015 |
All |
High Importance
|
During a RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock, which results in timeouts
(show details)
Symptom |
Performance |
Environment |
Systems with 16GB or less of memory |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU01209 |
All |
Suggested
|
It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Fibre Channel connectivity |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Storage Virtualisation |
HU02095 |
All |
Suggested
|
The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
(show details)
Symptom |
None |
Environment |
Systems with non-FCM arrays |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02171 |
All |
Suggested
|
The timezone for Iceland is set incorrectly
(show details)
Symptom |
None |
Environment |
Systems using the Icelandic timezone |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Support Data Collection |
HU02174 |
All |
Suggested
|
A timing window issue related to remote copy memory allocation can result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02243 |
All |
Suggested
|
The DMP for a 1670 event (replace CMOS battery) will shut down a node without confirmation from the user
(show details)
Symptom |
None |
Environment |
Systems with expired CMOS batteries |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
GUI Fix Procedure |
HU02263 |
All |
Suggested
|
The pool properties dialog in the GUI displays thin-provisioning savings, compression savings and total savings. In Data Reduction Pools, the thin-provisioning savings displayed are actually the total savings instead of the thin-provisioning savings only
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
Subtract the compression savings from the displayed thin-provisioning savings to get the actual thin-provisioning savings (see the example below) |
|
8.5.0.0 |
Data Reduction Pools |
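An illustrative example of the workaround above (the figures are invented): if the dialog shows thin-provisioning savings of 10 TiB and compression savings of 4 TiB, the actual thin-provisioning savings are 10 TiB - 4 TiB = 6 TiB.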
HU02274 |
All |
Suggested
|
Due to a timing issue in how events are handled an active quorum loss and re-acquisition cycle can be triggered with a 3124 error
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Quorum |
HU02280 |
All |
Suggested
|
Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1.2 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02291 |
All |
Suggested
|
Internal counters for upper cache stage/destage I/O rates and latencies are not collected and zeroes are usually displayed
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Cache, System Monitoring |
HU02292 & HU02308 |
All |
Suggested
|
The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Global Mirror |
HU02303 & HU02305 |
All |
Suggested
|
Configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an id greater than 256
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3, or later, where ignored volumes have an id greater than 256 |
Trigger |
Run mkhostcluster with its -ignoreseedvolume option |
Workaround |
Do not use the -ignoreseedvolume option with the mkhostcluster command |
|
8.5.0.0 |
Hosts |
HU02306 |
All |
Suggested
|
An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
(show details)
Symptom |
None |
Environment |
Systems running v8.3 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts |
HU02325 |
All |
Suggested
|
Tier 2 and Tier 3 recoveries can fail due to node warmstarts
(show details)
Symptom |
None |
Environment |
Systems running v8.4 with remote (LDAP) users that were created at v8.3.1 or earlier |
Trigger |
None |
Workaround |
Recreate all remote users after upgrading to v8.4 |
|
8.5.0.0 |
Reliability Availability Serviceability |
HU02331 |
All |
Suggested
|
Due to a threshold issue an error code 3400 may appear too often in the event log
(show details)
Symptom |
None |
Environment |
Systems using compression cards |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Compression |
HU02332 & HU02336 |
All |
Suggested
|
When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts |
HU02346 |
All |
Suggested
|
A mismatch between LBA stored by snapshot and disk allocator processes in the thin-provisioning component may cause a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Thin Provisioning |
HU02366 |
All |
Suggested
|
Slow internal resource reclamation by the RAID component can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02367 |
All |
Suggested
|
An issue with how RAID handles drive failures may lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02375 |
All |
Suggested
|
An issue in how the GUI handles volume data can adversely impact its responsiveness
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3.1 or later with large numbers of volumes |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02381 |
All |
Suggested
|
When the proxy server password is changed to one with more than 40 characters the config node will warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using the system-wide web proxy server |
Trigger |
Use chproxy CLI command to change password to one with >40 characters |
Workaround |
Use a proxy password of fewer than 40 characters (see the example below) |
|
8.5.0.0 |
Command Line Interface |
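A minimal sketch of the workaround above, assuming the proxy password is set via the chproxy command's -password parameter (the value shown is illustrative and under 40 characters):
    chproxy -password Pr0xyPassw0rd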
HU02382 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
Suggested
|
A complex interaction of tasks, including drive firmware cleanup and syslog reconfiguration, can cause a 10 second delay when each node unpends (eg during an upgrade)
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1 or later that have a remote syslog server configured |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Update |
HU02383 |
FS5100, FS7200, FS9100, FS9200, V7000 |
Suggested
|
An additional 20 second IO delay can occur when a system update commits
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1 or later that have a remote syslog server configured |
Trigger |
None |
Workaround |
Remove remote syslog servers from the configuration to reduce the additional delay to 10 seconds. It is not possible to completely eliminate the delay using a workaround |
|
8.5.0.0 |
System Update |
HU02385 |
All |
Suggested
|
Unexpected emails from the inventory script can be found on the mail server
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02386 |
FS5100, FS7200, FS9100, FS9200, V7000 |
Suggested
|
Enclosure fault LED can remain on due to race condition when location LED state is changed
(show details)
Symptom |
None |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02387 |
All |
Suggested
|
When using the GUI, the maximum Data Reduction Pool limit incorrectly includes child pools
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later using Data Reduction Pools |
Trigger |
Add a new Data Reduction Pool when 4 x parent and child pools already exist |
Workaround |
Use the CLI to create the new Data Reduction Pool (see the example below) |
|
8.5.0.0 |
Data Reduction Pools |
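A minimal sketch of the CLI workaround above, assuming the Data Reduction Pool is created with mkmdiskgrp and the -datareduction parameter (the name and extent size are illustrative):
    mkmdiskgrp -name drp_pool1 -ext 1024 -datareduction yes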
HU02391 |
All |
Suggested
|
An issue with how websockets connections are handled can cause the GUI to become unresponsive requiring a restart of the Tomcat server
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |
HU02405 |
FS5200 |
Suggested
|
An issue in the zero detection of the new Message Passing (MP) functionality can cause thin volumes to allocate space when writing zeros
(show details)
Symptom |
None |
Environment |
FlashSystem 5200 systems |
Trigger |
None |
Workaround |
When writing zeros from a host, always submit IO to the preferred node |
|
8.5.0.0 |
Inter-node messaging |
HU02411 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
Suggested
|
An issue in the NVMe drive presence checking can result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
NVMe |
HU02416 |
All |
Suggested
|
A timing window issue in DRP can cause a valid condition to be deemed invalid triggering a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using DRP |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Data Reduction Pools |
HU02419 |
All |
Suggested
|
During creation of a drive FRU ID, the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Command Line Interface, Drives |
HU02425 |
All |
Suggested
|
An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
FlashCopy |
HU02426 |
All |
Suggested
|
Where an email server accepts the STARTTLS command during the initial handshake, if TLS v1.2 is disabled or not supported, the system will be unable to send email alerts
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later connecting to a mail server that does not support/enable TLS v1.2 |
Trigger |
TLS v1.2 not supported or enabled on mail server |
Workaround |
Enable TLS v1.2 on mail server if available |
|
8.5.0.0 |
System Monitoring |
HU02437 |
All |
Suggested
|
Error 2700 is not reported in the Event Log when an incorrect NTP server IP is entered
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02443 |
All |
Suggested
|
An inefficiency in the RAID code that processes requests to free memory can cause the request to timeout leading to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
HU02444 |
All |
Suggested
|
Some security scanners can report unauthenticated targets against all the iSCSI IP addresses of a node
(show details)
Symptom |
None |
Environment |
Systems running v8.4.0 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts, iSCSI |
HU02445 |
All |
Suggested
|
When attempting to expand a volume, if the volume size is greater than 1TB the GUI may not display the expansion pop-up window
(show details)
Symptom |
None |
Environment |
Systems running v8.4.2 or later |
Trigger |
None |
Workaround |
Use the CLI |
|
8.5.0.0 |
Graphical User Interface |
HU02448 |
All |
Suggested
|
IP Replication statistics displayed in the GUI and XML can be incorrect
(show details)
Symptom |
None |
Environment |
Systems running v8.4.2 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Monitoring |
HU02450 |
FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 |
Suggested
|
A defect in the frame switching functionality of 32Gbps HBA firmware can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or less using 32Gbps HBAs |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Hosts |
HU02452 |
FS5100, FS5200, FS7200, FS9100, FS9200, SVC, V5100, V7000 |
Suggested
|
An issue in NVMe I/O write functionality can cause a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.4 or later using NVMe |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
NVMe |
HU02454 |
All |
Suggested
|
Large numbers of 2251 errors are recorded in the Event Log even though LDAP appears to be working
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
LDAP |
HU02461 |
All |
Suggested
|
Livedump collection can fail multiple times
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Support Data Collection |
HU02593 |
All |
Suggested
|
An NVMe drive may incorrectly report end of life due to flash degradation
(show details)
Symptom |
Error in Error Log |
Environment |
Systems running v8.2, v8.2.1, v8.3.0, v8.3.1 |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Drives |
IT33996 |
All |
Suggested
|
An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
RAID |
IT34949 |
All |
Suggested
|
lsnodevpd may show DIMM information in the wrong positions
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
Command Line Interface, Graphical User Interface |
IT34958 |
All |
Suggested
|
During a system update a node returning to the cluster, after upgrade, may warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.5.0.0 |
System Update |
IT37654 |
All |
Suggested
|
When creating a new encrypted array the CMMVC8534E error (Node has insufficient entropy to generate key material) can appear preventing array creation
(show details)
Symptom |
Configuration |
Environment |
Systems using encryption |
Trigger |
None |
Workaround |
Power cycle the affected node |
|
8.5.0.0 |
Encryption |
IT38858 |
All |
Suggested
|
Unable to resume Enable USB Encryption wizard via the GUI. The GUI will display error CMMVC9231E
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
Close/refresh browser before wizard is complete |
Workaround |
None |
|
8.5.0.0 |
Graphical User Interface |