HU02572 |
All |
HIPER |
When controllers running the specified code levels with SAS storage are power cycled or rebooted, there is a chance that 56 bytes of data will be incorrectly restored into the cache, leading to undetected data corruption. The system will attempt to flush the cache before an upgrade, so this defect is less likely to occur during an upgrade.
Symptom |
Data Integrity Loss |
Environment |
Controllers running 8.5.0.0 through 8.5.0.6, 8.5.1, 8.5.2 or 8.5.3 are vulnerable to this defect only if they have SAS storage. |
Trigger |
A power cycle or node reboot while the cache is not empty can trigger this defect |
Workaround |
None |
|
8.5.4.0 |
Drives |
SVAPAR-90459 |
All |
HIPER |
Possible undetected data corruption or multiple node warmstarts if a Traditional FlashCopy Clone of a volume is created before adding Volume Group Snapshots to the volume
Symptom |
Data Integrity Loss |
Environment |
Any system running code capable of creating FlashCopy Volume Group Snapshots and Traditional FlashCopy Clones |
Trigger |
If a Traditional FlashCopy Clone is taken of a production volume before Volume Group Snapshots are added to that production volume, it is possible that data in the original Traditional Clone will become corrupt |
Workaround |
To avoid this defect, use Volume Group Snapshots. If access to the data in these snapshots is needed, create a Volume Group Clone of the Volume Group Snapshot instead of creating a Traditional FlashCopy Clone of the production volume (see the example after this entry) |
|
8.5.4.0 |
FlashCopy |
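As a hedged sketch of the workaround above (the object names 'vg0', 'snap0', 'vg0_clone' and 'Pool0' are placeholders, and the exact mkvolumegroup parameters vary by code level, so check the CLI reference for your release), the Volume Group Snapshot and Clone can be created from the CLI roughly as follows:

  addsnapshot -name snap0 -volumegroup vg0    # take a Volume Group Snapshot of the production volume group
  mkvolumegroup -type clone -fromsourcegroup vg0 -snapshot snap0 -pool Pool0 -name vg0_clone    # populate a clone volume group from the snapshot, rather than taking a Traditional FlashCopy Clone of the production volume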
HU02586 |
All |
Critical |
When a safeguarded copy volume that is related to a restore operation is deleted while another related volume is offline, the system may warmstart repeatedly
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running Safeguarded Copy |
Trigger |
Deletion of a safeguarded volume while a restore operation is in progress |
Workaround |
Bring any related offline volumes back online |
|
8.5.4.0 |
Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-87729 |
All |
Critical |
After a system has logged '3201 : Unable to send to the cloud callhome servers', the system may end up with an inconsistency in the Event Log. This inconsistency can cause a number of symptoms, including node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Any system running V8.4.1, V8.4.2, V8.5.0, V8.5.1, V8.5.2 or V8.5.3 |
Trigger |
Cloud callhome errors |
Workaround |
None |
|
8.5.4.0 |
Call Home |
SVAPAR-87846 |
All |
Critical |
Node warmstarts may occur with an unusual workload pattern on volumes using Policy-based Replication
Symptom |
Multiple Node Warmstarts |
Environment |
Any system running V8.5.2 or V8.5.3 with Policy-based Replication configured |
Trigger |
An unusual workload pattern on volumes with a replication policy; specifically, a large number (tens of thousands) of writes to the same 128k grain on a single VDisk submitted in a short amount of time |
Workaround |
Stop the workload pattern, disable replication, or force replication into cycling mode on the affected volume groups |
|
8.5.4.0 |
Policy-based Replication |
SVAPAR-91111 |
All |
Critical |
USB devices connected to an FS5035 node may be formatted on upgrade to 8.5.3 software
Symptom |
Loss of Access to Data |
Environment |
FS5035 systems with a USB device connected |
Trigger |
Upgrading to 8.5.3 |
Workaround |
Remove USB device before upgrading to 8.5.3 |
|
8.5.4.0 |
Encryption |
HU01782 |
All |
High Importance |
A node warmstart may occur due to a potentially faulty SAS hardware component on the system, such as a SAS cable, SAS expander or SAS HIC
Symptom |
Loss of Access to Data |
Environment |
Systems that have a faulty SAS hardware component |
Trigger |
Faulty SAS component |
Workaround |
None |
|
8.5.4.0 |
Drives |
HU02539 |
All |
High Importance |
If an IP address is moved to a different port on a node, the old routing table entries are not refreshed, so the IP address may be inaccessible through the new port
Symptom |
None |
Environment |
All |
Trigger |
Moving an IP address to a different port on a node |
Workaround |
Either reboot the node (see the example after this entry) or assign an IP address that has not been used on the node since it was last rebooted |
|
8.5.4.0 |
|
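A hedged illustration of the reboot option: running the service CLI command below against the affected node (for example from its service assistant IP) reboots it and rebuilds the routing table. Confirm the exact syntax for your release, and check for dependent volumes before rebooting.

  satask stopnode -reboot    # reboot the affected node; run against that node's service IP, or append its panel name if run from another node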
HU02558 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
High Importance |
A timing window exists if a node encounters repeated timeouts on I/O compression requests. This can cause two threads to conflict with each other, resulting in a deadlock condition.
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2 and later |
Trigger |
None |
Workaround |
None. When the node detects the deadlock condition, it warmstarts to clear the issue |
|
8.5.4.0 |
Compression |
HU02589 |
FS5200, FS7200, FS9100, FS9200, FS9500 |
High Importance |
Reducing the expiration date of snapshots can cause volume creation and deletion to stall
Symptom |
None |
Environment |
Systems running V8.4.2, V8.5.0, V8.5.1 or V8.5.2 |
Trigger |
Reducing the expiration date of snapshots |
Workaround |
Before the oldest snapshot created under the new policy is due to be deleted, manually delete some of the older snapshots from the previous policy to prevent the deletions from overlapping |
|
8.5.4.0 |
FlashCopy, Policy-based Replication, Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-83290 |
FS5000 |
High Importance |
An issue with the Trusted Platform Module (TPM) in FlashSystem 50xx nodes may cause the TPM to become unresponsive. This can happen after a number of weeks of continuous runtime.
Symptom |
Single Node Warmstart |
Environment |
FS50xx platforms running V8.4.0, V8.4.1, V8.4.2, V8.5.0, V8.5.1, V8.5.2 or V8.5.3 |
Trigger |
Unresponsive TPM |
Workaround |
Reboot each node in turn, waiting 30 minutes between the two nodes in an I/O group to allow hosts to fail over. Check that there are no volumes dependent on the second node before proceeding with its reboot (see the example after this entry). After all nodes have been rebooted, retry the configuration action, which should now complete successfully. |
|
8.5.4.0 |
|
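A hedged sketch of the dependency check mentioned above ('node2' is a placeholder node name). Run it against the remaining node of the I/O group before rebooting it; an empty result means no volumes depend solely on that node and it is safe to proceed.

  lsdependentvdisks -node node2    # lists any volumes that would go offline if node2 is rebooted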
SVAPAR-84305 |
All |
High Importance |
A node may warmstart when attempting to run the 'chsnmpserver -community' command without any additional parameter
Symptom |
Loss of Access to Data |
Environment |
Any platform running V8.4.0 |
Trigger |
None |
Workaround |
Supply a value with the 'chsnmpserver -community' parameter rather than running it with no argument (see the example after this entry) |
|
8.5.4.0 |
System Monitoring |
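A hedged example of the workaround (the SNMP server name 'snmp0' and community string 'public' are placeholders for objects on your system): always provide the community string value with the -community parameter.

  chsnmpserver -community public snmp0    # set the community string on the existing SNMP server object instead of running '-community' with no value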
SVAPAR-85093 |
All |
High Importance |
Systems using Policy-based Replication may experience node warmstarts if host I/O consists of large write I/Os with a high queue depth
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with Policy-Based Replication and heavy large write I/O workloads |
Trigger |
Systems configured to use Policy-based Replication, with host I/O consisting of large write I/Os at a high queue depth |
Workaround |
None |
|
8.5.4.0 |
Policy-based Replication |
SVAPAR-85396 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 |
High Importance |
Replacement Samsung NVMe drives may show as unsupported, or may fail as unsupported during a firmware upgrade, due to a VPD read problem
Symptom |
Loss of Redundancy |
Environment |
Systems running V8.5.3 with industry standard NVMe drives |
Trigger |
Drive firmware update or drive replacements |
Workaround |
Manually power cycling the slot of the failed drive often resolves the problem |
|
8.5.4.0 |
Drives |
SVAPAR-89780 |
All |
High Importance |
A node may warmstart after the FlashCopy command 'stopfcconsistgrp' is run, due to the FlashCopy maps in the consistency group being in an invalid state
Symptom |
Single Node Warmstart |
Environment |
Any system using FlashCopy |
Trigger |
Running the 'stopfcconsistgrp' command when the FlashCopy maps are in an invalid state |
Workaround |
None |
|
8.5.4.0 |
FlashCopy |
SVAPAR-89951 |
All |
High Importance |
A single node warmstart might occur when a volume group with a replication policy switches the replication to cycling mode.
Symptom |
Single Node Warmstart |
Environment |
Systems with policy-based replication |
Trigger |
Replication is switching from journaling to cycling mode |
Workaround |
Contact IBM support for an action plan to force replication into cycling mode permanently |
|
8.5.4.0 |
Policy-based Replication |
SVAPAR-90395 |
FS9500, SVC |
High Importance |
FS9500 and SV3 might suffer from poor Remote Copy performance due to a lack of internal messaging resources
Symptom |
Performance |
Environment |
FS9500 or SV3 systems with Remote Copy, typically running HyperSwap, Metro Mirror, Global Mirror or GMCV |
Trigger |
Not enough resources available for Remote Copy |
Workaround |
None |
|
8.5.4.0 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror |
HU02594 |
All |
Suggested |
Initiating a drive firmware update via the management user interface for one drive class can prompt all drives to be updated
Symptom |
None |
Environment |
Any system running V8.4.2, V8.5.0, V8.5.1, V8.5.2 or V8.5.3 |
Trigger |
None |
Workaround |
None |
|
8.5.4.0 |
Drives, System Update |
SVAPAR-89296 |
All |
Suggested |
Immediately after an upgrade from pre-8.4.0 to 8.4.0 or later, EasyTier may stop promoting hot data to the tier0_flash tier if that tier contains non-FCM storage. This issue will automatically resolve on the next upgrade
Symptom |
Performance |
Environment |
Multi-tier pools where tier0_flash contains non-FCM storage |
Trigger |
Upgrading from pre-8.4.0 to 8.4.0 or later |
Workaround |
Upgrade to any later version of software, or warmstart the config node |
|
8.5.4.0 |
EasyTier |
SVAPAR-89781 |
All |
Suggested |
The 'lsportstats' command does not work via the REST API on code levels prior to 8.5.4.0 (see the example after this entry)
Symptom |
Configuration |
Environment |
Any system running 8.4.0 or higher |
Trigger |
None |
Workaround |
None |
|
8.5.4.0 |
|
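Once a system is on 8.5.4.0 or later, 'lsportstats' can be retrieved through the REST API. A hedged curl sketch follows; <system_ip>, <user>, <password> and <token> are placeholders, and -k (which disables certificate verification) is for illustration only.

  curl -k -X POST https://<system_ip>:7443/rest/auth -H "X-Auth-Username: <user>" -H "X-Auth-Password: <password>"    # returns a JSON authentication token
  curl -k -X POST https://<system_ip>:7443/rest/lsportstats -H "X-Auth-Token: <token>"    # run the command using the returned token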