HU02467 |
All |
Critical
|
When one node disappears from the cluster, the surviving node can be unable to achieve quorum allegiance in a timely manner, causing its lease to expire
(show details)
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Quorum |
HU02471 |
All |
Critical
|
After starting a FlashCopy map with -restore in a graph with a GMCV secondary disk that was stopped with -access, there can be a data integrity issue
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using GMCV |
Trigger |
Given a configuration where a GMCV secondary volume A has a cascade of two FlashCopy maps to volumes B and C (A -> B -> C), and the mapping B -> C has been started while GMCV is running: stop the GMCV relationship with -access, then start FlashCopy map A -> B with -restore. Any I/O to volume A will corrupt data on volume C |
Workaround |
Wait for a GMCV backward map to complete before starting related FlashCopy maps with -restore |
|
8.3.1.9 |
FlashCopy, Global Mirror With Change Volumes |
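As an illustration of the HU02471 workaround above, the following is a minimal CLI sketch of the safe ordering; the relationship and map names (rcrel0, fcmap_backward, fcmap_AB) are hypothetical placeholders:
  # Stop the GMCV relationship with access to the secondary
  stoprcrelationship -access rcrel0
  # Check the backward FlashCopy map from the change volume; wait until its progress reaches 100
  lsfcmap -delim : fcmap_backward
  # Only then start the related FlashCopy map with -restore
  startfcmap -prep -restore fcmap_AB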
HU02561 |
All |
Critical
|
If there are a high number of FC mappings sharing the same target, the internal array that is used to track the FC mappings can be mishandled, causing it to overrun. This will cause a cluster-wide warmstart to occur
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.3.1, v8.4.0, v8.4.1, v8.4.2 or v8.5.0 |
Trigger |
Cascaded FlashCopy mappings where one of the FlashCopy target volumes is the source of 255 FlashCopy mappings |
Workaround |
None |
|
8.3.1.9 |
FlashCopy |
IT41088 |
FS5000, FS5100, V5000, V5100 |
Critical
|
Systems with low memory that have a large number of resyncing RAID arrays can run out of RAID rebuild control blocks
(show details)
Symptom |
Loss of Access to Data |
Environment |
Low memory systems such as 5015/5035 |
Trigger |
Systems with 64GB or less of cache with resync operations spread across multiple RAID arrays |
Workaround |
None |
|
8.3.1.9 |
RAID |
HU02010 |
All |
High Importance
|
A single node warmstart may occur when a drive in a non-distributed RAID array is taken temporarily out-of-sync due to slow performance
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with non-distributed RAID arrays |
Trigger |
Slow drive |
Workaround |
None |
|
8.3.1.9 |
RAID |
HU02485 |
All |
High Importance
|
Recurring node warmstarts on systems with DRP that have been upgraded to v8.3.1.7 or v8.3.1.8
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system running v8.3.1.7 or v8.3.1.8 |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Data Reduction Pools, System Update |
IT41835 |
All |
High Importance
|
A T2 recovery may occur when a failed drive in the system is replaced with an unsupported drive type
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems that have drives reporting as UNSUPPORTED |
Trigger |
A drive with a tech type of UNSUPPORTED will cause this T2 recovery during drive replacement. |
Workaround |
The system should recover automatically. To prevent the issue in the future, make sure a system-supported drive is used during replacement |
|
8.3.1.9 |
Drives |
HU02306 |
All |
Suggested
|
An offline host port can still be shown as active in lsfabric and the associated host can be shown as online despite being offline
(show details)
Symptom |
None |
Environment |
Systems running v8.3 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Hosts |
HU02364 |
All |
Suggested
|
False 989001 Managed Disk Group space warnings can be generated
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
System Monitoring |
HU02367 |
All |
Suggested
|
An issue with how RAID handles drive failures may lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
RAID |
HU02372 |
FS9100, SVC, V5000, V5100, V7000 |
Suggested
|
Host SAS port 4 is missing from the GUI view on some systems.
(show details)
Symptom |
None |
Environment |
Any system that has SAS ports |
Trigger |
None |
Workaround |
Run the command lsportsas to view all ports |
|
8.3.1.9 |
Graphical User Interface |
HU02391 |
All |
Suggested
|
An issue with how websocket connections are handled can cause the GUI to become unresponsive, requiring a restart of the Tomcat server
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Graphical User Interface |
HU02443 |
All |
Suggested
|
An inefficiency in the RAID code that processes requests to free memory can cause the request to time out, leading to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
RAID |
HU02453 |
All |
Suggested
|
It may not be possible to connect to GUI or CLI without a restart of the Tomcat server
(show details)
Symptom |
None |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Command Line Interface, Graphical User Interface |
HU02474 |
All |
Suggested
|
An SFP failure can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.4 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Reliability Availability Serviceability |
HU02499 |
All |
Suggested
|
A pop-up with the message 'The server was unable to process the request' may occur due to an invalid timestamp in the file used to provide the pop-up reminder
(show details)
Symptom |
None |
Environment |
Code levels v8.3.1, v8.4.0, v8.4.1, v8.4.2, v8.5.0 and v8.5.1 |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Graphical User Interface |
HU02564 |
All |
Suggested
|
The 'charraymember' command fails with a degraded DRAID array, even though the syntax of the command is correct
(show details)
Symptom |
None |
Environment |
All |
Trigger |
Running the 'charraymember' command against a degraded DRAID array |
Workaround |
Use the '-immediate' option when running the 'charraymember' command |
|
8.3.1.9 |
Distributed RAID |
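A minimal sketch of the HU02564 workaround above; the member index, drive ID and array name are hypothetical placeholders:
  # Exchange a member of the degraded DRAID array, using -immediate as the workaround suggests
  charraymember -member 3 -newdrive 12 -immediate mdisk5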
HU02593 |
All |
Suggested
|
An NVMe drive may incorrectly report end of life due to flash degradation
(show details)
Symptom |
Error in Error Log |
Environment |
Systems running v8.2, v8.2.1, v8.3.0 or v8.3.1 |
Trigger |
None |
Workaround |
None |
|
8.3.1.9 |
Drives |
HU02409 |
All |
Critical
|
If the rmhost command is executed with -force for an MS Windows server, then an issue in the iSCSI driver can cause the relevant target initiator to become unresponsive
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with iSCSI connected hosts |
Trigger |
Run rmhost -force CLI command for an iSCSI connected MS Windows host |
Workaround |
Do not use -force when removing a host object |
|
8.3.1.7 |
Hosts, iSCSI |
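For the HU02409 workaround above, a sketch of removing an iSCSI-attached Windows host without -force; the host and volume names are hypothetical:
  # Remove the host mappings first, then delete the host object without -force
  rmvdiskhostmap -host winhost0 vol0
  rmhost winhost0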
HU02410 |
SVC |
Critical
|
A timing window issue in the transition to a spare node can cause a cluster-wide Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v8.1 or later with Hot Spare Nodes |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
Hot Spare Node |
HU02455 |
All |
Critical
|
After converting a system from 3-site to 2-site, a timing window issue can trigger a cluster Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later in a 3-site topology |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
3-Site using HyperSwap or Metro Mirror |
HU02343 |
All |
High Importance
|
For Huawei Dorado V3 Series backend controllers it is possible that not all available target ports will be utilized. This reduces potential I/O throughput and can cause high read/write backend queue time on the cluster, impacting front-end latency for hosts
(show details)
Symptom |
Performance |
Environment |
Systems using Huawei Dorado V3 Series backend controllers |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
Backend Storage |
HU02460 |
All |
High Importance
|
Multiple node warmstarts can be triggered by ports on the 32Gb Fibre Channel adapter failing
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.3 or later using 32Gbps HBAs |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
Hosts |
HU02466 |
All |
High Importance
|
An issue in the handling of drive failures can result in multiple node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running v8.3.0 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
RAID |
HU01209 |
All |
Suggested
|
It is possible for the Fibre Channel driver to be offered an unsupported length of data resulting in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Fibre Channel connectivity |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
Storage Virtualisation |
HU02433 |
FS5000, FS5100, SVC, V5000, V5100, V7000 |
Suggested
|
When a BIOS upgrade occurs, excessive tracefile entries can be generated
(show details)
Symptom |
None |
Environment |
Gen 1 & 2 systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
System Update |
HU02451 |
All |
Suggested
|
An incorrect IP Quorum lease extension setting can lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.0 or later using IP Quorum |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
IP Quorum |
IT33996 |
All |
Suggested
|
An issue in RAID where unreserved resources fail to be freed up can result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.7 |
RAID |
HU02296 |
All |
HIPER
|
The zero page functionality can become corrupt, causing a volume to be initialised with non-zero data
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Storage Virtualisation |
HU02327 |
All |
HIPER
|
Using addvdiskcopy in conjunction with expandvdisk with format may result in the original being overwritten by the new copy, producing blank copies. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later |
Trigger |
Using addvdiskcopy and expandvdisk with format |
Workaround |
Wait until the format is completed before adding a copy |
|
8.3.1.6 |
Volume Mirroring |
HU02384 |
SVC |
HIPER
|
An inter-node message queue can become stalled, leading to an I/O timeout warmstart, and temporary loss of access
(show details)
Symptom |
Offline Volumes |
Environment |
SVC systems using SV1 model nodes running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Reliability Availability Serviceability |
HU02400 |
All |
HIPER
|
A problem in the virtualization component of the system can cause a migration IO to be submitted in an incorrect context resulting in a node warmstart. In some cases it is possible that this IO has been submitted to an incorrect location on the backend, which can cause data corruption of an isolated small area
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Storage Virtualisation |
HU02418 |
All |
HIPER
|
During a DRAID array rebuild data can be written to an incorrect location. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Distributed RAID, RAID |
DT112601 |
All |
Critical
|
Deleting an image-mode mounted source volume while migration is ongoing could trigger a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later |
Trigger |
Delete the source volume when migration progress is showing 0% |
Workaround |
Wait for lsmigrate progress to report a non-zero progress value before issuing a volume delete |
|
8.3.1.6 |
Storage Virtualisation |
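For DT112601, a sketch of the safe ordering described in the workaround above; the volume name is a hypothetical placeholder:
  # Check that the migration has made progress before deleting the image-mode source volume
  lsmigrate
  # Only issue the delete once lsmigrate reports a non-zero progress value
  rmvdisk vol0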
HU02226 |
All |
Critical
|
Due to an issue in DRP a node can repeatedly warmstart whilst rejoining a cluster
(show details)
Symptom |
Offline Volumes |
Environment |
Systems running v8.3.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Data Reduction Pools |
HU02342 |
All |
Critical
|
Occasionally, when an offline drive returns to an online state later than its peers in the same RAID array, there can be multiple node warmstarts that send nodes into a service state
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
RAID |
HU02373 |
All |
Critical
|
An incorrect compression flag in metadata can take a DRP offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using Data Reduction Pools and Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Data Reduction Pools |
HU02393 |
All |
Critical
|
Automatic resize of compressed/thin volumes may fail causing warmstarts on both nodes in an I/O group
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Storage Virtualisation |
HU02397 |
All |
Critical
|
A Data Reduction Pool with deduplication enabled can retain some stale state after deletion and recreation. This has no immediate effect. However, if a node later goes offline, this condition can cause the pool to be taken offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Data Reduction Pools |
HU02401 |
All |
Critical
|
EasyTier can move extents between identical mdisks until one runs out of space
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later using EasyTier |
Trigger |
None |
Workaround |
Disable EasyTier. Manually migrate extents between mdisks |
|
8.3.1.6 |
EasyTier |
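For the HU02401 workaround above, a rough CLI sketch; the pool, MDisk and volume names and the extent count are hypothetical placeholders:
  # Disable EasyTier on the affected pool
  chmdiskgrp -easytier off Pool0
  # Manually migrate extents of a volume from one MDisk to another
  migrateexts -source mdisk1 -target mdisk2 -exts 16 -threads 2 -vdisk vol0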
HU02406 |
All |
Critical
|
An interoperability issue between Cisco NX-OS firmware and the Spectrum Virtualize Fibre Channel driver can cause a node warmstart on NPIV failback (for example during an upgrade) with the potential for a loss of access. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using NPIV that are connected to Cisco SAN equipment running NX-OS 8.4(2c) or later |
Trigger |
Initiate an NPIV failback operation by, for example, performing an upgrade |
Workaround |
Disable NPIV (which will require any hot spare nodes to be removed first) |
|
8.3.1.6 |
Interoperability |
HU02414 |
All |
Critical
|
Under specific sequence and timing of circumstances the garbage collection process can timeout and take a pool offline temporarily
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Data Reduction Pools |
HU02429 |
All |
Critical
|
The system can go offline shortly after changing the SMTP settings using the chemailserver command via the GUI
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
System Monitoring |
HU02326 |
SVC |
High Importance
|
Delays in passing messages between nodes in an I/O group can adversely impact write performance
(show details)
Symptom |
Performance |
Environment |
SVC systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Performance |
HU02345 |
All |
High Importance
|
When connectivity to nodes in a local or remote cluster is lost, inflight IO can become stuck in an aborting state, consuming system resources and potentially adversely impacting performance
(show details)
Symptom |
Performance |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
HyperSwap, Metro Mirror |
HU02362 |
FS5100, FS7200, FS9100, FS9200, SVC, V5100, V7000 |
High Importance
|
When the RAID scrub process encounters bad grains, the peak response time for reads and writes can be adversely impacted
(show details)
Symptom |
Performance |
Environment |
Systems running v8.2 or later using NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
RAID |
HU02376 |
All |
High Importance
|
FlashCopy maps may get stuck at 99% due to inconsistent metadata accounting between nodes
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2.1 or later using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
FlashCopy |
HU02377 |
All |
High Importance
|
A race condition in DRP may stop I/O from being processed, leading to timeouts
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Data Reduction Pools |
HU02392 |
All |
High Importance
|
Validation in the Upload Support Package feature will reject new case number formats in the PMR field
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
At v8.4.0.0 or later use the CLI command satask supportupload -pmr pmr_number -filename fullpath/filename |
|
8.3.1.6 |
Support Data Collection |
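For HU02392, the workaround command quoted above can be run as follows; the case number and file path are hypothetical placeholders:
  # On v8.4.0.0 or later, upload a support package directly from the CLI
  satask supportupload -pmr 12345,678,900 -filename /dumps/snap.single.123456.tgz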
HU02422 |
All |
High Importance
|
GUI performance can be degraded when displaying large numbers of volumes or other objects
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Graphical User Interface |
IT36792 |
All |
High Importance
|
EasyTier can select a default performance profile for a drive which could cause too much hot data to be moved to lower tiers
(show details)
Symptom |
Performance |
Environment |
Systems running v8.2.1 or later using EasyTier |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
EasyTier |
IT38015 |
All |
High Importance
|
During RAID rebuild or copyback on systems with 16GB or less of memory, cache handling can lead to a deadlock which results in timeouts
(show details)
Symptom |
Performance |
Environment |
Systems with 16GB or less of memory |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
RAID |
HU02331 |
All |
Suggested
|
Due to a threshold issue, an error code 3400 may appear too often in the event log
(show details)
Symptom |
None |
Environment |
Systems using compression cards |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Compression |
HU02332 & HU02336 |
All |
Suggested
|
When an I/O with invalid or inconsistent SCSI data but a good checksum is received from a host, it may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Hosts |
HU02366 |
All |
Suggested
|
Slow internal resource reclamation by the RAID component can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
RAID |
HU02375 |
All |
Suggested
|
An issue in how the GUI handles volume data can adversely impact its responsiveness
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3.1 or later with large numbers of volumes |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Graphical User Interface |
HU02399 |
SVC |
Suggested
|
Boot drives may be reported as having an invalid state by the GUI, even though they are online
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Graphical User Interface |
HU02419 |
All |
Suggested
|
During creation of a drive FRU ID, the resulting unique number can contain a space character, which can lead to CLI commands that return this value presenting it as a truncated string
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Command Line Interface, Drives |
HU02424 |
All |
Suggested
|
Frequent GUI refreshing adversely impacts usability on some screens
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
Graphical User Interface |
HU02425 |
All |
Suggested
|
An issue in the handling of internal messages, when the system has a high IO workload to two or more different FlashCopy maps in the same dependency chain, can result in incorrect counters. The node will warmstart to clear this condition.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.6 |
FlashCopy |
HU02186 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
HIPER
|
NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1, or later with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.3.1.5 |
RAID |
HU02360 |
All |
High Importance
|
Cloud Callhome may stop working and provide no indication of this in the event log. For more details refer to this Flash
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.5 |
System Monitoring |
HU02186 (reverted) |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
HIPER
|
This APAR has been reverted at this PTF. This APAR will be re-applied in a future PTF
|
8.3.1.4 |
RAID |
HU02261 |
All |
HIPER
|
A Data Reduction Pool may be taken offline when metadata is detected to hold an invalid compression flag. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Data Reduction Pools |
HU02313 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
HIPER
|
When a FlashCore Module (FCM) fails there is a chance that this can trigger other FCMs in the same control enclosure to also fail. If enough additional drives fail, at the same time, this can take the array offline and cause a loss of access to data. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later using Flash Core Modules |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Drives |
HU02338 |
All |
HIPER
|
An issue in the setting up of reverse FlashCopy mappings can cause the background copy to finish prematurely providing an incomplete target image
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
FlashCopy |
HU02340 |
All |
HIPER
|
High replication workloads can cause multiple warmstarts with a loss of access at the partner cluster
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using IP Replication |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
IP Replication |
IT35555 |
V5000 |
HIPER
|
Storwize V5030 systems running v8.3.1.3 may experience an offline pool under heavy I/O workloads
(show details)
Symptom |
Loss of Access to Data |
Environment |
Storwize V5030 systems running v8.3.1.3 |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Drives |
HU02282 |
All |
Critical
|
After a code upgrade, the config node may exhibit high write response times. In exceptionally rare circumstances an MDisk group may be taken offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Cache |
HU02314 |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
Critical
|
Due to a RAID issue, when a bad block is detected on an NVMe drive there may be multiple node warmstarts, with a possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later using NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Drives |
HU02315 |
All |
Critical
|
Failover for VMware iSER hosts may pause I/O for more than 120 seconds
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems presenting volumes to VMware iSER hosts |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Hosts |
HU02321 |
All |
Critical
|
Where nodes rely on RDMA clustering alone, if a node is removed, warmstarts, or goes down for upgrade, there may be a delay in internode communication leading to lease expiries
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using iSER RDMA clustering with iSCSI hosts |
Trigger |
None |
Workaround |
When upgrading, add an alternative, non-iSER medium for node to node communications |
|
8.3.1.4 |
iSCSI |
HU02322 |
All |
Critical
|
A deadlock condition in the Data Reduction Pool function may cause multiple node warmstarts and a temporary loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Data Reduction Pools |
HU02323 |
All |
Critical
|
Stalled I/O during DRAID expansion can cause node warmstarts and a temporary loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 or later using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Distributed RAID |
HU02153 |
All |
High Importance
|
Fabric or host issues can cause aborted I/Os to block the port throttle queue, leading to adverse performance that is cleared by a node warmstart
(show details)
Symptom |
Performance |
Environment |
Systems running v8.2.1, or later, with 8Gb FC adapters |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Hosts |
HU02227 |
FS7200, FS9100, FS9200, SVC, V5100, V7000 |
High Importance
|
Certain I/O patterns can cause compression hardware to post errors. When those errors exceed a threshold the node can be taken offline
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2 or later using compressed volumes |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Compression |
HU02311 |
All |
High Importance
|
An issue in volume copy flushing may lead to higher than expected write cache delays
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Cache |
HU02317 |
All |
High Importance
|
A DRAID expansion can stall shortly after it is initiated
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.3.1 or later using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Distributed RAID |
HU02095 |
All |
Suggested
|
The effective_used_capacity field of lsarray/lsmdisk commands should be empty for RAID arrays which do not contain overprovisioned drives. However, sometimes this field can be zero even though it should be empty. This can cause incorrect provisioned capacity reporting in the GUI
(show details)
Symptom |
None |
Environment |
Systems with non-FCM arrays |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Graphical User Interface |
HU02280 |
All |
Suggested
|
Spectrum Control or Storage Insights may be unable to collect stats after a Tier 2 recovery or system powerdown
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1.2 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
System Monitoring |
HU02292 & HU02308 |
All |
Suggested
|
The use of maximum replication delay within Global Mirror may occasionally cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.4 |
Global Mirror |
HU02277 |
All |
HIPER
|
RAID parity scrubbing can become stalled causing an accumulation of media errors leading to multiple drive failures with the possibility of data integrity loss. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems with model MZILS3T8HMLH read intensive SSDs at drive firmware MS24 are particularly susceptible to the data integrity (DI) issue. Other drive types may see multiple failures without DI issue |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
RAID |
HU02058 |
All |
Critical
|
Changing a remote copy relationship from GMCV to MM or GM can result in a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02162 |
All |
Critical
|
When a node warmstart occurs during an upgrade from v8.3.0.0, or earlier, to v8.3.0.1, or later, with dedup enabled, it can lead to repeated node warmstarts across the cluster necessitating a Tier 3 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using deduplication |
Trigger |
Upgrade from v8.3.0.0, or earlier, to v8.3.0.1, or later, with deduplication enabled |
Workaround |
Upgrade to an ifix provided by Support |
|
8.3.1.3 |
Data Reduction Pools |
HU02180 |
All |
Critical
|
When a svctask restorefcmap command is run on a VVol that is the target of another FlashCopy mapping both nodes in an I/O group may warmstart
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1 or v8.3.x that are using VVols |
Trigger |
Run a svctask restorefcmap command where the volume being restored is also the target volume of another FlashCopy mapping |
Workaround |
None |
|
8.3.1.3 |
vVols |
HU02184 |
All |
Critical
|
When a 3PAR controller experiences a fault that prevents normal I/O processing it may issue a SCSI TARGET RESET command. This command is not supported and may cause multiple node asserts, possibly cluster-wide
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with 3PAR backend controllers |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Backend Storage |
HU02196 & HU02253 |
All |
Critical
|
A particular sequence of internode messaging delays can lead to a cluster wide lease expiry
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Reliability Availability Serviceability |
HU02210 |
All |
Critical
|
There is a very small timing window where a volume may be reported as offline, to a host, during its conversion from a regular volume to a HyperSwap volume
(show details)
Symptom |
Offline Volumes |
Environment |
Systems moving to a HyperSwap topology that have used Remote Copy |
Trigger |
None |
Workaround |
Individual volume copies of a HyperSwap volume can be converted between fully allocated, thin and compressed using the normal capacity savings conversion method by using addvdiskcopy. Then once the sync has completed the vdisk copy that is no longer needed can be deleted. This method avoids the need for removing a HyperSwap volume copy and hence the need to add a HyperSwap volume copy. NOTE: Please pay particular attention to the distinction between a vdisk copy and a volume copy. |
|
8.3.1.3 |
HyperSwap |
HU02213 |
SVC |
Critical
|
A Hot Spare Node (HSN) timing window issue can, during an HSN activation or deactivation, cause the cluster to broadcast an invalid VPD update to other clusters on the SAN. This may trigger a Tier 2 recovery on the other cluster. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
SVC systems, with Hot Spare Nodes, using remote copy partnerships |
Trigger |
None |
Workaround |
Prior to an upgrade, or node hardware maintenance, remove the HSN |
|
8.3.1.3 |
Hot Spare Node |
HU02230 |
FS7200, FS9100, FS9200, V7000 |
Critical
|
For IBM Flash Core Modules a change of state, from unused to candidate, can lead to a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using IBM Flash Core Modules |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Drives |
HU02262 |
SVC, V5000, V7000 |
Critical
|
Entering the CLI applydrivesoftware -cancel command may result in cluster-wide warmstarts
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Drives |
HU02266 |
All |
Critical
|
An issue in auto-expand can cause expansion to fail and the volume to be taken offline
(show details)
Symptom |
Offline Volumes |
Environment |
Systems running v8.2.1 or later using thin-provisioning |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Thin Provisioning |
HU02289 |
FS9200, SVC |
Critical
|
An issue with internal resource allocation in high-end systems, with thousands of mirror copies, may cause multiple warmstarts with the possibility of a loss of access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems with a high number of cores (e.g. SVC SV2 models & FS9200) |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Volume Mirroring |
HU02295 |
SVC |
Critical
|
When upgrading from v8.2.1 or v8.3, in the presence of hot spare nodes, an issue with the handling of node metadata may cause a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
SVC systems running v8.2.1 or v8.3 with Hot Spare Node |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
System Update |
HU02390 |
All |
Critical
|
A memory handling issue in the REST API may cause an out-of-memory condition when listing a large number of volumes
(show details)
Symptom |
None |
Environment |
Systems running v8.1.3 or later |
Trigger |
Using the REST API to issue an lsvdisk command when there are more than 10,000 volumes on the system, or issuing multiple lsvdisk commands concurrently for more than 2,000 volumes |
Workaround |
None |
|
8.3.1.3 |
REST API |
HU02156 |
All |
High Importance
|
Global Mirror environments may experience more frequent 1920 events due to writedone message queuing
(show details)
Symptom |
Performance |
Environment |
Systems using Global Mirror |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Global Mirror |
HU02164 |
All |
High Importance
|
An issue in Remote Copy may cause a loss of hardened data when a node is warmstarted
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02194 |
All |
High Importance
|
Password reset via USB drive does not work as expected and the user is not able to log in to the Management or Service Assistant GUI with the new password
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Reliability Availability Serviceability |
HU02201 & HU02221 |
All |
High Importance
|
Shortly after upgrading drive firmware, specific drive models can fail due to 'Too many long IOs to drive for too long' errors
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems with the following drive models:
- ST300MM0009 (300GB) - B5B8
- ST600MM0009 (600GB) - B5B8
- ST900MM0009 (900GB) - B5B8
- ST1200MM0009 (1200GB) - B5B8
- ST1200MM0129 (1800GB) - B5C9
- ST2400MM0129 (2400GB) - B5C9
- ST300MP0006 (300GB) - B6AA
- ST600MP0006 (600GB) - B6AA
- ST900MP0146 (900GB) - B6CB
|
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Drives |
HU02248 |
All |
High Importance
|
After upgrade the system may be unable to perform LDAP authentication
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems with GUI logins via LDAP |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
LDAP |
II14767 |
SVC |
High Importance
|
An issue with how cache handles ownership of volumes across multiple sites can lead to cross-site destage, adversely impacting write latency. For more details refer to this Flash
(show details)
Symptom |
Performance |
Environment |
SVC systems in a Stretched Cluster configuration |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Cache |
HU02142 |
All |
Suggested
|
It is possible for a backend unmap process to become stalled, preventing system configuration changes from completing
(show details)
Symptom |
Configuration |
Environment |
Systems running v8.1.0, or later, using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Distributed RAID |
HU02208 |
All |
Suggested
|
An issue with the handling of files by quorum can lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Quorum |
HU02241 |
All |
Suggested
|
IP Replication can fail to create IP partnerships via the secondary cluster management IP
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
Use primary management IP to run mkippartnership commands |
|
8.3.1.3 |
IP Replication |
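For HU02241, a sketch of creating the IP partnership from the primary cluster management IP, as the workaround above suggests; the address and bandwidth values are hypothetical placeholders:
  # Run from an SSH session to the primary management IP, not the secondary
  mkippartnership -type ipv4 -clusterip 192.0.2.50 -linkbandwidthmbits 100 -backgroundcopyrate 50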
HU02244 |
SVC |
Suggested
|
False positive node error 766 (depleted CMOS battery) messages may appear in the Event Log
(show details)
Symptom |
None |
Environment |
SVC systems with SV1 model nodes running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
System Monitoring |
HU02251 |
All |
Suggested
|
A warmstart may occur when a node receives iSCSI host login/logout requests out of sequence
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.3.1 or later with iSCSI connected hosts |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Hosts, iSCSI |
HU02255 |
All |
Suggested
|
A timing issue in the processing of login requests can cause a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Command Line Interface, Graphical User Interface |
HU02281 |
All |
Suggested
|
When upgrading from v8.2.1, or earlier, to v8.3.0, or later, the CLI and GUI may incorrectly show all hosts offline. Checks from the host perspective will show them to be online
(show details)
Symptom |
None |
Environment |
Systems running v7.8, v8.1 or v8.2.x |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Hosts, System Update |
HU02303 & HU02305 |
All |
Suggested
|
A configuration node warmstart will occur if mkhostcluster is run with -ignoreseedvolume and the ignored volumes have an ID greater than 256
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3, or later, where ignored volumes have an id greater than 256 |
Trigger |
Run mkhostcluster with its -ignoreseedvolume option |
Workaround |
Do not use the -ignoreseedvolume option with the mkhostcluster command |
|
8.3.1.3 |
Hosts |
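For HU02303 & HU02305, a sketch of creating a host cluster without the -ignoreseedvolume option, per the workaround above; the names are hypothetical placeholders:
  # Create the host cluster without -ignoreseedvolume to avoid the config node warmstart
  mkhostcluster -name hostcluster0 -seedfromhost host0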
HU02358 |
All |
Suggested
|
An issue in Remote Copy, that stalls a switch of direction, can cause I/O timeouts leading to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.3 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
IT32338 |
All |
Suggested
|
Testing LDAP Authentication fails if username & password are supplied
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1 or later |
Trigger |
Test LDAP Authentication using username & password |
Workaround |
Use remote authentication with LDAP |
|
8.3.1.3 |
LDAP |
HU02182 |
All |
HIPER
|
Cisco MDS switches with old firmware may refuse port logins leading to a loss of access. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems connected to Cisco MDS switches running NX-OS versions prior to v8.4(1) |
Trigger |
Upgrade to v8.3.1 |
Workaround |
Ensure any connected Cisco MDS switches are running NX-OS v8.4(1), or later, before upgrading to Spectrum Virtualize v8.3.1 or later |
|
8.3.1.2 |
Backend Storage, Hosts, System Update |
HU02186 (reverted in 8.3.1.4) |
FS5100, FS7200, FS9100, FS9200, V5100, V7000 |
HIPER
|
NVMe drive pulls or firmware upgrades may lead to offline pools with the possibility of a small loss of data integrity. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1, or later with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
RAID |
HU02212 |
All |
HIPER
|
Remote Copy secondary may have inconsistent data following a stop with -access due to a missing bitmap merge from FlashCopy to Remote Copy. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems running v8.2.1 or later using GMCV or HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Global Mirror With Change Volumes, HyperSwap |
HU02234 |
All |
HIPER
|
An issue in HyperSwap Read Passthrough can cause multiple node warmstarts with the possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3, or later, using HyperSwap |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
HyperSwap |
HU02237 |
All |
HIPER
|
Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using RAID 1 or RAID 10 arrays |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
RAID |
HU02238 |
All |
HIPER
|
Force-stopping a FlashCopy map, where the source volume is a Metro or Global Mirror target volume, may cause other FlashCopy maps to return invalid data if they are not 100% copied, in specific configurations. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
FlashCopy, Global Mirror, Metro Mirror |
HU01968 & HU02215 |
All |
Critical
|
An upgrade may fail due to corrupt hardened data in a node. This can affect an I/O group
(show details)
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
System Update |
HU02106 |
All |
Critical
|
Multiple node warmstarts, in quick succession, can cause the partner node to lease expire
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using IP Quorum or NVMe drives as quorum devices |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
IP Quorum, Quorum |
HU02135 |
All |
Critical
|
Removing multiple IQNs for an iSCSI host can result in a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later with iSCSI connected hosts |
Trigger |
Use a single rmhostport command to remove multiple IQNs from an iSCSI host |
Workaround |
Remove iSCSI host ports one IQN at a time |
|
8.3.1.2 |
iSCSI |
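For HU02135, a sketch of removing iSCSI host ports one IQN at a time, per the workaround above; the IQNs and host name are hypothetical placeholders:
  # Issue one rmhostport per IQN rather than listing multiple IQNs in a single command
  rmhostport -iscsiname iqn.1991-05.com.microsoft:host1 host1
  rmhostport -iscsiname iqn.1991-05.com.microsoft:host1a host1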
HU02154 |
All |
Critical
|
If a node is rebooted, when remote support is enabled, then all other nodes will warmstart
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1 or later using remote support |
Trigger |
With remote support enabled, reboot a node using the 'satask stopnode -reboot <node id>' command |
Workaround |
Temporarily disable remote support when rebooting a node using 'chsra -remotesupport disable' |
|
8.3.1.2 |
Support Remote Assist |
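For HU02154, a sketch of the workaround sequence; the node ID is a hypothetical placeholder, and re-enabling afterwards is assumed to use the same chsra option:
  # Temporarily disable remote support before rebooting the node
  chsra -remotesupport disable
  satask stopnode -reboot 2
  # Re-enable remote support once the node has rejoined the cluster
  chsra -remotesupport enable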
HU02202 |
All |
Critical
|
During a migratevdisk operation, if the MDisk tiers in the target pool do not match those in the source pool, then a Tier 2 recovery may occur
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1 |
Trigger |
None |
Workaround |
Prior to running migratevdisk ensure that the target pool has at least one matching tier compared to the source pool. |
|
8.3.1.2 |
EasyTier |
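For HU02202, a sketch of checking tiers before running migratevdisk, per the workaround above; the pool and volume names are hypothetical placeholders:
  # Compare the tiers present in the source and target pools before migrating
  lsmdisk -filtervalue mdisk_grp_name=Pool_src -delim :
  lsmdisk -filtervalue mdisk_grp_name=Pool_tgt -delim :
  # Proceed only if at least one tier matches between the two pools
  migratevdisk -mdiskgrp Pool_tgt -vdisk vol0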
HU02207 |
All |
Critical
|
If hosts send more concurrent iSCSI commands than a node can handle then it may enter a service state (error 578)
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1, or later, with iSCSI connected hosts |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
iSCSI |
HU02216 |
All |
Critical
|
When migrating or deleting a Change Volume of an RC relationship the system might be exposed to a Tier 2 (Automatic Cluster Restart) recovery. When deleting the Change Volumes, the T2 will recur, which will place the nodes into a 564 state. The migration of the Change Volume will trigger a T2 and recover. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.1, or later, using remote copy |
Trigger |
None |
Workaround |
The work around in both cases would be to:
- disassociate the Change Volume from the Remote Copy relationship;
- then either delete or migrate the Change Volume,
- and then re-associate or create a new Change Volume back to the Remote Copy relationship.
|
|
8.3.1.2 |
Global Mirror With Change Volumes |
HU02222 |
All |
Critical
|
Where the source volume of an incremental FlashCopy map is also a Metro or Global Mirror target volume that is using a change volume or is a Hyperswap volume, then there is a possibility that not all data will be copied to the FlashCopy target. For more details refer to this Flash
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using Remote Copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Global Mirror With Change Volumes |
HU02242 |
All |
Critical
|
An iSCSI IP address, with a gateway argument of 0.0.0.0, is not properly assigned to each Ethernet port and any previously set iSCSI IP address may be retained
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3 or later with iSCSI connected hosts |
Trigger |
Leaving the gateway address as 0.0.0.0 |
Workaround |
Assign a valid gateway address even if no L3 traffic is anticipated |
|
8.3.1.2 |
iSCSI |
IT32631 |
All |
Critical
|
Whilst upgrading the firmware for multiple drives, an issue in the firmware checking can initiate a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.0, or later, upgrading firmware for multiple drives |
Trigger |
None |
Workaround |
Update firmware one drive at a time |
|
8.3.1.2 |
Drives |
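For IT32631, a sketch of applying drive firmware to one drive at a time, per the workaround above; the package name and drive ID are hypothetical placeholders:
  # Apply firmware to a single drive rather than using -all
  applydrivesoftware -file IBM_DRIVE_FW_PACKAGE -type firmware -drive 7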
HU02128 |
All |
High Importance
|
Deduplication volume lookup can over-utilise resources, causing an adverse performance impact
(show details)
Symptom |
Performance |
Environment |
Systems running v8.3 or later using Deduplication |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Data Reduction Pools, Deduplication |
HU02168 |
V5000, V7000 |
High Importance
|
In the event of unexpected power loss a node may not save system data
(show details)
Symptom |
Loss of Redundancy |
Environment |
Storwize V5000 Gen2, V7000 Gen 2 and Gen 2+ systems |
Trigger |
Sudden power loss |
Workaround |
When shutting down always use the CLI, service GUI or management GUI. Do not use removal of electrical supply |
|
8.3.1.2 |
Reliability Availability Serviceability |
HU02203 |
FS9100, V5000, V7000 |
High Importance
|
When a node reboots, it is possible for the node to be unable to communicate with some of the NVMe drives in the enclosure
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems with NVMe drives |
Trigger |
Drive reseat or node reboot |
Workaround |
None |
|
8.3.1.2 |
Drives |
HU02204 |
All |
High Importance
|
After a Tier 2 recovery a node may fail to rejoin the cluster
(show details)
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Reliability Availability Serviceability |
HU02229 |
FS5000, V5000 |
High Importance
|
An issue in the BIOS firmware of some systems can cause a severe performance impact for iSCSI hosts
(show details)
Symptom |
Performance |
Environment |
Storwize V5000E and FlashSystem 5000 systems running v8.3.1.1 using iSCSI |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
iSCSI |
HU01931 |
SVC, V7000 |
Suggested
|
Where a high rate of CLI commands is received, it is possible for inter-node processing code to be delayed, which results in a small increase in receive queue time on the config node
(show details)
Symptom |
Performance |
Environment |
SVC and Storwize V7000 systems |
Trigger |
None |
Workaround |
If CPU utilisation is less than 40% then creating a compressed volume may reduce response times |
|
8.3.1.2 |
Performance |
HU02015 |
FS9100, V5000, V7000 |
Suggested
|
Some read-intensive SSDs are incorrectly reporting wear rate thresholds, generating unnecessary errors in the Event Log
(show details)
Symptom |
None |
Environment |
Systems using Toshiba M4 Read-Intensive SSDs |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Drives |
HU02091 |
V5000 |
Suggested
|
Upgrading to v8.2.1.8, or later, may result in a licensing error in the Event Log
(show details)
Symptom |
None |
Environment |
Lenovo Storage V Series systems |
Trigger |
Upgrade to v8.2.1.8 or later |
Workaround |
None |
|
8.3.1.2 |
Licensing |
HU02137 |
All |
Suggested
|
An issue with support for target resets in Nimble Storage controllers may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2, or later, with Nimble Storage backend controllers |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Backend Storage |
HU02175 |
All |
Suggested
|
A GUI issue can cause drive counts to be inconsistent and crash browsers
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Graphical User Interface |
HU02178 |
All |
Suggested
|
IP Quorum hosts may not be shown in lsquorum command output
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1.0, or later, that are using IP Quorum |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
IP Quorum |
HU02224 |
All |
Suggested
|
When the RAID component fails to free up memory quickly enough for I/O processing there can be a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
RAID |
HU02235 |
All |
Suggested
|
The SSH CLI prompt can contain the characters FB after the cluster name
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1.11 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Command Line Interface |
HU02341 |
All |
Suggested
|
Cloud Callhome can become disabled due to an internal issue. A related error may not be recorded in the event log
(show details)
Symptom |
None |
Environment |
Systems running v8.3.1 or earlier using Cloud Callhome |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
System Monitoring |
IT32440 |
All |
Suggested
|
Under heavy I/O workload the processing of deduplicated I/O may cause a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.3.1 or later using Deduplication |
Trigger |
None |
Workaround |
None |
|
8.3.1.2 |
Deduplication |
IT32519 |
All |
Suggested
|
Changing an LDAP user's password in the directory whilst that user is logged in to the GUI of a Spectrum Virtualize system may result in an account lockout in the directory, depending on the account lockout policy configured for the directory. Existing CLI logins via SSH are not affected
(show details)
Symptom |
None |
Environment |
Systems running v8.2, or later, with GUI logins via LDAP |
Trigger |
Changing an LDAP user's password in the directory whilst that user is logged-in to the GUI of a Spectrum Virtualize system |
Workaround |
LDAP users should ensure they are not logged-in to a Spectrum Virtualize GUI when changing their password |
|
8.3.1.2 |
LDAP |
HU01894 |
All |
HIPER
|
After node reboot, or warmstart, some volumes accessed by AIX, VIO or VMware hosts may experience stuck SCSI2 reservations on the NPIV failover ports of the partner node. This can cause a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using NPIV to present storage to AIX, VIO or VMware hosts |
Trigger |
None |
Workaround |
Clear reservation by either:
- Unmap & re-map volume;
- LUN reset from host.
|
|
8.3.1.0 |
Hosts |
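For HU01894, a sketch of the unmap/re-map method of clearing a stuck reservation, per the workaround above; the host, volume and SCSI ID are hypothetical placeholders:
  # Unmap the volume from the host, then map it back with the same SCSI ID
  rmvdiskhostmap -host aixhost0 vol0
  mkvdiskhostmap -host aixhost0 -scsi 0 vol0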
HU02075 |
All |
HIPER
|
A FlashCopy snapshot, sourced from the target of an Incremental FlashCopy map, can sometimes, temporarily, present incorrect data to the host
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using Incremental FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
FlashCopy |
HU02141 |
All |
HIPER
|
An issue in the max replication delay function may trigger a Tier 2 recovery, after posting multiple 1920 errors in the Event Log. For more details refer to this Flash
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
Set the max_replication_delay value to 0 (disabled) |
|
8.3.1.0 |
Global Mirror |
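For HU02141, a sketch of disabling the max replication delay function as the workaround above suggests, assuming the chsystem setting is used:
  # Set max_replication_delay to 0 to disable the function
  chsystem -maxreplicationdelay 0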
HU02205 |
All |
HIPER
|
Incremental FlashCopy targets can be corrupted when the FlashCopy source is a target of a remote copy relationship
(show details)
Symptom |
Data Integrity Loss |
Environment |
Systems using Incremental FlashCopy with remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
FlashCopy, Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01967 |
All |
Critical
|
When I/O, in remote copy relationships, experiences delays (1720 and/or 1920 errors are logged) an I/O group may warmstart
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using remote copy |
Trigger |
Performance issues affecting replication I/O |
Workaround |
Use a max replication delay value of 30 seconds or greater |
|
8.3.1.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01970 |
All |
Critical
|
When a GMCV relationship is stopped, with the -access option, and the secondary volume is immediately deleted with -force, then all nodes may repeatedly warmstart
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using GMCV |
Trigger |
Stop a GMCV relationship with -access and immediately delete the secondary volume |
Workaround |
Do not remove secondary volume, with -force, if the backward FC map from the secondary change volume to the secondary volume is still in progress |
|
8.3.1.0 |
Global Mirror With Change Volumes |
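For HU01970, a sketch of the safe ordering implied by the workaround above; the relationship, map and volume names are hypothetical placeholders:
  # Stop the GMCV relationship with access to the secondary
  stoprcrelationship -access rcrel0
  # Wait until the backward FlashCopy map from the secondary change volume reports 100% progress
  lsfcmap -delim : fcmap_backward
  # Only then delete the secondary volume
  rmvdisk secondary_vol0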
HU02017 |
All |
Critical
|
Unstable inter-site links may cause a system-wide lease expiry leaving all nodes in a service state - one with error 564 and others with error 551
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.2, or later, using HyperSwap with IP Quorum |
Trigger |
Unstable inter-site links |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02054 |
All |
Critical
|
The event log handler maintains a second list of events. On rare occasions, for log full events, these lists can get out of step, resulting in a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8 or later |
Trigger |
None |
Workaround |
If an error event log full message (1002) is presented clear the event log, rather than marking that event as fixed |
|
8.3.1.0 |
System Monitoring |
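For HU02054, a sketch of clearing the event log rather than marking the 1002 event as fixed, per the workaround above:
  # Clear the event log without prompting for confirmation
  clearerrlog -force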
HU02063 |
All |
Critical
|
HyperSwap clusters with only two surviving nodes may experience warmstarts on both of those nodes where rcbuffersize is set to 512MB
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using Hyperswap |
Trigger |
None |
Workaround |
Reduce rcbuffersize to a value less than 512 |
|
8.3.1.0 |
HyperSwap |
HU02065 |
All |
Critical
|
Mishandling of Data Reduction Pool allocation request rejections can lead to node warmstarts that can take an MDisk group offline
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02066 |
All |
Critical
|
If, during large (>8KB) reads from a host, a medium error is encountered, on backend storage, then there may be node warmstarts, with the possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02108 |
All |
Critical
|
Deleting a managed disk group, with -force, may cause multiple warmstarts with the possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02109 |
All |
Critical
|
Free extents may not be unmapped after volume deletion, or migration, resulting in out-of-space conditions on backend controllers
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.0 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Backend Storage, SCSI Unmap |
HU02115 |
All |
Critical
|
Attempting to upgrade all drive firmware with an inadequate drive package may lead to multiple node warmstarts, with the possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.0 or later |
Trigger |
Use the applydrivesoftware -all CLI command, or the GUI drive firmware upgrade function, to upgrade all drives when more than 32 drives have no related firmware in the drive package, then take a further configuration action, such as failing a drive. |
Workaround |
Use the utilitydriveupgrade tool to upgrade multiple drives |
|
8.3.1.0 |
Drives |
HU02138 |
All |
Critical
|
An issue in Data Reduction Pool garbage collection can cause I/O timeouts leading to an offline pool
(show details)
Symptom |
Offline Volumes |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02152 |
All |
Critical
|
Due to an issue in RAID there may be I/O timeouts, leading to node warmstarts, with the possibility of a loss of access to data
(show details)
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
RAID |
HU02197 |
All |
Critical
|
Bulk volume removals can adversely impact related FlashCopy mappings leading to a Tier 2 recovery
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.7.1, or later, using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
FlashCopy |
IT29867 |
All |
Critical
|
If a change volume for a remote copy relationship in a consistency group runs out of space whilst properties of the consistency group are being changed, a Tier 2 recovery may occur
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
Check that there is sufficient free space in the pool before changing remote copy parameters (see the example after this entry) |
|
8.3.1.0 |
Global Mirror With Change Volumes |
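A minimal sketch of the check above, assuming the standard CLI and a hypothetical pool name pool0:
  lsmdiskgrp pool0     # confirm the free_capacity value leaves headroom for change volume growth before changing remote copy parameters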
IT31113 |
All |
Critical
|
After a manual power off and on of a system, both nodes in an I/O group may repeatedly assert into a service state
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later |
Trigger |
Manual power off and on of a system whilst a RAID rebuild is in progress |
Workaround |
None |
|
8.3.1.0 |
RAID |
IT31300 |
All |
Critical
|
When a snap collection reads the status of PCI devices, a CPU can be stalled, leading to a cluster-wide lease expiry
(show details)
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Support Data Collection |
HU01890 |
All |
High Importance
|
FlashCopy mappings from the master volume to the primary change volume may become stalled when a T2 recovery occurs whilst the mappings are in a copying state
(show details)
Symptom |
None |
Environment |
Systems using Global Mirror with Change Volumes |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Global Mirror With Change Volumes |
HU01923 |
All |
High Importance
|
An issue in the way Global Mirror handles write sequence numbers >512 may cause multiple node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using GM |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Global Mirror |
HU01964 |
All |
High Importance
|
An issue in the cache component may limit I/O throughput
(show details)
Symptom |
Performance |
Environment |
Systems running v8.1.0 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Cache |
HU02037 |
All |
High Importance
|
A FlashCopy consistency group, with a mix of mappings in different states, cannot be stopped
(show details)
Symptom |
None |
Environment |
Systems using FlashCopy |
Trigger |
Some, but not all, of the mappings in a consistency group have their target volumes run out of space |
Workaround |
None |
|
8.3.1.0 |
FlashCopy |
HU02078 |
SVC |
High Importance
|
Heavily unbalanced workloads, in stretched-cluster configurations, can bias inter-node traffic through one port, adversely affecting performance
(show details)
Symptom |
Performance |
Environment |
SVC systems in a stretched-cluster configuration |
Trigger |
None |
Workaround |
Throttle or modify workloads if possible |
|
8.3.1.0 |
Performance |
HU02114 |
FS5000, FS7200, FS9100, FS9200, V7000 |
High Importance
|
Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state
(show details)
Symptom |
Performance |
Environment |
Multiple I/O group systems with Flash Core Modules |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Drives |
HU02132 |
All |
High Importance
|
Removing a thin-provisioned volume and then immediately creating one of the same size may cause node warmstarts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using thin-provisioned volumes |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Thin Provisioning |
HU02143 |
All |
High Importance
|
The performance profile for some enterprise tier drives may not correctly match the drives' capabilities, leading to that tier being overdriven
(show details)
Symptom |
Performance |
Environment |
Systems running v8.2 or later using EasyTier. Note: This issue does not affect DRAID 5 arrays with stripe width of 8 or 9, or DRAID6 arrays with stripe width of 10 or 12. |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
EasyTier |
HU02169 |
All |
High Importance
|
After a Tier 3 recovery, different nodes may report different UIDs for a subset of volumes
(show details)
Symptom |
Configuration |
Environment |
Systems that have just experienced a Tier 3 recovery |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Hosts |
HU02206 |
All |
High Importance
|
Garbage collection can operate at inappropriate times, generating inefficient backend workload, adversely affecting flash drive write endurance and overloading nearline drives
(show details)
Symptom |
Performance |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU01746 |
All |
Suggested
|
Adding a volume copy may deactivate any associated MDisk throttling
(show details)
Symptom |
Configuration |
Environment |
Systems running v7.8 or later using MDisk throttling |
Trigger |
Adding a volume copy to an MDisk that is throttled |
Workaround |
None |
|
8.3.1.0 |
Throttling |
HU01796 |
All |
Suggested
|
Battery Status LED may not illuminate
(show details)
Symptom |
None |
Environment |
SVC systems using SV1 model nodes |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
System Monitoring |
HU01891 |
All |
Suggested
|
An issue in DRAID grain process scheduling can lead to a duplicate entry condition that is cleared by a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.7.1, or later, using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Distributed RAID |
HU01943 |
All |
Suggested
|
Stopping a GMCV relationship with the -access flag may result in more processing than is required
(show details)
Symptom |
None |
Environment |
Systems using GMCV |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Global Mirror With Change Volumes |
HU01953 |
All |
Suggested
|
Following a Data Reduction Pool recovery, in some circumstances, it may not be possible to create new volumes via the GUI, due to an incorrect value being returned by the lsmdiskgrp command
(show details)
Symptom |
None |
Environment |
Systems running v8.1.0 or later using Data Reduction Pools |
Trigger |
None |
Workaround |
Create new volumes using the CLI (see the example after this entry) |
|
8.3.1.0 |
Graphical User Interface |
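A minimal example of the CLI workaround above, assuming a hypothetical pool named pool0 and volume named vol0; adjust the size and I/O group to local requirements:
  mkvdisk -mdiskgrp pool0 -iogrp 0 -size 100 -unit gb -name vol0     # create the volume directly on the CLI, bypassing the affected GUI check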
HU02021 |
All |
Suggested
|
Disabling garbage collection may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02023 |
All |
Suggested
|
An issue with the processing of FlashCopy map commands may result in a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Command Line Interface, FlashCopy |
HU02026 |
All |
Suggested
|
A timing window issue in the processing of FlashCopy status listing commands can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Command Line Interface, FlashCopy |
HU02040 |
V5000 |
Suggested
|
VPD contains the incorrect FRU part number for the SAS adapter
(show details)
Symptom |
None |
Environment |
Storwize V5030 systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02048 |
All |
Suggested
|
An issue in the handling of ATS commands from VMware hosts can cause a single node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8, or later, presenting volumes to VMware hosts |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Hosts |
HU02052 |
All |
Suggested
|
During an upgrade, an issue with buffer handling in Data Reduction Pools can lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02062 |
All |
Suggested
|
An issue with node index numbers for I/O groups, when using 32Gb HBAs, may result in host ports incorrectly being reported as offline
(show details)
Symptom |
Configuration |
Environment |
Systems running v8.3.0 or later using 32Gb HBAs |
Trigger |
Change I/O group layout by removing/adding nodes |
Workaround |
None |
|
8.3.1.0 |
Hosts |
HU02085 |
All |
Suggested
|
Freeze time of Global Mirror remote copy consistency groups may not be updated correctly in certain scenarios
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v7.8 or later using Global Mirror |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Global Mirror |
HU02099 |
All |
Suggested
|
Cloud callhome error 3201 messages may appear in the Event Log
(show details)
Symptom |
None |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
System Monitoring |
HU02102 |
All |
Suggested
|
Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Global Mirror with Change Volumes where some GMCV volumes are >20TB |
Trigger |
None |
Workaround |
Limit GMCV volume capacity to 20TB or less (see the example after this entry) |
|
8.3.1.0 |
Global Mirror With Change Volumes |
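A minimal sketch of the check above, assuming the standard CLI:
  lsvdisk -delim ,     # review the capacity column and keep volumes used as GMCV change volumes at or below 20TB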
HU02103 |
FS9100, V5000, V7000 |
Suggested
|
The system management firmware may incorrectly attempt to obtain an IP address using DHCP, making it accessible via Ethernet
(show details)
Symptom |
None |
Environment |
FlashSystem 9100, Storwize V7000 Gen 3 and Storwize V5100 systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
|
HU02111 |
All |
Suggested
|
An issue with how Data Reduction Pool handles data, at the sub-extent level, may result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02119 |
FS5000, FS7200, FS9100, FS9200, V7000 |
Suggested
|
NVMe drive replacement on 8.3.0.0 or 8.3.0.1 may result in the GUI and the lsdrive CLI command showing a ghost drive
(show details)
Symptom |
None |
Environment |
Systems with NVMe drives |
Trigger |
- Upgrade an NVMe drive's firmware on 8.3.0.0 or 8.3.0.1;
- Replace that drive while still on 8.3.0.0 or 8.3.0.1
|
Workaround |
None |
|
8.3.1.0 |
Drives |
HU02146 |
All |
Suggested
|
An issue in inter-node message handling may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02157 |
All |
Suggested
|
Issuing a mkdistributedarray command may result in a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Distributed RAID |
HU02173 |
All |
Suggested
|
During a pending fabric login, when an abort is received, it is possible for a related entry in the WWPN table to not be removed. The node will warmstart to clear this condition
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02183 |
All |
Suggested
|
An issue in the way inter-node communication is handled can lead to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02190 |
All |
Suggested
|
Error 1046 does not trigger a Call Home even though it is a hardware fault
(show details)
Symptom |
None |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
System Monitoring |
HU02214 |
All |
Suggested
|
Under a certain I/O pattern it is possible for metadata management in Data Reduction Pools to become inconsistent leading to a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Data Reduction Pools |
HU02247 |
All |
Suggested
|
Unnecessary Ethernet MAC flapping messages reported in switch logs
(show details)
Symptom |
None |
Environment |
FlashSystem 9100, Storwize V7000 Gen 3 and V5100 systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Reliability Availability Serviceability |
HU02285 |
All |
Suggested
|
Single node warmstart due to cache resource allocation issue
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Cache |
IT21896 |
All |
Suggested
|
Where encryption keys have been lost, it will not be possible to remove an empty MDisk group
(show details)
Symptom |
None |
Environment |
Systems using encryption |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
Encryption |
IT30306 |
All |
Suggested
|
A timing issue in callhome function initialisation may cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.1.0 |
System Monitoring |