HU02143 |
All |
High Importance
|
The performance profile for some enterprise-tier drives may not correctly match the drives' capabilities, leading to that tier being overdriven
Symptom |
Performance |
Environment |
Systems running v8.2 or later using EasyTier. Note: This issue does not affect DRAID 5 arrays with a stripe width of 8 or 9, or DRAID 6 arrays with a stripe width of 10 or 12. |
Trigger |
None |
Workaround |
None |
|
8.3.0.3 |
EasyTier |
HU02104 |
All |
HIPER
|
An issue in the RAID component, in the presence of a very high I/O workload and the exhaustion of cache resources, can result in a deadlock condition that prevents further I/O processing. The system detects this issue and takes the storage pool offline for a six-minute period to clear the problem. The pool is then brought online automatically, and normal operation resumes. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
Consider using a pool throttle to limit I/O throughput (see the sketch after this entry) |
|
8.3.0.2 |
RAID |
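As a sketch only, assuming a pool throttle is created with the mkthrottle command (the pool name and limit values below are hypothetical):

    # Limit the pool named Pool0 to 50,000 IOPS and 2,000 MBps
    mkthrottle -type mdiskgrp -mdiskgrp Pool0 -iops 50000 -bandwidth 2000
    # Review or remove the throttle later with lsthrottle / rmthrottle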
HU02237 |
All |
HIPER
|
Under a rare and complicated set of conditions, a RAID 1 or RAID 10 array may drop a write, causing undetected data corruption. For more details refer to this Flash
Symptom |
Data Integrity Loss |
Environment |
Systems using RAID 1 or RAID 10 arrays |
Trigger |
None |
Workaround |
None |
|
8.3.0.2 |
RAID |
HU02238 |
All |
HIPER
|
In specific configurations, force-stopping a FlashCopy map whose source volume is a Metro or Global Mirror target volume may cause other FlashCopy maps that are not 100% copied to return invalid data. For more details refer to this Flash
Symptom |
Data Integrity Loss |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.0.2 |
FlashCopy, Global Mirror, Metro Mirror |
HU02109 |
All |
Critical
|
Free extents may not be unmapped after volume deletion or migration, resulting in out-of-space conditions on backend controllers
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.0 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.2 |
Backend Storage, SCSI Unmap |
HU02115 |
All |
Critical
|
Attempting to upgrade all drive firmware with an inadequate drive package may lead to multiple node warmstarts, with the possibility of a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.3.0 or later |
Trigger |
Use the applydrivesoftware -all CLI command, or the GUI drive firmware upgrade function, to upgrade all drives when there are more than 32 drives for which the drive package has no related firmware, then take a further configuration action, such as failing a drive. |
Workaround |
Use the utilitydriveupgrade tool to upgrade multiple drives (see the sketch after this entry) |
|
8.3.0.2 |
Drives |
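As an illustrative sketch only (the stated workaround is the utilitydriveupgrade tool; the commands below assume firmware can instead be applied to one drive at a time, and the drive ID and package name are hypothetical):

    # lsdrive <drive_id> shows the firmware_level field for an individual drive
    lsdrive 5
    # Apply firmware from the supplied package to a single drive rather than using -all
    applydrivesoftware -file <drive_package_name> -type firmware -drive 5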
HU02114 |
FS5000, FS9100, V7000 |
High Importance
|
Upgrading FCM firmware on multiple I/O group systems can cause a drive to become stuck at 0% sync with the corresponding array in a 'syncing' state
Symptom |
Performance |
Environment |
Multiple I/O group systems with Flash Core Modules |
Trigger |
None |
Workaround |
None |
|
8.3.0.2 |
Drives |
HU02062 |
All |
Suggested
|
An issue with node index numbers for I/O groups, when using 32Gb HBAs, may result in host ports incorrectly being reported as offline
Symptom |
Configuration |
Environment |
Systems running v8.3.0 or later using 32Gb HBAs |
Trigger |
Change I/O group layout by removing/adding nodes |
Workaround |
None |
|
8.3.0.2 |
Hosts |
HU02102 |
All |
Suggested
|
Excessive processing time required for FlashCopy bitmap operations, associated with large (> 20TB) Global Mirror change volumes, may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Global Mirror with Change Volumes where some GMCV volumes are >20TB |
Trigger |
None |
Workaround |
Limit GMCV volume capacity to 20TB or less |
|
8.3.0.2 |
Global Mirror With Change Volumes |
HU01998 |
All |
HIPER
|
All SCSI command types can set volumes as busy, resulting in I/O timeouts and multiple node warmstarts, with the possibility of an offline I/O group. For more details refer to this Flash
Symptom |
Multiple Node Warmstarts |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Hosts |
HU02014 |
SVC |
HIPER
|
After a loss of power, a node with a dead CMOS battery will fail to restart correctly. It is possible for both nodes in an I/O group to experience this issue
Symptom |
Loss of Access to Data |
Environment |
SVC systems using SV1 model nodes |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Reliability Availability Serviceability |
HU02064 |
SVC, V7000 |
HIPER
|
An issue in the firmware for compression accelerator cards can cause offline compressed volumes. For more details refer to this Flash
Symptom |
Offline Volumes |
Environment |
Systems running v8.2.1.x, or later, using hardware compression |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Compression |
HU02083 |
All |
HIPER
|
During DRAID rebuilds, an issue in the handling of memory buffers can lead to multiple node warmstarts and a loss of access to data. For more details refer to this Flash
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1.6 or v8.3.0.0 using DRAID. Probability is highest for systems with an exact multiple of 48 drives and a stripe width of 16 |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Distributed RAID |
HU01924 |
All |
Critical
|
Migrating extents to an MDisk that is not a member of an MDisk group may result in a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Migrate extents to an MDisk that is not a member of an MDisk group |
Workaround |
Only specify a target MDisk that is part of the same MDisk group as the volume copy whose extents are being migrated (see the sketch after this entry) |
|
8.3.0.1 |
Thin Provisioning |
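A minimal sketch, assuming migrateexts syntax as shown (the MDisk IDs, extent count and volume name are hypothetical), that keeps the target MDisk inside the same MDisk group as the volume copy:

    # Move 16 extents of vdisk0 from MDisk 2 to MDisk 3, both members of the same MDisk group
    migrateexts -source 2 -target 3 -exts 16 -threads 2 vdisk0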
HU02016 |
SVC |
Critical
|
A memory leak in the component that handles thin-provisioned MDisks can lead to an adverse performance impact with the possibility of offline MDisks. For more details refer to this Flash
Symptom |
Offline Volumes |
Environment |
SVC systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Backend Storage |
HU02036 |
All |
Critical
|
Commands that alter pool-level extent reservations (e.g. migratevdisk or rmmdisk) can conflict with an ongoing EasyTier migration, resulting in a Tier 2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2 or later with EasyTier enabled |
Trigger |
None |
Workaround |
Disable EasyTier on the source pool; wait 10 minutes so that any ongoing EasyTier requests complete; issue the migratevdisk/rmmdisk command; then re-enable EasyTier on the source pool (see the sketch after this entry). |
|
8.3.0.1 |
EasyTier |
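A minimal sketch of the workaround sequence, assuming chmdiskgrp and migratevdisk syntax as shown (the pool and volume names are hypothetical):

    chmdiskgrp -easytier off Pool0               # disable EasyTier on the source pool
    # wait 10 minutes for any ongoing EasyTier requests to complete
    migratevdisk -vdisk vdisk0 -mdiskgrp Pool1   # issue the migration command
    chmdiskgrp -easytier auto Pool0              # re-enable EasyTier on the source pool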
HU02043 |
All |
Critical
|
Collecting a snap can cause nodes to run out of boot drive space and go offline with node error 565
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Taking many snap data collections on the same config node |
Workaround |
Manually delete unneeded snaps from the boot drive (see the sketch after this entry) |
|
8.3.0.1 |
Support Data Collection |
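A minimal sketch, assuming lsdumps and cleardumps behave as described (the node ID is hypothetical), for clearing old snap and dump files from a node's boot drive:

    lsdumps -prefix /dumps 1          # list files in the /dumps directory on node 1
    cleardumps -prefix /dumps 1       # delete unneeded dump and snap files from node 1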
HU02044 |
All |
Critical
|
Where one of multiple DRAID arrays is performing a rebuild, a RAID deadlock condition can occur, resulting in multiple node warmstarts and a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems using Distributed RAID with Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Data Reduction Pools, Distributed RAID |
HU02045 |
All |
Critical
|
When a node is removed from the cluster using the CLI, it may still be shown as online in the GUI. If an attempt is made to shut down this node from the GUI while it appears to be online, then the whole cluster will shut down
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Remove a node from the cluster using CLI. With the node showing as online, use the GUI to shut it down |
Workaround |
Manually refresh the GUI browser page after removing a node via the CLI |
|
8.3.0.1 |
Graphical User Interface |
HU02050 |
FS9100, V5000, V7000 |
Critical
|
Compression hardware can have an issue processing certain types of data, resulting in node reboots and the compression hardware being marked as faulty even though it is serviceable
Symptom |
Loss of Access to Data |
Environment |
FlashSystem 9100, Storwize V5100 and V7000 Gen 3 systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Compression |
HU02077 |
All |
Critical
|
A node upgrading to v8.2.1 or later will lose access to controllers directly attached to its FC ports, and the upgrade will stall
Symptom |
Loss of Access to Data |
Environment |
Systems that are FC direct-attached to backend storage controllers |
Trigger |
System upgrade |
Workaround |
None |
|
8.3.0.1 |
Backend Storage |
HU02086 |
All |
Critical
|
An issue in IP Quorum may cause a Tier 2 recovery during the initial connection to a candidate device
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1.0 or later that are using IP Quorum |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
IP Quorum |
HU02089 |
All |
Critical
|
Due to changes in quorum management, an upgrade to v8.2.x or later may result in multiple warmstarts, with the possibility of a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or earlier with normal configurations of more than 6 nodes, or multi-site configurations of more than 4 nodes, and no external shared MDisks |
Trigger |
Upgrading to v8.2.x or later |
Workaround |
None |
|
8.3.0.1 |
System Update |
HU02097 |
All |
Critical
|
Workloads, with data that is highly suited to deduplication, can provoke high CPU utilisation, as multiple destinations try to dedupe to one source. This adversely impacts performance with the possibility of offline MDisk groups
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Data Reduction Pools |
IT30595 |
All |
Critical
|
A resource shortage in the RAID component can cause MDisks to be taken offline
Symptom |
Offline Volumes |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
RAID |
HU02006 |
All |
High Importance
|
Garbage collection behaviour can become overzealous, adversely affecting performance
Symptom |
Performance |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Data Reduction Pools |
HU02053 |
FS9100, V5100, V7000 |
High Importance
|
An issue with canister BIOS update can stall system upgrades
Symptom |
Loss of Redundancy |
Environment |
FS9100, V7000 Gen 3 and V5100 systems |
Trigger |
Upgrade to v8.3.0 |
Workaround |
None |
|
8.3.0.1 |
System Update |
HU02055 |
All |
High Importance
|
Creating a FlashCopy snapshot in the GUI does not set the same preferred node for both the source and target volumes. This may adversely impact performance
Symptom |
Performance |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
Use the movevdisk command to manually set the same preferred node for both the source and target volumes in the FC map (see the sketch after this entry) |
|
8.3.0.1 |
FlashCopy |
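A minimal sketch, assuming movevdisk can change the preferred node as shown (the volume and node names are hypothetical), for aligning the preferred node of the FlashCopy source and target:

    movevdisk -node node1 fc_source_vol   # set the preferred node for the source volume
    movevdisk -node node1 fc_target_vol   # set the same preferred node for the target volume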
HU02072 |
All |
High Importance
|
An issue in the handling of email transmission can write a large file to the node boot drive. If this causes the boot drive to become full, the node will go offline with error 565
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
System Monitoring |
HU02080 |
All |
High Importance
|
When a Data Reduction Pool is running low on free space, the credit allocation algorithm for garbage collection can be exposed to a race condition, adversely affecting performance
Symptom |
Performance |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Data Reduction Pools |
HU02130 |
V5000, V7000 |
High Importance
|
An issue with the RAID scrub process can overload Nearline SAS drives causing premature failures
Symptom |
None |
Environment |
Storwize systems running v8.2.1, or later, using Nearline SAS drives |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Drives |
IT29975 |
All |
High Importance
|
During Ethernet port configuration, netmask validation will only accept a fourth octet of zero. Non-zero values will cause the interface to remain inactive
Symptom |
Configuration |
Environment |
Systems running v8.3.0 or later |
Trigger |
Set the fourth octet of the netmask to a non-zero value |
Workaround |
Set the fourth octet of the netmask to zero (see the sketch after this entry) |
|
8.3.0.1 |
iSCSI |
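A minimal sketch, assuming cfgportip syntax as shown (the addresses, node and port IDs are hypothetical), that keeps the fourth octet of the netmask at zero:

    # Configure Ethernet port 1 on node 1 with a /24 netmask (fourth octet zero)
    cfgportip -node 1 -ip 192.168.10.20 -mask 255.255.255.0 -gw 192.168.10.1 1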
HU02067 |
All |
Suggested
|
If multiple recipients are specified for callhome emails, then no callhome emails will be sent
Symptom |
None |
Environment |
Systems running v8.2.1.5 or later |
Trigger |
Specify multiple recipients for callhome email messages |
Workaround |
None |
|
8.3.0.1 |
System Monitoring |
HU02073 |
All |
Suggested
|
Detection of an invalid list entry in the parity handling process can lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
RAID |
HU02079 |
All |
Suggested
|
Starting a FlashCopy mapping within a Data Reduction Pool a large number of times may cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools with FlashCopy |
Trigger |
Start a FlashCopy mapping, in a Data Reduction Pool, approximately 65,000 times |
Workaround |
None |
|
8.3.0.1 |
Data Reduction Pools, FlashCopy |
HU02084 |
FS9100, V5000, V7000 |
Suggested
|
If a node goes offline, after the firmware of multiple NVMe drives has been upgraded, then incorrect 3090/90021 errors may be seen in the Event Log
Symptom |
None |
Environment |
Systems running v8.3.0.0 with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Drives |
HU02087 |
All |
Suggested
|
LDAP users with SSH keys cannot create volumes after upgrading to v8.3.0.0
Symptom |
None |
Environment |
Systems with LDAP user accounts that have SSH keys |
Trigger |
Log in as an LDAP user with SSH keys |
Workaround |
Use an LDAP user without an SSH key, or a locally-authenticated user, to create volumes |
|
8.3.0.1 |
LDAP |
HU02126 |
SVC, V5000, V7000 |
Suggested
|
There is a low probability that excessive SSH connections may trigger a single node warmstart on the configuration node
Symptom |
Single Node Warmstart |
Environment |
Systems with Gen 1 and 2 hardware |
Trigger |
More than one SSH connection attempt per second might occasionally cause the config node to warmstart |
Workaround |
Reduce frequency of SSH connections |
|
8.3.0.1 |
Command Line Interface |
HU02129 |
All |
Suggested
|
GUI drive filtering fails with the error 'An error occurred loading table data'
Symptom |
None |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Graphical User Interface |
HU02131 |
All |
Suggested
|
When changing the DRAID configuration for an array with an active workload, a deadlock condition can occur, resulting in a single node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1, or later, using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.0.1 |
Distributed RAID |
IT30448 |
All |
Suggested
|
If an IP Quorum app is killed during the commit phase of a code upgrade, then that offline IP Quorum device cannot be removed post-upgrade
Symptom |
Configuration |
Environment |
Systems running v8.1.3, or earlier, using IP Quorum |
Trigger |
Upgrade to v8.2.0 or later, kill an IP Quorum app during the commit phase |
Workaround |
None |
|
8.3.0.1 |
IP Quorum |
HU02007 |
All |
HIPER
|
During volume migration, an issue in the handling of old-to-new extent transfers can lead to cluster-wide warmstarts
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Storage Virtualisation |
HU01888 & HU01997 |
All |
Critical
|
An issue with restore mappings, in the FlashCopy component, can cause an I/O group to warmstart
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
FlashCopy |
HU01909 |
All |
Critical
|
Upgrading a system with Read-Intensive drives to v8.2 or later may result in node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using DRAID with Read-Intensive drives |
Trigger |
Upgrade to v8.2 or later |
Workaround |
None |
|
8.3.0.0 |
Distributed RAID, Drives, System Update |
HU01921 |
All |
Critical
|
Where FlashCopy mapping targets are also in remote copy relationships, there may be node warmstarts with a temporary loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems using FlashCopy with remote copy |
Trigger |
None |
Workaround |
If one reverse FlashCopy mapping has been stopped and another FlashCopy mapping, to the same target, is to be started, then delete the first reverse FlashCopy mapping before starting the second |
|
8.3.0.0 |
FlashCopy, Global Mirror, Metro Mirror |
HU01933 |
All |
Critical
|
Under rare circumstances the Data Reduction Pool deduplication rehoming process can become truncated. Subsequent detection of inconsistent metadata can lead to offline Data Reduction Pools
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.1.3 or later using Deduplication |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools, Deduplication |
HU01985 |
All |
Critical
|
As a consequence of a Data Reduction Pool recovery, bad metadata may be created. When the region of disk associated with the bad metadata is accessed, there may be I/O group warmstarts
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU01989 |
All |
Critical
|
For large drives, bitmap scanning during an array rebuild can time out, resulting in multiple node warmstarts and possibly leading to offline I/O groups
Symptom |
Loss of Access to Data |
Environment |
Systems using DRAID with drives of 8TB or more |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Distributed RAID |
HU01990 |
All |
Critical
|
Bad return codes from the partnership compression component can cause multiple node warmstarts, taking nodes offline
Symptom |
Loss of Access to Data |
Environment |
Systems running v7.8 or later using remote copy with compression |
Trigger |
None |
Workaround |
Disable partnership compression |
|
8.3.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02005 |
All |
Critical
|
An issue in the background copy process prevents grains above a 128TB limit from being cleaned properly. As a consequence, there may be multiple node warmstarts with the potential for a loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems using remote copy |
Trigger |
Volumes greater than 128TB being added to remote copy relationships |
Workaround |
The maximum size of volumes in remote copy relationships should be limited to 128TB |
|
8.3.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU02009 |
All |
Critical
|
Systems using Data Reduction Pools with the maximum possible extent size of 8GB, under a very specific I/O workload, may experience an issue due to garbage collection. This can cause repeated node warmstarts and loss of access to data
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU02027 |
All |
Critical
|
Fabric congestion can cause internal resource constraints, in 16Gb HBAs, leading to lease expiries
Symptom |
Loss of Access to Data |
Environment |
Systems using 16Gb HBAs |
Trigger |
Fabric congestion affecting local node-node traffic |
Workaround |
Prevent fabric congestion that might affect local node-node connectivity |
|
8.3.0.0 |
Reliability Availability Serviceability |
HU02121 |
All |
Critical
|
When the system changes from copyback to rebuild, a failure to clear related metadata can cause multiple node warmstarts, with the possibility of a loss of access
Symptom |
Loss of Access to Data |
Environment |
Systems running v8.2.1.x, or later, that are using DRAID |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Distributed RAID |
HU02275 |
All |
Critical
|
Performing any sort of hardware maintenance during an upgrade may cause a cluster to destroy itself, with nodes entering candidate or service state 550
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Perform hardware maintenance during an upgrade |
Workaround |
Avoid hardware maintenance during code upgrades |
|
8.3.0.0 |
System Update |
IT25367 |
All |
Critical
|
A T2 recovery may occur when an attempt is made to upgrade or downgrade the firmware for an unsupported drive type
Symptom |
Loss of Access to Data |
Environment |
All systems |
Trigger |
Attempt to upgrade/downgrade the firmware for an unsupported drive type |
Workaround |
None |
|
8.3.0.0 |
Drives |
IT26257 |
All |
Critical
|
Starting a relationship when the remote volume is offline may result in a T2 recovery
Symptom |
Loss of Access to Data |
Environment |
Systems using Hyperswap |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
HyperSwap |
HU01836 |
All |
High Importance
|
When an auxiliary volume is moved, an issue with pausing the master volume can lead to node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Hyperswap |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
HyperSwap |
HU01904 |
All |
High Importance
|
A timing issue can cause a remote copy relationship to become stuck in a pausing state, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01919 |
FS9100, V7000 |
High Importance
|
During an upgrade some components may take too long to initialise causing node warmstarts
Symptom |
Multiple Node Warmstarts |
Environment |
FS9100 and V7000 Gen 3 systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
System Update |
HU01942 |
FS9100, V5000, V7000 |
High Importance
|
NVMe drive ports can go offline, for a very short time, when an upgrade of that drive's firmware commences
Symptom |
None |
Environment |
FlashSystem 9100, Storwize V7000 Gen 3 and Storwize V5100 systems |
Trigger |
Start a NVMe drive firmware upgrade |
Workaround |
None |
|
8.3.0.0 |
Drives |
HU01969 |
All |
High Importance
|
After an rmrcrelationship command is run, the connection to the remote cluster may be lost
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
Two workarounds are: run the rmrcrelationship command on the system that is currently the primary for that volume, or stop the relationship before deleting it (see the sketch after this entry). |
|
8.3.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
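A minimal sketch of the second workaround, assuming stoprcrelationship and rmrcrelationship syntax as shown (the relationship name is hypothetical):

    stoprcrelationship rc_rel0        # stop the relationship first
    rmrcrelationship rc_rel0          # then delete it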
HU02011 |
All |
High Importance
|
When a node warmstart occurs on a system using Data Reduction Pools, there is a small possibility that the node will not automatically return online. If the partner node is also offline, this can cause temporary loss of access to data
Symptom |
Loss of Redundancy |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU02012 |
All |
High Importance
|
Under certain I/O workloads the garbage collection process can adversely impact volume write response times
Symptom |
Performance |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU02051 |
All |
High Importance
|
If unexpected actions are taken during node replacement, node warmstarts and temporary loss of access to data may occur. This issue can only occur if a node is replaced, and then the old node is re-added to the cluster
Symptom |
Loss of Access to Data |
Environment |
Systems with two nodes |
Trigger |
Replacing a node, and then re-adding the old node to the system |
Workaround |
Remove cluster data from the old node before adding the new node |
|
8.3.0.0 |
Reliability Availability Serviceability |
HU02123 |
All |
High Importance
|
For direct-attached hosts, a race condition between the FLOGI and Link UP processes can result in FC ports not coming online
Symptom |
Loss of Redundancy |
Environment |
Systems with direct-attached hosts |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Hosts |
HU02133 |
FS9100, V5000, V7000 |
High Importance
|
NVMe drives may become degraded after a drive reseat or node reboot
Symptom |
None |
Environment |
Systems with NVMe drives |
Trigger |
Drive reseat or node reboot |
Workaround |
None |
|
8.3.0.0 |
Drives |
HU02149 |
SVC |
High Importance
|
When an Enhanced Stretch Cluster is using NPIV, in transitional mode, the path priority is not being reported correctly to some hosts
Symptom |
Performance |
Environment |
Systems in an Enhanced Stretch Cluster topology that are using NPIV in its transitional mode |
Trigger |
None |
Workaround |
Manually set the preferred path if possible within the host's MPIO settings. Run NPIV in enabled or disabled mode |
|
8.3.0.0 |
Hosts |
HU02288 |
All |
High Importance
|
A node might fail to come online after a reboot or warmstart, such as during an upgrade
Symptom |
Loss of Redundancy |
Environment |
Systems in a Stretched or HyperSwap topology |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Reliability Availability Serviceability |
HU02318 |
All |
High Importance
|
An issue in the handling of iSCSI host I/O may cause a node to kernel panic and go into service with error 578
Symptom |
Loss of Redundancy |
Environment |
Systems running v8.2 or later with iSCSI connected hosts |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
iSCSI |
HU01777 |
All |
Suggested
|
Where not all I/O groups have NPIV enabled, hosts may be shown as Degraded with an incorrect count of node logins
Symptom |
Configuration |
Environment |
Systems using NPIV |
Trigger |
None |
Workaround |
Enable NPIV on all I/O groups (see the sketch after this entry) |
|
8.3.0.0 |
Command Line Interface |
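A minimal sketch, assuming NPIV is controlled per I/O group via the fctargetportmode setting of chiogrp (the I/O group name is hypothetical):

    lsiogrp io_grp1                                # the detailed view includes the fctargetportmode (NPIV) setting
    chiogrp -fctargetportmode transitional io_grp1 # NPIV is typically moved through transitional first
    chiogrp -fctargetportmode enabled io_grp1      # then fully enabled on that I/O group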
HU01843 |
All |
Suggested
|
A node hardware issue can cause a CLI command to timeout resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Command Line Interface |
HU01868 |
All |
Suggested
|
After deleting an encrypted external MDisk, it is possible for the encrypted status of volumes to change to 'no', even though all remaining MDisks are encrypted
Symptom |
None |
Environment |
Systems using encryption |
Trigger |
Delete an encrypted external MDisk |
Workaround |
Ensure that all MDisks in the MDisk group are encrypted - this will ensure that data is encrypted |
|
8.3.0.0 |
Encryption |
HU01872 |
All |
Suggested
|
An issue with cache partition fairness can favour small IOs over large ones, leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
All systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Cache |
HU01880 |
All |
Suggested
|
When a write to a secondary volume becomes stalled, a node at the primary site may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using remote copy |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Global Mirror, Global Mirror With Change Volumes, Metro Mirror |
HU01892 |
All |
Suggested
|
LUNs of greater than 2TB, presented by HP XP7 storage controllers, are not supported
Symptom |
Configuration |
Environment |
Systems with HP XP7 backend controllers |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Backend Storage |
HU01917 |
All |
Suggested
|
Chrome browser support requires a self-signed certificate to include a subject alternative name
Symptom |
None |
Environment |
Systems accessed using the Chrome browser |
Trigger |
None |
Workaround |
Accept invalid certificate |
|
8.3.0.0 |
Graphical User Interface |
HU01936 |
All |
Suggested
|
When shrinking a volume that has host mappings, there may be recurring node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1 or later |
Trigger |
Shrink a volume while it is mapped to a host |
Workaround |
Remove all host mappings for a volume before performing shrinkvdisksize (see the sketch after this entry) |
|
8.3.0.0 |
Cache |
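A minimal sketch of the workaround, assuming rmvdiskhostmap, shrinkvdisksize and mkvdiskhostmap syntax as shown (the host and volume names and the size are hypothetical):

    rmvdiskhostmap -host host0 vdisk0            # remove the host mapping first
    shrinkvdisksize -size 100 -unit gb vdisk0    # then shrink the volume by 100 GB
    mkvdiskhostmap -host host0 vdisk0            # re-map the volume afterwards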
HU01955 |
All |
Suggested
|
The presence of unsupported configurations in a Spectrum Virtualize environment can cause a mishandling of unsupported commands, leading to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1 or later |
Trigger |
Unsupported configurations (e.g. replication partners running EOS code levels) |
Workaround |
Correct unsupported configurations in the Spectrum Virtualize environment |
|
8.3.0.0 |
Reliability Availability Serviceability |
HU01956 |
All |
Suggested
|
The output from an lsdrive command shows the write endurance usage for new read-intensive SSDs as blank rather than 0%
Symptom |
None |
Environment |
Systems using read-intensive SSDs |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Drives |
HU01963 |
All |
Suggested
|
A deadlock condition in the deduplication component can lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.1.3 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Deduplication |
HU01974 |
All |
Suggested
|
With all Remote Support Assistant connections closed, the GUI may show that a connection is still in progress
Symptom |
None |
Environment |
Systems running v8.1 or later using Remote Support Assistance |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
System Monitoring |
HU01978 |
All |
Suggested
|
Unable to create HyperSwap volumes. The mkvolume command fails with a CMMVC7050E error
Symptom |
None |
Environment |
Systems running v8.2 or later using HyperSwap |
Trigger |
None |
Workaround |
Use the early (pre mkvolume) procedure for creating HyperSwap volumes |
|
8.3.0.0 |
HyperSwap |
HU01979 |
All |
Suggested
|
The figure for used_virtualization in the output of an lslicense command may be unexpectedly large
Symptom |
None |
Environment |
Systems running v8.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Command Line Interface |
HU01982 |
All |
Suggested
|
In an environment with multiple IP Quorum servers, if the quorum component encounters a duplicate UID, then a node may warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems running v8.2.1 or later using IP quorum |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
IP Quorum |
HU01983 |
All |
Suggested
|
Improve debug data capture to assist in determining the reason for a Data Reduction Pool being taken offline
Symptom |
None |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU01986 |
All |
Suggested
|
An accounting issue in the FlashCopy component may cause node warmstarts
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
FlashCopy |
HU01991 |
All |
Suggested
|
An issue in the handling of extent allocation, in the Data Reduction Pool component, can cause a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU02020 |
FS9100, V5000, V7000 |
Suggested
|
An internal hardware bus running at the incorrect speed may give rise to spurious DIMM over-temperature errors
Symptom |
None |
Environment |
FlashSystem 9100, Storwize V7000 Gen 3 and Storwize V5100 systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Reliability Availability Serviceability |
HU02029 |
All |
Suggested
|
An issue with the SSMTP process may result in failed callhome, inventory reporting and user notifications. A testemail command will fail with a CMMVC9051E error
Symptom |
None |
Environment |
Systems running v8.2.1 or later |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
System Monitoring |
HU02039 |
All |
Suggested
|
An issue in the management steps of Data Reduction Pool recovery may lead to a node warmstart
Symptom |
Single Node Warmstart |
Environment |
Systems using Data Reduction Pools |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Data Reduction Pools |
HU02059 |
All |
Suggested
|
The Event Log may display quorum errors even though quorum devices are available
Symptom |
None |
Environment |
Systems running v8.2.1 |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
Quorum |
HU02090 |
FS9100, V5000, V7000 |
Suggested
|
When a failing drive experiences an error, RAID may mishandle it, resulting in a node warmstart
Symptom |
Single Node Warmstart |
Environment |
FlashSystem 9100 and Storwize systems |
Trigger |
None |
Workaround |
None |
|
8.3.0.0 |
RAID |
HU02134 |
All |
Suggested
|
A timing issue in handling chquorum CLI commands can result in fewer than three quorum devices being available
Symptom |
Configuration |
Environment |
Systems running v8.2 or later |
Trigger |
None |
Workaround |
Issue a chquorum CLI command to re-select the missing quorum device (see the sketch after this entry) |
|
8.3.0.0 |
Quorum |
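A minimal sketch, assuming lsquorum and chquorum syntax as shown (the quorum index and MDisk ID are hypothetical), for re-selecting a missing quorum device:

    lsquorum                          # identify which quorum index is missing
    chquorum -mdisk 2 1               # assign MDisk 2 as quorum device index 1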