| SVAPAR-172745 |
All |
HIPER
|
Systems using policy based high availability (PBHA) on 9.1.0.0 may experience a detected data loss after performing configuration changes and multiple failovers of the active management system.
| Symptom |
Data Integrity Loss |
| Environment |
Systems running 9.1.0.0 with PBHA configured |
| Trigger |
Failing over the AMS to the other system |
| Workaround |
None |
|
9.1.1.0 |
Policy-based High availability |
| SVAPAR-181640 |
All |
HIPER
|
After expanding a volume that is being asynchronously replicated, data written to the recently expanded region of the disk may not get replicated to the remote site if the replication is running in low bandwidth mode. This can lead to an undetected data loss at the DR site.
| Symptom |
Data Integrity Loss |
| Environment |
Systems using asynchronous PBR |
| Trigger |
Expansion of a volume that is performing asynchronous replication |
| Workaround |
None |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-160911 |
All |
Critical
|
Following an FCM array expansion, the array will temporarily report more physical capacity than expected. If all of the expanded array capacity, plus some of this extra capacity, is occupied by written data, the array will go offline out-of-space.
| Symptom |
Loss of Access to Data |
| Environment |
Systems where an FCM array has recently been expanded |
| Trigger |
Expansion of an FCM array, combined with a very high rate of data written to volumes. |
| Workaround |
Following an array expansion, take extra care not to use all of the expanded array capacity. |
|
9.1.1.0 |
RAID |
| SVAPAR-168078 |
All |
Critical
|
IO related to Data Reduction Pools can stall during array rebuild operations, resulting in a single node assert. It is possible for this to occur repeatedly, resulting in a loss of access to data.
| Symptom |
Loss of Access to Data |
| Environment |
Systems with data reduction pools running 8.7.x |
| Trigger |
Array rebuild |
| Workaround |
None |
|
9.1.1.0 |
Data Reduction Pools, Distributed RAID |
| SVAPAR-173548 |
All |
Critical
|
Removing and then adding vdisk-host mappings may cause multiple node warmstarts, leading to a loss of access to data.
| Symptom |
Loss of Access to Data |
| Environment |
Systems with multiple I/O groups |
| Trigger |
1. Create a mapping from a vdisk to a host in two I/O groups, with multiple SCSI IDs
2. Remove that mapping
3. Map a vdisk to the same host using one of the previously used SCSI IDs. |
| Workaround |
None. |
|
9.1.1.0 |
Hosts |
| SVAPAR-173858 |
All |
Critical
|
Expanding a production volume that is using asynchronous replication may trigger multiple node warmstarts and an outage on the recovery system.
| Symptom |
Loss of Access to Data |
| Environment |
System running software version 9.1.0 and using an asynchronous disaster recovery replication policy. |
| Trigger |
Expanding a volume in a volume group with an asynchronous disaster recovery replication policy. |
| Workaround |
None |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-175807 |
All |
Critical
|
Multiple node warmstarts may cause loss of access to data after upgrade to 8.7.2 or later, on a system that was once an AuxFar site in a 3-site replication configuration. This is due to invalid FlashCopy configuration state after removal of 3-site replication with HyperSwap or Metro Mirror, and does not apply to 3-site policy-based replication.
| Symptom |
Loss of Access to Data |
| Environment |
Any system that was previously an AuxFar site in a 3-site replication configuration. |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
3-Site using HyperSwap or Metro Mirror |
| SVAPAR-176238 |
All |
Critical
|
A node may go offline with node error 566, due to excessive logging related to DIMM errors.
| Symptom |
Loss of Redundancy |
| Environment |
Systems running 9.1.0 software |
| Trigger |
Logging of correctable DIMM errors |
| Workaround |
None |
|
9.1.1.0 |
Reliability Availability Serviceability |
| SVAPAR-177639 |
All |
Critical
|
Deletion of volumes in 3-site (HA+DR) replication may cause multiple node warmstarts. This can only occur if the volume previously used 2-site asynchronous replication, and was then converted to 3-site (HA+DR).
| Symptom |
Loss of Access to Data |
| Environment |
FlashSystem 7300 and C200 systems using 3-site replication |
| Trigger |
Deletion of a volume that previously used 2-site asynchronous replication |
| Workaround |
None |
|
9.1.1.0 |
Policy-based High availability, Policy-based Replication |
| SVAPAR-178250 |
All |
Critical
|
Node warmstarts may be triggered by a race condition during NVMe host reset, if the host is using Compare and Write commands. This can cause a loss of access to data.
| Symptom |
Loss of Access to Data |
| Environment |
Systems using NVMe hosts |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
NVMe Hosts |
| SVAPAR-178402 |
All |
Critical
|
Multiple node warmstarts may occur when there are a high number of errors on the fibre channel network.
| Symptom |
Multiple Node Warmstarts |
| Environment |
All systems using fibre channel connectivity. |
| Trigger |
A high number of errors on the fibre channel network. |
| Workaround |
None |
|
9.1.1.0 |
Fibre Channel |
| SVAPAR-179030 |
All |
Critical
|
The CIMOM configuration interface is no longer supported in 9.1.0. Attempting to manually restart the cimserver service may cause a node warmstart, and loss of configuration access.
| Symptom |
Configuration |
| Environment |
Systems running 9.1.0 |
| Trigger |
Attempting to restart the cimserver service |
| Workaround |
None |
|
9.1.1.0 |
Command Line Interface |
| SVAPAR-179930 |
All |
Critical
|
Node warmstarts may occur when backend IO is active on a fibre channel login that experiences a logout, on code level 9.1.0.0 or 9.1.0.1.
| Symptom |
Multiple Node Warmstarts |
| Environment |
Fibre channel logins between nodes, or between nodes and backend storage, on systems running code level 9.1.0.0 or 9.1.0.1. |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
Backend Storage, Fibre Channel |
| SVAPAR-163074 |
All |
High Importance
|
A single node restart may occur if the connectivity between systems in a policy-based replication partnership is unstable.
| Symptom |
Single Node Warmstart |
| Environment |
Systems in a policy-based replication partnership. |
| Trigger |
Frequent disconnections between systems in a policy-based replication partnership. |
| Workaround |
Fix any problem in the fabric/network that leads to the disconnections. |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-169255 |
All |
High Importance
|
High peak response times when adding snapshots for a volume group containing mirrored vdisks.
| Symptom |
Performance |
| Environment |
Stretched clusters or mirrored vdisks, combined with the volume group snapshot feature. |
| Trigger |
Add a snapshot for a volume group containing mirrored vdisks. |
| Workaround |
None |
|
9.1.1.0 |
Performance, Snapshots, Volume Mirroring |
| SVAPAR-170511 |
All |
High Importance
|
Node warmstarts caused by a race condition during NVMe host reset, if the host is using Compare and Write commands
| Symptom |
Multiple Node Warmstarts |
| Environment |
Systems using NVMe hosts |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
NVMe Hosts |
| SVAPAR-172540 |
All |
High Importance
|
Single node warmstart after loss of connection to a remote cluster when using secured IP partnerships
| Symptom |
Single Node Warmstart |
| Environment |
Systems using Secured IP partnerships for replication between systems |
| Trigger |
Link connection failure |
| Workaround |
None |
|
9.1.1.0 |
IP Replication |
| SVAPAR-172572 |
All |
High Importance
|
Node warmstart after a host submits a persistent reserve command, in a small timing window immediately after mapping a PBHA volume to the host
| Symptom |
Single Node Warmstart |
| Environment |
Hosts using persistent reserve commands in an environment using Policy-based High Availability (PBHA) |
| Trigger |
Mapping a volume to a host and the host sending a persistent reserve command immediately |
| Workaround |
Stop the partnership before mapping volumes to hosts (see the example after this entry). |
|
9.1.1.0 |
Policy-based High availability |
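One possible CLI sequence for the workaround above, sketched with placeholder IDs and names (take the real values from lspartnership and lshost output):

    lspartnership                                   # note the ID of the partnership to the remote system
    chpartnership -stop <remote_system_id>          # stop the partnership
    mkvdiskhostmap -host <host_name> <volume_name>  # map the volume while the partnership is stopped
    chpartnership -start <remote_system_id>         # restart the partnership once mapping is complete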
| SVAPAR-177127 |
All |
High Importance
|
The system may fail to install an externally signed system certificate via the GUI.
| Symptom |
Configuration |
| Environment |
All systems using system software version 9.1.0.0 or later |
| Trigger |
Installing an externally signed system certificate. |
| Workaround |
Use the CLI to install the certificate. |
|
9.1.1.0 |
Encryption, Graphical User Interface |
| SVAPAR-178257 |
All |
High Importance
|
During an OpenShift version upgrade, removing the last IQN from a host object incorrectly causes the portset to be reset to the default value. This can cause a loss of access to data.
| Symptom |
Loss of Access to Data |
| Environment |
Systems with OpenShift hosts |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
|
| SVAPAR-178258 & SVAPAR-178262 |
All |
High Importance
|
Systems may experience performance issues when configured with iSCSI hosts.
| Symptom |
Performance |
| Environment |
Systems using iSCSI hosts. |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
Performance, iSCSI |
| SVAPAR-178667 |
FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
High Importance
|
Node warmstarts caused by hung NVMe Compare and Write commands.
| Symptom |
Multiple Node Warmstarts |
| Environment |
Systems using NVMe hosts, especially VMware hosts. |
| Trigger |
Failed transfer during Compare phase. |
| Workaround |
None. |
|
9.1.1.0 |
NVMe Hosts |
| SVAPAR-179128 |
All |
High Importance
|
A single-node warmstart may occur on a system using policy-based replication or HA, due to a timing window triggered by disconnection of a partnership.
| Symptom |
Single Node Warmstart |
| Environment |
Systems using policy-based replication or policy-based High Availability |
| Trigger |
Disconnection of a partnership while messages are in flight between the two systems |
| Workaround |
None |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-179296 |
FS5000 |
High Importance
|
In 9.1.0.0 and 9.1.0.1, the "chvolume -size" command has no effect on FS5015 and FS5035. This prevents GUI volume resizing from working correctly.
| Symptom |
Configuration |
| Environment |
IBM Storage FlashSystem 5015 and 5035. |
| Trigger |
Resizing a volume using the "chvolume -size" CLI command or the GUI |
| Workaround |
Use the expandvdisksize command to expand the provisioned capacity of a volume by a specified amount (see the example after this entry). |
|
9.1.1.0 |
|
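A sketch of the workaround above: the provisioned capacity can be grown with expandvdisksize. The volume name and size below are examples only:

    expandvdisksize -size 10 -unit gb <volume_name>   # grow the provisioned capacity by 10 GB
    lsvdisk <volume_name>                             # confirm the new capacity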
| SVAPAR-139491 |
All |
Suggested
|
VMware hosts attached via NVMe may log errors related to opcode 0x5
| Symptom |
Configuration |
| Environment |
Systems with NVMe hosts. |
| Trigger |
None. |
| Workaround |
None. |
|
9.1.1.0 |
NVMe |
| SVAPAR-167695 |
All |
Suggested
|
If two-person integrity (TPI) is enabled, an LDAP user that is in multiple remote groups may not be able to remove safeguarded snapshots, even if a role elevation request has been approved.
| Symptom |
Configuration |
| Environment |
TPI enabled with an LDAP user that is in multiple groups containing SystemAdmin and non-SystemAdmin roles. |
| Trigger |
Attempting to delete a safeguarded snapshot after the user's role has been elevated. |
| Workaround |
Either remove the user from the non-security administrator group on the LDAP server, or delete and recreate the non-security administrator group on the FlashSystem (so that the security administrator group has a lower ID). See the example after this entry. |
|
9.1.1.0 |
Safeguarded Copy & Safeguarded Snapshots |
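A sketch of the second workaround option above (deleting and recreating the non-SecurityAdmin group gives it a new, higher ID, so the SecurityAdmin group keeps the lower ID). The group name and role here are hypothetical:

    lsusergrp                                                          # note the IDs of the remote user groups
    rmusergrp <non_secadmin_group>                                     # remove the remote group with the non-SecurityAdmin role
    mkusergrp -name <non_secadmin_group> -role Administrator -remote   # recreate it; the new group receives a higher ID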
| SVAPAR-170958 |
All |
Suggested
|
Policy-based asynchronous replication may not correctly balance the available bandwidth between nodes after a node goes offline, potentially causing a degradation of the recovery point
| Symptom |
Performance |
| Environment |
Systems using asynchronous policy-based replication |
| Trigger |
Node goes offline |
| Workaround |
Warmstart affected nodes |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-172478 |
All |
Suggested
|
Systems running 9.1.0.0 may incorrectly report 1585 "Could not connect to DNS server" errors
| Symptom |
Error in Error Log |
| Environment |
System with DNS configured |
| Trigger |
None |
| Workaround |
Register the cluster IP address in all configured DNS servers |
|
9.1.1.0 |
No Specific Feature |
| SVAPAR-172966 |
All |
Suggested
|
The real capacity of a thin-provisioned volume in a standard pool cannot be shrunk if the new real capacity is not a multiple of the grain size
| Symptom |
Configuration |
| Environment |
Systems using thin-provisioned volumes in standard pools |
| Trigger |
Shrinking a volume if the final real capacity is not a multiple of the grain size |
| Workaround |
Adjust the shrink request so that the final real capacity is a multiple of the grain size (see the example after this entry). |
|
9.1.1.0 |
Thin Provisioning |
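An illustration of the workaround above, with hypothetical names and values: check the copy's grain size and current real capacity first, then choose a shrink amount that leaves the real capacity on a grain-size boundary:

    lsvdisk <thin_volume>                             # detailed view shows real_capacity and grain_size
    shrinkvdisksize -rsize 1 -unit gb <thin_volume>   # pick an amount so the resulting real capacity stays a multiple of the grain size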
| SVAPAR-173310 |
All |
Suggested
|
If both nodes in an IO Group go down unexpectedly, invalid snapshots that cannot be removed may remain in the system
| Symptom |
Configuration |
| Environment |
Systems using volume group snapshots. |
| Trigger |
Both nodes in an IO Group go down unexpectedly. |
| Workaround |
None. |
|
9.1.1.0 |
Snapshots |
| SVAPAR-175855 |
All |
Suggested
|
A new volume may incorrectly show a source_volume_name and source_volume_id, when it inherits the vdisk ID of a deleted clone volume.
| Symptom |
Configuration |
| Environment |
Systems using volume group snapshots and clones |
| Trigger |
1. Create a snapshot of a volume group.
2. Create a clone from the snapshot.
3. Delete the clone.
4. Create a new volume. |
| Workaround |
None. |
|
9.1.1.0 |
Snapshots |
| SVAPAR-177359 |
All |
Suggested
|
Users may need to log out of iSCSI sessions individually, as simultaneous logout is not supported.
| Symptom |
None |
| Environment |
iSCSI hosts |
| Trigger |
iSCSI logout processing |
| Workaround |
None |
|
9.1.1.0 |
iSCSI |
| SVAPAR-177771 |
All |
Suggested
|
Encryption with internal key management may be unable to perform the scheduled daily re-key of the internal key. The event log will show a daily repeating information event when this occurs. The current internal recovery key will continue to function.
| Symptom |
Configuration |
| Environment |
Systems using software version 9.1.0.x and encryption with internal key management. |
| Trigger |
A node reboot or warmstart with the affected configuration. |
| Workaround |
None |
|
9.1.1.0 |
Encryption |
| SVAPAR-178208 |
All |
Suggested
|
A race condition in I/O processing for NVMe over RDMA/TCP hosts may lead to a single node warmstart.
| Symptom |
Single Node Warmstart |
| Environment |
Systems with NVMe over RDMA or TCP hosts attached. |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
NVMe Hosts |
| SVAPAR-178320 |
All |
Suggested
|
When an invalid subject alternative name is entered for a mksystemcertstore command, the system returns "CMMVC5786E The action failed because the cluster is not in a stable state".
| Symptom |
Configuration |
| Environment |
All systems at 9.1.0 or higher. |
| Trigger |
Running the mksystemcertstore command with an invalid subject alternative name value. |
| Workaround |
None |
|
9.1.1.0 |
Encryption |
| SVAPAR-178323 |
All |
Suggested
|
The system may attempt to authenticate an LDAP user who is not in any remote user group using a null password.
| Symptom |
Configuration |
| Environment |
Systems using LDAP authentication. |
| Trigger |
Attempting to sign into the system with a user who is not in any remote user group. |
| Workaround |
None |
|
9.1.1.0 |
LDAP |
| SVAPAR-178807 |
All |
Suggested
|
A single node may warmstart if a volume group being replicated with PBR (Policy-Based Replication) is deleted during its initial synchronization.
| Symptom |
Single Node Warmstart |
| Environment |
Systems with PBR (Policy-Based Replication) configured. |
| Trigger |
Volume group deletion. |
| Workaround |
None |
|
9.1.1.0 |
Policy-based Replication |
| SVAPAR-179086 |
All |
Suggested
|
The ransomware threat detection process does not always send an alert when a threat is detected.
| Symptom |
Configuration |
| Environment |
Systems with the ransomware threat detection feature |
| Trigger |
None |
| Workaround |
None |
|
9.1.1.0 |
System Monitoring |
| SVAPAR-179184 |
All |
Suggested
|
The GUI allows DR linking to a partner system which does not support DR linking
| Symptom |
Configuration |
| Environment |
Systems using 3-site HA+DR replication |
| Trigger |
Attempting creation of a DR link to a system whose software does not support 3-site HA+DR replication. |
| Workaround |
None |
|
9.1.1.0 |
Graphical User Interface, Policy-based Replication, Storage Partitions |
| SVAPAR-179196 |
All |
Suggested
|
When partnership creation is attempted using the GUI for a remote system that already has a partnership, an error is produced, but a new truststore is incorrectly created.
| Symptom |
Configuration |
| Environment |
Systems using policy-based replication or HA |
| Trigger |
Attempting to create a policy-based replication partnership using the GUI when such a partnership already exists |
| Workaround |
Manually remove the new truststore (see the example after this entry). |
|
9.1.1.0 |
Graphical User Interface, Policy-based Replication |
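A minimal sketch of the workaround above; the truststore to remove would be identified from the list output rather than from the placeholder shown here:

    lstruststore                            # identify the truststore that was created in error
    rmtruststore <truststore_id_or_name>    # remove it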
| SVAPAR-179812 |
All |
Suggested
|
Event ID 86014 (encryption recovery key is not configured) is shown with the message for event ID 86015 (internal key management rekey failed), and vice-versa.
| Symptom |
Error in Error Log |
| Environment |
Internal Key Management |
| Trigger |
Failure of rekey operation for internal key |
| Workaround |
None |
|
9.1.1.0 |
Encryption |
| SVAPAR-179874 |
All |
Suggested
|
The GUI displays the old partition name after a partition is renamed.
| Symptom |
Configuration |
| Environment |
The GUI's volume groups view, which continues to display the previous partition name for volume groups associated with the renamed partition. |
| Trigger |
Renaming partition |
| Workaround |
None |
|
9.1.1.0 |
Graphical User Interface |