SVAPAR-164082 |
All |
HIPER
|
When configuring 3-site replication by adding a policy-based HA copy to an existing volume that already has DR configured, any writes that arrive during a small timing window will not be mirrored to the new HA copy, causing undetected data corruption.
Symptom |
Data Integrity Loss |
Environment |
Systems running 8.7.3 and higher |
Trigger |
Adding an HA copy to an existing volume which is already configured with asynchronous replication |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-170657 |
All |
HIPER
|
A problem with NVMe drives on FlashSystem 9500 may impact node to node communication over the PCIe bus. This may lead to a temporary array offline.
SVAPAR-134589 previously addressed the same issue, but that fix was found to be incomplete.
Symptom |
Loss of Access to Data |
Environment |
No Value |
Trigger |
NVMe drive failure |
Workaround |
None |
|
9.1.0.0 |
Drives |
SVAPAR-143997 |
All |
Critical
|
A single node warmstart may occur when the upper cache reaches 100% full while the partner node in the I/O group is offline
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
Partner node is offline |
Workaround |
None |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-144389 |
SVC |
Critical
|
In an SVC stretched cluster, adding a second vdisk copy to a PBR-enabled volume using the GUI does not automatically add a copy to the change volume. This can cause subsequent vdisk migration requests to fail.
Symptom |
Configuration |
Environment |
SVC stretched cluster with PBR |
Trigger |
Adding a vdisk copy to a PBR-enabled volume in an SVC stretched cluster |
Workaround |
Manually add the copy to the change volume with the CLI command "addvdiskcopy -mdiskgrp <mdiskgrp_id> <change_volume_vdisk_id>" (see the example after this entry). The -rsize parameter is not needed to make the change volume thin-provisioned, as this is implicit for PBR change volumes. |
|
9.1.0.0 |
Policy-based Replication |
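A minimal sketch of the workaround above, assuming the change volume has vdisk ID 27 and the second copy should be placed in storage pool 1 (both IDs are placeholders):
  # Add a second copy of the PBR change volume to mdiskgrp 1; -rsize is
  # omitted because PBR change volumes are implicitly thin-provisioned.
  svctask addvdiskcopy -mdiskgrp 1 27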
SVAPAR-148987 |
SVC |
Critical
|
SVC model SV1 nodes running 8.5.0.13 may be unable to access keys from USB sticks when using USB encryption
Symptom |
Loss of Access to Data |
Environment |
2145-SV1 or 2147-SV1 nodes running 8.5.0.13 with USB encryption enabled. |
Trigger |
None |
Workaround |
Encryption key servers are unaffected by this issue, and can be used instead of USB encryption |
|
9.1.0.0 |
Encryption |
SVAPAR-150198 |
All |
Critical
|
Multiple node warmstarts (causing loss of access to data) may occur on 8.7.1 and 8.7.2 software, when deleting a volume that is in a volume group, and previously received a persistent reservation request.
Symptom |
Loss of Access to Data |
Environment |
Systems where volumes in a volume group received persistent reservation requests from hosts, while running pre-8.7.1 software. |
Trigger |
Deletion of a volume that is in a volume group |
Workaround |
Do not delete any volumes that are inside volume groups prior to upgrading to the fix for this APAR. |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication, Snapshots |
SVAPAR-150433 |
All |
Critical
|
In certain policy-based 3-site replication configurations, a loss of connectivity between HA systems may cause I/O timeouts and loss of access to data.
Symptom |
Loss of Access to Data |
Environment |
Policy-based 3-site replication, where HA was added to an existing asynchronous replication configuration. If asynchronous replication was added to an existing HA configuration, the system is not exposed. |
Trigger |
Loss of connectivity between HA systems. |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-150764 |
All |
Critical
|
At 8.7.2.0, loss of access to vVols may occur after node failover if the host rebind operation fails.
Symptom |
Loss of Access to Data |
Environment |
Systems using vVols with 8.7.2.0 |
Trigger |
Node failover |
Workaround |
None |
|
9.1.0.0 |
vVols |
SVAPAR-153584 |
All |
Critical
|
A node warmstart may occur when a system using policy-based high availability loses connectivity to the remote system. In rare instances, both nodes in the system can warmstart at the same time. This is due to an inter-node messaging timing window.
Symptom |
Loss of Access to Data |
Environment |
Policy-based High Availability with 8.7.1 or earlier software |
Trigger |
Loss of connectivity between PBHA systems |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-155395 |
All |
Critical
|
Hardware failure of a node at the exact moment that a volume is being created can result in an invalid cache state. If I/O is received by that volume before the failed node recovers, node warmstarts may cause loss of access to data.
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
Node hardware failure at the exact time that a volume is being created |
Workaround |
None |
|
9.1.0.0 |
Cache |
SVAPAR-155437 |
All |
Critical
|
When enabling replication for a volume group, there is a very low probability that the DR system might detect an invalid state, due to a timing window between creation of the volume group and the volumes. If this happens, both nodes at the DR system might warmstart at the same time.
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
Addition of a replication policy to a volume group |
Workaround |
None |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-155656 |
All |
Critical
|
Multiple node asserts when removing a VDisk copy (or adding a copy with the autodelete parameter) from a policy-based replication recovery volume
Symptom |
Multiple Node Warmstarts |
Environment |
Policy-based replication recovery system |
Trigger |
Removing a VDisk copy from a replication recovery volume, or adding a copy with the autodelete parameter |
Workaround |
Do not remove VDisk copies and do not add VDisk copies with the autodelete parameter on replication recovery volumes |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-155697 |
All |
Critical
|
Loss of access to data caused by a partial failure of an internal PCI express bus
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-156976 |
All |
Critical
|
Volumes in a data reduction pool with deduplication enabled may be taken offline due to a metadata inconsistency
Symptom |
Offline Volumes |
Environment |
Systems using DRP with Deduplication |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Data Reduction Pools |
SVAPAR-157355 |
All |
Critical
|
Multiple node warmstarts under very heavy workloads from NVMe over FC hosts.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with NVMe over FC hosts |
Trigger |
Heavy workloads |
Workaround |
None |
|
9.1.0.0 |
NVMe Hosts |
SVAPAR-160242 |
All |
Critical
|
If an FCM array is offline due to an out-of-space condition during array expansion, and a T2 recovery takes place, the recovery may fail, resulting in both nodes being offline with node error 564.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with FCM arrays |
Trigger |
T2 recovery |
Workaround |
None |
|
9.1.0.0 |
RAID |
SVAPAR-161932 |
All |
Critical
|
Creating 20480 thin-provisioned volumes may cause multiple node warmstarts and loss of access to data, due to an invalid internal count of the number of thin-provisioned volumes.
Symptom |
Loss of Access to Data |
Environment |
Systems using a large number of thin-provisioned volumes |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Thin Provisioning |
SVAPAR-162836 |
All |
Critical
|
Node warmstarts when starting a Volume Group Snapshot if there are multiple legacy FlashCopy maps with the same target volume.
Symptom |
Multiple Node Warmstarts |
Environment |
Volume Group Snapshots and legacy FlashCopy maps on the volumes |
Trigger |
If the source vdisk of a legacy FlashCopy map is in a volume group, and that FlashCopy map shares a target with another legacy FlashCopy map, a timeout can occur and cause the nodes to warmstart. |
Workaround |
Delete any legacy FlashCopy maps that share a target with another legacy FlashCopy map.
|
|
9.1.0.0 |
FlashCopy, Snapshots |
SVAPAR-164206 |
All |
Critical
|
A short loss of access can occur due to a cluster warmstart when deleting a volume with a persistent reservation on systems protected by policy-based high availability.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running 8.7.2 or higher and using policy-based high availability. |
Trigger |
Deletion of a volume which has a persistent reservation |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-164430 |
All |
Critical
|
Repeated single node warmstarts may occur in a 3-site policy-based HA+DR configuration, due to a timing window during an HA failover scenario.
Symptom |
Multiple Node Warmstarts |
Environment |
Policy-based high availability with asynchronous disaster recovery (3-site replication). |
Trigger |
None. |
Workaround |
Disable the link to the disaster recovery system. |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-167390 |
All |
Critical
|
A T2 recovery may occur when manually adding a node to the system while an upgrade with the -pause flag is in progress
Symptom |
Loss of Access to Data |
Environment |
Systems that are in the process of being upgraded with the -pause flag |
Trigger |
Adding a node to the system |
Workaround |
Abort the upgrade |
|
9.1.0.0 |
System Update |
SVAPAR-167865 |
All |
Critical
|
Systems with heavy iSCSI I/O workload may show a decrease in performance after upgrade to 8.7.0 or later
Symptom |
Performance |
Environment |
Systems with iSCSI hosts |
Trigger |
Heavy iSCSI host workload |
Workaround |
None |
|
9.1.0.0 |
Performance, iSCSI |
SVAPAR-170351 |
All |
Critical
|
A loss of access to data may occur upon an attempt to use the 'charraymember' command to replace a drive while its rebuild is in progress.
Symptom |
Loss of Access to Data |
Environment |
A degraded RAID-6 array with drive undergoing rebuild. |
Trigger |
Using the charraymember -member X -newdrive Y <Array Name> command on a member drive whose rebuild is in progress. |
Workaround |
Do not trigger replacement using the charraymember command on a member drive whose rebuild is in progress. |
|
9.1.0.0 |
RAID |
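A hedged illustration of the workaround above (the array name, member index and drive ID are placeholders): check that no rebuild is in progress before requesting a member replacement.
  # Show rebuild/copyback progress for members of array mdisk3; wait until nothing is listed
  svcinfo lsarraymemberprogress mdisk3
  # Only once the rebuild has completed, replace member 2 with drive 17
  svctask charraymember -member 2 -newdrive 17 mdisk3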
SVAPAR-170429 |
All |
Critical
|
Policy-based replication or HA may suspend after upgrade to 8.7.0.5, due to a change volume on the recovery system being in an invalid state.
Symptom |
None |
Environment |
Clusters in 8.7.0.5 running PBR |
Trigger |
None |
Workaround |
Contact support for assistance with restarting replication. |
|
9.1.0.0 |
Policy-based Replication |
HU00556 |
FS5000, SVC |
High Importance
|
A partner node may go offline when a node in the IO group is removed, due to a timing window in space-efficient volume processing.
Symptom |
Single Node Warmstart |
Environment |
When a system is in the process of destaging metadata of a space-efficient VDisk and the owner node stops. |
Trigger |
Partner node goes offline whilst a node is destaging metadata of a space-efficient VDisk |
Workaround |
For SVC environments, spare nodes can prevent this issue. For planned node maintenance, workarounds are to use 'rmnode' to remove the node from the cluster before taking it offline, or to use 'movevdisk' to move the preferred node to the partner node for all space-efficient VDisks before taking a node offline (see the example after this entry). |
|
9.1.0.0 |
Thin Provisioning |
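A minimal sketch of the planned-maintenance workaround above, assuming node ID 2 is being taken offline and vdisk 14 is a space-efficient volume whose preferred node is node 2 (all IDs are placeholders):
  # Move the preferred node of the space-efficient volume to the partner node (node 1)
  svctask movevdisk -node 1 14
  # Alternatively, remove the node from the cluster before taking it offline
  svctask rmnode 2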
HU02293 |
All |
High Importance
|
MDisk groups can go offline due to an overall timeout after a hot spare node comes online, if the backend storage is configured incorrectly
Symptom |
Offline Volumes |
Environment |
Environments with hot spare nodes and also have backend storage that is configured incorrectly |
Trigger |
Hot spare node coming online |
Workaround |
Correctly map the MDisks to the spare nodes |
|
9.1.0.0 |
Hot Spare Node |
HU02493 |
SVC |
High Importance
|
On certain controllers that have more than 511 LUNs configured, MDisks may go offline
Symptom |
Offline Volumes |
Environment |
Any system that has more than 511 LUNs |
Trigger |
More than 511 LUNs |
Workaround |
This problem can be resolved by reducing the LUN count to 511 or below on any affected controller. |
|
9.1.0.0 |
Backend Storage |
SVAPAR-120649 |
All |
High Importance
|
Node warmstart triggered by a small timing window when temporarily pausing IO in response to a configuration change.
Symptom |
Single Node Warmstart |
Environment |
Systems using RAID arrays |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Distributed RAID, RAID |
SVAPAR-137361 |
All |
High Importance
|
A battery may incorrectly enter a failed state, if input power is removed within a small timing window
Symptom |
Loss of Redundancy |
Environment |
None |
Trigger |
Removal of input power at the same time a battery power test is in progress |
Workaround |
Unplug the battery from the node canister and leave it unplugged for at least 10 minutes. Then re-install the battery into the canister.
|
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-142940 |
All |
High Importance
|
IO processing unnecessarily stalled for several seconds following a node coming online
Symptom |
Performance |
Environment |
Systems that have performed an NVMe drive firmware upgrade in the past. Systems with a syslog server configured have a larger impact. |
Trigger |
A node coming online |
Workaround |
Contact IBM support. |
|
9.1.0.0 |
Performance |
SVAPAR-144069 |
All |
High Importance
|
On a system with SAS drives, if a node canister is replaced while an unsupported drive is in the enclosure, all nodes may warmstart simultaneously, causing a loss of access to data.
Symptom |
Multiple Node Warmstarts |
Environment |
SAS systems with an unsupported drive |
Trigger |
Node canister replacement |
Workaround |
Remove the unsupported drive from the enclosure to stabilise the system |
|
9.1.0.0 |
Drives |
SVAPAR-145278 |
All |
High Importance
|
Upgrade from 8.7.0 to 8.7.1 may cause an invalid internal state, if policy-based replication is in use. This may lead to node warmstarts on the recovery system, or cause replication to stop.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using policy-based replication |
Trigger |
Upgrade from 8.7.0 to 8.7.1 |
Workaround |
Make all recovery volume groups independent before upgrading from 8.7.0 to 8.7.1. |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-146097 |
All |
High Importance
|
On systems running 8.7.0 or 8.7.1 software with NVMe drives, at times of particularly high workload, there is a low probability of a single-node warmstart.
|
9.1.0.0 |
Drives |
SVAPAR-147223 |
All |
High Importance
|
System does not notify hosts of ALUA changes prior to hot spare node failback. This may prevent host path failover, leading to loss of access to data.
Symptom |
Loss of Access to Data |
Environment |
Systems with hot spare nodes |
Trigger |
Hot spare node failback |
Workaround |
None |
|
9.1.0.0 |
Hot Spare Node |
SVAPAR-147647 |
All |
High Importance
|
A 32Gb Fibre Channel adapter may unexpectedly reset, causing a delay in communication via that adapter's ports.
Symptom |
None |
Environment |
Systems with 32Gb Fibre Channel adapters. |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Fibre Channel |
SVAPAR-148251 |
All |
High Importance
|
Merging partitions on 8.7.1.0 software may trigger a single-node warmstart.
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.7.1.0 software with partitions |
Trigger |
Partition merge |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-150663 |
All |
High Importance
|
Some FCM3 drives may go offline on upgrade to 8.7.2.0.
Symptom |
Loss of Redundancy |
Environment |
Systems with FCM3 product ID 101406B2 or 101406B3 |
Trigger |
Upgrade to 8.7.2.0 |
Workaround |
No Value |
|
9.1.0.0 |
Drives |
SVAPAR-152902 |
All |
High Importance
|
If a system is using asynchronous policy-based replication, certain unusual host I/O workloads can cause an I/O timeout to be incorrectly detected, triggering node warmstarts at the recovery site.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using policy-based replication
|
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-154100 |
All |
High Importance
|
A node warmstart may occur to clear the condition when Fibre Channel adapter firmware has started processing a target I/O request, but has failed the request with status "Invalid Receive Exchange Address".
Symptom |
Single Node Warmstart |
Environment |
Systems with Fibre Channel adapters |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Fibre Channel |
SVAPAR-154387 |
All |
High Importance
|
Running multiple supportupload commands in quick succession may cause an out of memory condition, which leads to a node warmstart.
Symptom |
Single Node Warmstart |
Environment |
CLI |
Trigger |
Supportupload command |
Workaround |
Do not run multiple supportupload commands in quick succession if they fail with:
CMMVC8092E An error occurred in communicating with the remote server. |
|
9.1.0.0 |
Command Line Interface |
SVAPAR-154399 |
All |
High Importance
|
Policy-based high availability may be suspended and unable to restart after an upgrade to 8.7.2.x, due to a timing window.
Symptom |
Loss of Redundancy |
Environment |
System using Policy-based high availability |
Trigger |
Upgrading to 8.7.2.x |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-154963 |
All |
High Importance
|
On systems with 8.7.2 software, a single node assert may occur due to a race condition when deleting volumes, hosts or host mappings that are part of a policy-based high availability partition.
Symptom |
Single Node Warmstart |
Environment |
8.7.2 systems using policy-based high availability |
Trigger |
Deleting volumes, hosts or host mappings that are part of a policy-based high availability partition. |
Workaround |
Avoid deleting volumes, hosts or host mappings that are part of a policy-based high availability partition until an upgrade to a version with the fix is done. |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-157561 |
All |
High Importance
|
A single node warmstart may occur if collecting data from Ethernet SFPs times out.
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
None. |
Workaround |
None. |
|
9.1.0.0 |
IP Replication, iSCSI |
SVAPAR-158915 |
All |
High Importance
|
A single node warmstart may occur if a host issues more than 65535 write requests, to a single 128KB region of a policy-based replication or HA volume, within a short period of time.
Symptom |
Single Node Warmstart |
Environment |
Systems using policy-based replication or HA with 8.7.1 or later software. |
Trigger |
More than 65535 write requests to a single 128KB region of a volume |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-159284 |
All |
High Importance
|
Multiple node asserts may occur on a system using IP Replication, if a low-probability timing window results in a node receiving an IP replication login from itself.
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using IP Replication. |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
IP Replication |
SVAPAR-159430 |
All |
High Importance
|
Multiple node warmstarts may occur if a PBHA volume receives more than 100 persistent reserve registrations from hosts.
Symptom |
Multiple Node Warmstarts |
Environment |
Policy Based High Availability with hosts using persistent reserve |
Trigger |
SCSI reserve used on a PBHA volume, where there are more than 100 paths between that volume and the host. For example, 16 hosts each with 2 ports, where each host port is visible to 4 storage ports, would result in 128 paths to a volume. |
Workaround |
Limit the number of hosts mapped to the volume so that the SCSI Persistent Reservation registration count stays below 100. |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-161263 |
All |
High Importance
|
Systems with NVMe hosts may experience multiple node warmstarts on 8.7.3.x software
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with NVMe hosts |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Hosts |
SVAPAR-164368 |
All |
High Importance
|
Recurring node warmstarts within an IO group after starting DRP recovery.
Symptom |
Multiple Node Warmstarts |
Environment |
Data Reduction Pools |
Trigger |
repairpool -runrecovery |
Workaround |
None |
|
9.1.0.0 |
Data Reduction Pools |
SVAPAR-164839 |
All |
High Importance
|
A node containing iWARP adapters may fail to reboot.
Symptom |
Loss of Redundancy |
Environment |
Systems with iWARP adapters. |
Trigger |
Rebooting a node canister. |
Workaround |
None |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-165194 |
All |
High Importance
|
A node warmstart may occur when a host issues a persistent reserve command to an HA volume, and another I/O is received at the exact time the persistent reserve command completes.
Symptom |
Single Node Warmstart |
Environment |
Policy-based high availability with persistent reserves. |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-166971 |
All |
High Importance
|
Unable to change the replication mode of a volume group due to invalid internal state.
Symptom |
Error in Error Log |
Environment |
Policy-based replication. |
Trigger |
Change the replication mode of a volume group to independent and then attempt to change it back to production. |
Workaround |
Work with IBM support and use manual commands to fix the invalid internal state. |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-167040 |
All |
High Importance
|
Single node warmstart triggered by making the DR site into an independent copy whilst replication is still active.
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.7.1 or higher using policy-based replication |
Trigger |
Making the DR site independent before stopping replication, for example as part of a DR testing procedure. |
Workaround |
Stop DR replication before making the DR site independent |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-168411 |
All |
High Importance
|
A single-node warmstart caused by the IO statistics processing exceeding a timeout
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-169250 |
All |
High Importance
|
Adding a GKLM server at version 5.x may result in an error saying the key server is not supported. This is due to a change in the way that GKLM identifies itself when the storage system connects to it.
Symptom |
Configuration |
Environment |
Systems using a GKLM server at version 5.x |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Encryption |
SVAPAR-170340 |
All |
High Importance
|
On systems using PBR/PBHA as well as FlashCopy, a node may warmstart due to a problem during background copy processing in the FlashCopy component.
Symptom |
Multiple Node Warmstarts |
Environment |
System using PBR/PBHA and Flashcopy on pre-9.1.0.0 software |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
FlashCopy |
SVAPAR-170368 |
All |
High Importance
|
FS50xx systems may incorrectly report failed batteries (node error 652) after a cold boot.
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
Cold boot (for example after a planned power down). |
Workaround |
Contact support to reset the failed battery marker. |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-170438 |
All |
High Importance
|
After a power outage, FCM drives may fail to come online when power is restored.
Symptom |
Loss of Access to Data |
Environment |
System with FCM drives |
Trigger |
Power Outage |
Workaround |
Reseat the drives |
|
9.1.0.0 |
Drives |
SVAPAR-92804 |
FS5000 |
High Importance
|
The SAS direct-attach host path is not recovered after a node reboot, causing a persistent loss of redundant paths.
Symptom |
Loss of Redundancy |
Environment |
Mostly seen with ESXi hosts with Lenovo 430-8e SAS/SATA 12Gb HBA (LSI) |
Trigger |
Node reboot, warmstart. |
Workaround |
None |
|
9.1.0.0 |
Hosts |
SVAPAR-114758 |
All |
Suggested
|
Following a cluster recovery, the names of some backend storage controllers can be lost, resulting in default names such as controller0.
Symptom |
Configuration |
Environment |
Any with external storage controllers |
Trigger |
T2 cluster recovery |
Workaround |
None |
|
9.1.0.0 |
Backend Storage |
SVAPAR-130984 |
All |
Suggested
|
Configuring Policy-based replication in the GUI fails if the system authentication service type is unused.
Symptom |
Configuration |
Environment |
Systems that previously upgraded from 8.3.0 or lower and used TIP (Tivoli Integrated Portal) authentication. |
Trigger |
Attempting to link pools or create a replication policy in the GUI |
Workaround |
Change authentication service type to LDAP using: 'svctask chauthservice -type ldap' |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-142190 |
All |
Suggested
|
The user role descriptions in the GUI are wrong for the CopyOperator.
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-142939 |
FS5000 |
Suggested
|
Upgrade to 8.7.1 on FS5045 with policy-based replication or high availability is not supported.
Symptom |
Configuration |
Environment |
FS5045 with PBR or PBHA |
Trigger |
Attempted upgrade to 8.7.1 |
Workaround |
None |
|
9.1.0.0 |
Policy-based Replication |
SVAPAR-144033 |
All |
Suggested
|
Spurious 1370 events against SAS drives which are not members of an array.
Symptom |
Error in Error Log |
Environment |
Systems with SAS drives. |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-144062 |
All |
Suggested
|
A node may warmstart due to a problem with IO buffer management in the cache component.
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Cache |
SVAPAR-144515 |
All |
Suggested
|
When trying to increase FlashCopy or volume mirroring bitmap memory, the GUI may incorrectly report that the new value exceeds combined memory limits.
Symptom |
Configuration |
Environment |
8.7.0 or 8.7.1 systems using FlashCopy or volume mirroring |
Trigger |
Trying to increase bitmap space for FC / MR |
Workaround |
Use the CLI to increase bitmap memory (see the example after this entry) |
|
9.1.0.0 |
FlashCopy, Volume Mirroring |
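As a hedged example of the CLI workaround above, assuming I/O group io_grp0 and a target of 40 MB of bitmap memory (both values are placeholders):
  # Increase the FlashCopy bitmap memory for io_grp0 to 40 MB
  svctask chiogrp -feature flash -size 40 io_grp0
  # Use -feature mirror for the volume mirroring bitmap instead
  svctask chiogrp -feature mirror -size 40 io_grp0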
SVAPAR-145892 |
All |
Suggested
|
An unfixed error in the event log might display an incorrect fixed timestamp
Symptom |
Error in Error Log |
Environment |
Systems with an unfixed error in the eventlog |
Trigger |
A new unfixed error is logged and it re-uses an eventlog entry of an old fixed error |
Workaround |
Clear the error log |
|
9.1.0.0 |
Reliability Availability Serviceability |
SVAPAR-146640 |
All |
Suggested
|
When volume latency increases from below 1ms to above 1ms, the units in the GUI performance monitor will be incorrect.
Symptom |
None |
Environment |
Systems running 8.7.x |
Trigger |
Volume latency increasing from below 1ms to above 1ms |
Workaround |
Refresh the GUI cache |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-148606 |
All |
Suggested
|
Storage Insights log collection may fail with the message "another wrapper is running:". Sometimes this prevents future log upload requests from starting.
Symptom |
Configuration |
Environment |
Systems using Cloud Callhome |
Trigger |
Collecting a snap remotely via Cloud Callhome |
Workaround |
None |
|
9.1.0.0 |
Call Home |
SVAPAR-152472 |
FS5000 |
Suggested
|
Volume group details view in the GUI might show a blank page for FlashSystem 5045 systems running 8.7.1
Symptom |
Configuration |
Environment |
FlashSystem 5045 using volume groups. |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-153310 |
All |
Suggested
|
Adding an HA policy to a partition may fail with a CMMVC1249E error. This will only happen if a DR partition in a 3-site configuration is deleted, a new partition is created with the same ID, and the user attempts to add an HA policy to that partition.
Symptom |
Configuration |
Environment |
Systems using policy-based high availability with DR |
Trigger |
Attempt to create an HA partition after deletion of a DR partition. |
Workaround |
No Value |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-156142 |
All |
Suggested
|
Persistent reservation requests for volumes configured with Policy-based High Availability might be rejected during a small timing window after a node comes online
Symptom |
None |
Environment |
Systems with Policy-based High Availability and hosts using persistent reservations |
Trigger |
Hosts sending persistent reservation requests during a small timing window after a node comes online |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability |
SVAPAR-156146 |
All |
Suggested
|
The GUI's encryption panel displays "Encryption is not fully enabled" when encryption is enabled but the encryption recovery key has not been configured.
Symptom |
Configuration |
Environment |
Systems running 8.6.2.0 or higher and using encryption. |
Trigger |
Enabling encryption and not configuring an encryption recovery key. |
Workaround |
None |
|
9.1.0.0 |
Encryption, Graphical User Interface |
SVAPAR-157164 |
All |
Suggested
|
Removing a volume group with a 3-site disaster recovery link may cause the volume group to have a state that prevents configuring it for 2-site asynchronous disaster recovery in the future
Symptom |
Configuration |
Environment |
Systems using policy-based replication |
Trigger |
Removing a volume group that was configured for 3-site disaster recovery, then recreating the volume group and trying to configure 2-site disaster recovery |
Workaround |
None |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-157700 |
All |
Suggested
|
Systems on 8.7.3 may be unable to establish a partnership for policy-based replication or high availability, with systems on lower code levels that have volume group snapshots.
Symptom |
Configuration |
Environment |
One system running 8.7.3 and the partner system on a lower code level with volume group snapshots. |
Trigger |
Trying to create a partnership for policy-based replication or high availability. |
Workaround |
Upgrade all systems to 8.7.3, or remove the volume group snapshots from the system on the lower code level. |
|
9.1.0.0 |
Policy-based High availability, Policy-based Replication |
SVAPAR-159867 |
FS5000, FS7200, FS7300 |
Suggested
|
A successful drive firmware update can report a 3090 event indicating that the update has failed. This is caused by some types of SAS drives taking longer to update.
Symptom |
Error in Error Log |
Environment |
SAS drives |
Trigger |
Drive firmware update |
Workaround |
This is a cosmetic error; the drive firmware update does succeed even when the event is reported. |
|
9.1.0.0 |
Drives |
SVAPAR-161517 |
All |
Suggested
|
Exported trust store file cannot be read
Symptom |
Configuration |
Environment |
Systems with a trust store |
Trigger |
Exporting a trust store using the GUI or svctask chtruststore -export |
Workaround |
None |
|
9.1.0.0 |
No Specific Feature |
SVAPAR-161518 |
FS5000 |
Suggested
|
Configuration backup and support data collection may create files with invalid JSON encoding
Symptom |
None |
Environment |
Systems that do not support FlashSystem grid |
Trigger |
Collecting a configuration backup or support data collection (snap) |
Workaround |
None |
|
9.1.0.0 |
Support Data Collection |
SVAPAR-164062 |
All |
Suggested
|
lsvdisk does not accept the filter values 'is_safeguarded_snapshot' and 'safeguarded_snapshot_count'.
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Command Line Interface |
SVAPAR-164064 |
All |
Suggested
|
The HTTP OPTIONS command is accepted on the service IP address, but should be blocked.
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-164777 |
All |
Suggested
|
Snap collection using Storage Insights without a data collector may fail on systems running 8.7.2 or higher
Symptom |
Configuration |
Environment |
Systems using Storage Insights without a data collector |
Trigger |
Collecting a snap |
Workaround |
Create and upload a snap from the system management interface. |
|
9.1.0.0 |
Reliability Availability Serviceability, Support Data Collection |
SVAPAR-166429 |
All |
Suggested
|
After migration from remote copy to policy-based replication, volume groups that have been made "independent" may be unable to be set back to "production" replication mode. This is due to the migrated volumes having inconsistent configuration data.
Symptom |
None |
Environment |
The system needs to have volumes that were migrated from remote copy to policy based replication. |
Trigger |
Change the replication mode to independent and then attempt to change it back to production. |
Workaround |
None. |
|
9.1.0.0 |
Global Mirror, Policy-based Replication |
SVAPAR-168102 |
All |
Suggested
|
Service GUI multi-factor authentication as superuser may not work, if the authentication provider is using PKCE (Proof Key for Code Exchange).
Symptom |
Configuration |
Environment |
Systems using multi-factor authentication. |
Trigger |
None. |
Workaround |
None. |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-168639 |
All |
Suggested
|
The Licensed Functions view in the GUI may show incomplete information for restricted users.
Symptom |
Configuration |
Environment |
None. |
Trigger |
None. |
Workaround |
None. |
|
9.1.0.0 |
Graphical User Interface |
SVAPAR-170309 |
All |
Suggested
|
The IP quorum application might not connect to a system if the request to discover node IP addresses fails
Symptom |
Error in Error Log |
Environment |
Systems using Policy-based High Availability |
Trigger |
IP quorum fails to discover node IP addresses |
Workaround |
Re-deploy the IP quorum application |
|
9.1.0.0 |
IP Quorum, Policy-based High availability |
SVAPAR-170498 |
All |
Suggested
|
A timing window may cause a single-node warmstart, if a volume is deleted at the same time that volume compressibility is being measured.
Symptom |
Single Node Warmstart |
Environment |
NA |
Trigger |
Volume deletion |
Workaround |
NA |
|
9.1.0.0 |
Comprestimator |
SVAPAR-93445 |
FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 |
Suggested
|
A single node warmstart may occur due to a very low-probability timing window related to NVMe drive management.
Symptom |
Single Node Warmstart |
Environment |
Systems with NVMe Drives |
Trigger |
None |
Workaround |
None |
|
9.1.0.0 |
No Specific Feature |