SVAPAR-140080 |
All |
HIPER |
Tier 2 warmstarts ending with nodes in service state while processing a long list of expired snapshots. |
Symptom |
Loss of Access to Data |
Environment |
Systems running code levels 8.6.1.x, 8.6.2.x, 8.6.3.x or 8.7.0.0 that use either Safeguarded Copy (SGC) or volume group snapshots. |
Trigger |
This is unlikely to be triggered by user activity alone (e.g. by running 'rmsnapshot' on a large number of snapshots). The most likely trigger is a system already affected by SVAPAR-140079, where the client then runs 'rmsnapshot' on one or more snapshots. This unblocks the scheduler and prompts it to process ('rmsnapshot') all the snapshots that expired while it was blocked. |
Workaround |
If the system is already affected by SVAPAR-140079, avoid running 'rmsnapshot' commands. The risk can be temporarily mitigated by suspending snapshot policies at a system level. ('chsystem -snapshotpolicysuspended yes') |
|
8.7.0.1 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-131228 |
All |
Critical |
A RAID array temporarily goes offline due to delays in fetching the encryption key when a node starts up. |
Symptom |
Offline Volumes |
Environment |
Systems using encryption configured to use encryption key servers |
Trigger |
Start up of a node |
Workaround |
Reduce the number of configured key servers to two. |
|
8.7.0.1 |
Distributed RAID, Encryption, RAID |
SVAPAR-135022 |
All |
Critical |
When using Policy Based High Availability, a storage partition can become suspended due to a disagreement in the internal quorum race state between two systems, causing a loss of access to data. |
Symptom |
Loss of Access to Data |
Environment |
Any systems with Policy Based High Availability configured |
Trigger |
None |
Workaround |
None |
|
8.7.0.1 |
Policy-based Replication |
SVAPAR-136256 |
All |
Critical |
Each ethernet port can only have a single management IP address. Attempting to add a second management IP to the same port may cause multiple node warmstarts and a loss of access to data. |
Symptom |
Loss of Access to Data |
Environment |
None |
Trigger |
Attempting to add a second management IP address to a port |
Workaround |
Do not attempt to add a second management IP address to a port |
|
8.7.0.1 |
Reliability Availability Serviceability |
SVAPAR-137485 |
FS5000 |
Critical |
Reseating a FlashSystem 50xx node canister at 8.7.0.0 may cause the partner node to reboot, causing temporary loss of access to data. |
Symptom |
Loss of Access to Data |
Environment |
FS50xx running 8.7.0.0 |
Trigger |
Reseating a canister |
Workaround |
None |
|
8.7.0.1 |
Reliability Availability Serviceability |
SVAPAR-140079 |
All |
Critical |
The internal scheduler is blocked after requesting more FlashCopy bitmap memory. This will cause the creation of new snapshots and removal of expired snapshots to fail. |
Symptom |
None |
Environment |
Systems running code level 8.6.1.x, 8.6.2.x, 8.6.3.x and 8.7.0.0 and using Safeguarded Copy or volume group snapshots. |
Trigger |
When addsnapshot needs to increase IO group memory for the flashcopy feature. |
Workaround |
A manual 'rmsnapshot' command on an expired snapshot will unblock the scheduler. However, if the system has been in this state for a long time, unblocking the scheduler may trigger SVAPAR-140080. |
|
8.7.0.1 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-141098 |
All |
Critical |
High peak latency causing access loss after recovering from SVAPAR-140079 and SVAPAR-140080. |
Symptom |
Loss of Access to Data |
Environment |
Any system that was exposed to SVAPAR-140079 and SVAPAR-140080 and on which a recovery procedure was performed. |
Trigger |
Background deletion of a large number of expired snapshots. |
Workaround |
After recovery from SVAPAR-140079 and SVAPAR-140080, wait for all expired snapshots to be deleted before starting host IO. |
|
8.7.0.1 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-141112 |
All |
Critical |
When using policy-based high availability and volume group snapshots, it is possible for an I/O timeout condition to trigger node warmstarts. This can happen if a system is disconnected for an extended period, and is then brought back online after a large amount of host I/O to the HA volumes. |
Symptom |
Single Node Warmstart |
Environment |
Systems using policy-based high availability and volume-group snapshots (including safeguarded copy) |
Trigger |
Policy-based high availability systems being disconnected for an extended period |
Workaround |
None |
|
8.7.0.1 |
Policy-based Replication |
SVAPAR-141920 |
All |
Critical |
Under specific scenarios, adding a snapshot to a volume group could trigger a cluster recovery, causing a brief loss of access to data. |
Symptom |
Loss of Access to Data |
Environment |
Systems running volume group snapshots |
Trigger |
Starting a new snapshot for a volume group using the '-pool' parameter. Additionally, the volumes in the volume group need to have a vdisk copy ID equal to 1. |
Workaround |
Avoid using the '-pool' parameter when taking the snapshot, or add a new vdisk copy with '-autodelete' parameter to the volumes in the volume group that have a vdisk copy ID equal to 1. |
|
8.7.0.1 |
FlashCopy |
SVAPAR-142040 |
All |
Critical |
A timing window related to logging of capacity warnings may cause multiple node warmstarts on a system with low free physical capacity on an FCM array. |
Symptom |
Loss of Access to Data |
Environment |
Systems with FCM drives |
Trigger |
Timing window combined with low physical free capacity |
Workaround |
Avoid allowing an FCM array to come close to running out of physical capacity |
|
8.7.0.1 |
Reliability Availability Serviceability |
SVAPAR-142045 |
All |
Critical |
A system which was previously running pre-8.6.0 software, and is now using policy-based high availability, may experience multiple node warmstarts when a PBHA failover is requested by the user. |
Symptom |
Loss of Access to Data |
Environment |
Systems using PBHA after upgrade from pre-8.6.0 software. |
Trigger |
User-requested PBHA failover |
Workaround |
Do not initiate a PBHA failover until the software has been upgraded |
|
8.7.0.1 |
Policy-based Replication |
SVAPAR-143480 |
All |
Critical |
When using asynchronous policy-based replication on low-bandwidth links with snapshot clone/restore, undetected data corruption may occur. This issue only affects 8.7.0.0. |
Symptom |
Data Integrity Loss |
Environment |
8.7.0.0 systems using policy based replication with snapshot clone/restore |
Trigger |
None |
Workaround |
None |
|
8.7.0.1 |
Policy-based Replication |
SVAPAR-111173 |
All |
High Importance |
Loss of access when two drives experience slowness at the same time. |
Symptom |
Loss of Access to Data |
Environment |
Can occur on any SAS based system, including SAS expansion enclosures. |
Trigger |
Heavily loaded SAS spinning drives |
Workaround |
Try to reduce the workload |
|
8.7.0.1 |
RAID |
SVAPAR-136427 |
All |
High Importance |
When deleting multiple older snapshot versions whilst simultaneously creating new snapshots, the system can run out of bitmap space, resulting in a bad snapshot map, repeated asserts, and a loss of access. |
Symptom |
Loss of Access to Data |
Environment |
This can affect any system configured with FlashCopy |
Trigger |
Deletion of multiple mid-age snapshots and creating new snapshots. |
Workaround |
Avoid deleting multiple older snapshot versions whilst simultaneously creating new snapshots |
|
8.7.0.1 |
FlashCopy |
SVAPAR-137512 |
All |
High Importance |
A single-node warmstart may occur during a shrink operation on a thin-provisioned volume. This is caused by a timing window in the cache component. |
Symptom |
Single Node Warmstart |
Environment |
Systems at 8.7.0.0 with thin-provisioned volumes |
Trigger |
Shrinking of thin-provisioned volume (possibly because of a FlashCopy mapping being started). |
Workaround |
None |
|
8.7.0.1 |
Cache |
SVAPAR-138214 |
All |
High Importance |
When a volume group is assigned to an ownership group, creating a snapshot and populating a new volume group from the snapshot will cause a warmstart of the configuration node when 'lsvolumepopulation' is run. |
Symptom |
Single Node Warmstart |
Environment |
Using a combination of volume groups, volume group snapshot policies and ownership groups. Veeam 12.1 with Spectrum Virtualize 8.5.1 and higher. |
Trigger |
Creating a snapshot of a volume that has ownership. Populating a new volume group from the snapshot as an ownership group user. Running 'lsvolumepopulation' as an ownership group user. |
Workaround |
Either use superuser credentials to perform the tasks, or remove the ownership group and associated user group for objects that will have these actions performed on them. |
|
8.7.0.1 |
FlashCopy |
SVAPAR-139247 |
All |
High Importance |
Very heavy write workload to a thin-provisioned volume may cause a single-node warmstart, due to a low-probability deadlock condition. |
Symptom |
Single Node Warmstart |
Environment |
Systems on 8.7.0 with thin-provisioned volumes |
Trigger |
Heavy write workload to a thin-provisioned volume |
Workaround |
None |
|
8.7.0.1 |
Thin Provisioning |
SVAPAR-139260 |
All |
High Importance |
Heavy write workloads to thin-provisioned volumes may result in poor performance, due to a lack of destage resource. |
Symptom |
Performance |
Environment |
Systems using thin-provisioned volumes in standard pools |
Trigger |
Heavy write workloads |
Workaround |
None |
|
8.7.0.1 |
Thin Provisioning |
SVAPAR-141684 |
All |
High Importance |
Prevent drive firmware upgrade with both '-force' and '-all' parameters, to avoid multiple drives going offline due to lack of redundancy. |
Symptom |
Data Integrity Loss |
Environment |
All |
Trigger |
Upgrading drive firmware with the '-force' and '-all' parameters at the same time |
Workaround |
If the force flag is required to upgrade drive firmware, each drive must be upgraded individually. |
|
8.7.0.1 |
Drives |
SVAPAR-144068 |
All |
High Importance |
If a volume group snapshot is created at the same time as an existing snapshot is deleting, all nodes may warmstart, causing a loss of access to data. This can only happen if there is insufficient FlashCopy bitmap space for the new snapshot. |
Symptom |
Loss of Access to Data |
Environment |
Systems using volume group snapshots |
Trigger |
Creating a new volume group snapshot while another is deleting. |
Workaround |
Increase the FlashCopy bitmap space using chiogrp, so that there is sufficient space for the new snapshot. |
|
8.7.0.1 |
Snapshots |
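The 'chiogrp' workaround above can be sketched as follows. This is an illustrative sequence only: the 'flash' feature name and '-size' parameter follow the documented 'chiogrp' syntax, but the 60 MiB value and I/O group name are hypothetical examples, and the system CLI commands are echoed rather than executed here.

```shell
# Sketch of the chiogrp workaround (values are examples, not recommendations).
# Inspect current and maximum FlashCopy bitmap memory for the I/O group first:
inspect_cmd="lsiogrp io_grp0"
# Then raise the FlashCopy ('flash') bitmap memory, e.g. to 60 MiB:
resize_cmd="chiogrp -feature flash -size 60 io_grp0"
echo "$inspect_cmd"
echo "$resize_cmd"
```

The size must be large enough for both the deleting snapshot and the new one; check 'lsiogrp' output before choosing a value.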
SVAPAR-135742 |
All |
Suggested |
A temporary network issue may cause unexpected 1585 DNS connection errors after upgrading to 8.6.0.4, 8.6.3.0 or 8.7.0.0. This is due to a shorter DNS request timeout in these PTFs. |
Symptom |
Configuration |
Environment |
Systems with a DNS server configured. |
Trigger |
Temporary network issue causing a DNS request timeout. |
Workaround |
Check the network for any issues that may cause packet loss, which could lead to a timeout on DNS requests. The 'traceroute' command from the cluster to the DNS server can help determine whether some routes are slower than others. |
|
8.7.0.1 |
Reliability Availability Serviceability |
SVAPAR-138418 |
All |
Suggested |
Snap collections triggered by Storage Insights over cloud callhome time out before they have completed. |
Symptom |
None |
Environment |
Systems using Cloud Callhome |
Trigger |
Remotely collecting a snap via callhome |
Workaround |
Collect the snap locally and upload to IBM |
|
8.7.0.1 |
Reliability Availability Serviceability |
SVAPAR-138859 |
FS5000, FS5100, FS5200 |
Suggested |
Collecting a Type 4 support package (Snap Type 4: Standard logs plus new statesaves) in the GUI can trigger an out-of-memory event, causing the GUI process to be killed. |
Symptom |
None |
Environment |
FlashSystem 5xxx |
Trigger |
Triggering a Snap Type 4: Standard logs plus new statesaves, via the GUI. |
Workaround |
Prepare and trigger livedumps via the CLI, then take an option 3 snap via either the GUI or CLI. In the failure scenario the GUI will hang, but the GUI process will respawn and the snap collection will complete successfully in the background. The snap file can then be copied with scp or via the GUI. |
|
8.7.0.1 |
Support Data Collection |
SVAPAR-140994 |
All |
Suggested |
Expanding a volume via the GUI fails with CMMVC7019E because the volume size is not a multiple of 512 bytes. |
Symptom |
Configuration |
Environment |
None |
Trigger |
Attempting to expand a volume by a value that is not a multiple of 512 bytes |
Workaround |
Expand the volume via the CLI using the 'expandvdisksize' command, ensuring that you increase by 512 byte multiples |
|
8.7.0.1 |
Reliability Availability Serviceability |
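The CLI workaround above requires the expansion to be a whole multiple of 512 bytes. A minimal shell sketch of rounding an arbitrary byte count up to the next 512-byte boundary before building the 'expandvdisksize' command; the volume name 'vdisk0' is hypothetical, and the command is only echoed here rather than run on the system CLI.

```shell
# Round a requested expansion (in bytes) up to the next 512-byte multiple.
requested=1000000
aligned=$(( (requested + 511) / 512 * 512 ))   # 1000448, evenly divisible by 512
# Build the CLI command with the aligned size (volume name is hypothetical):
cmd="expandvdisksize -size ${aligned} -unit b vdisk0"
echo "$cmd"
```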
SVAPAR-141019 |
All |
Suggested |
The GUI crashes when a user group with role 3SiteAdmin exists alongside remote users. |
Symptom |
Configuration |
Environment |
Systems configured for 3-site replication |
Trigger |
A user group with role 3SiteAdmin exists alongside a remote user |
Workaround |
Either remove the remote user or, if 3-site replication is not in use, remove the user group with role 3SiteAdmin |
|
8.7.0.1 |
3-Site using HyperSwap or Metro Mirror, Graphical User Interface |
SVAPAR-141467 |
All |
Suggested |
SNMPv3 traps may not be processed properly by the SNMP server configured in the system. |
Symptom |
Configuration |
Environment |
Systems running SNMPv3 |
Trigger |
None |
Workaround |
Recreate the SNMP definition of the storage system on the SNMP server; this will make trap processing work again. |
|
8.7.0.1 |
System Monitoring |
SVAPAR-141876 |
All |
Suggested |
The GUI does not offer the option to create GM or GMCV relationships, even after remote_copy compatibility mode has been enabled. |
Symptom |
Configuration |
Environment |
Systems running code level 8.7.0.0 with compatibility mode enabled for remote_copy |
Trigger |
Attempting to create GM or GMCV relationships from the GUI. |
Workaround |
Use the CLI or Copy Services Manager (CSM) instead |
|
8.7.0.1 |
Global Mirror, Global Mirror With Change Volumes, Graphical User Interface |
SVAPAR-141937 |
All |
Suggested |
In a Policy-based high availability configuration, when a SCSI Compare and Write command is sent to the non-Active Management System, and communication is lost between the systems while it is being processed, a node warmstart may occur. |
Symptom |
Single Node Warmstart |
Environment |
Policy-based high availability |
Trigger |
None |
Workaround |
None |
|
8.7.0.1 |
Policy-based Replication |
SVAPAR-105861 |
SVC |
HIPER |
A cluster recovery may occur when an attempt is made to create a mirrored snapshot with insufficient volume mirroring bitmap space in the IO group. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using mirrored snapshots, including safeguarded snapshots |
Trigger |
Attempting to create a mirrored snapshot without sufficient volume mirroring bitmap space |
Workaround |
Adjust snapshot policy configuration to ensure that the maximum bitmap space is sufficient |
|
8.7.0.0 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots, Volume Mirroring |
SVAPAR-116592 |
All |
HIPER |
If a V5000E or a FlashSystem 5000 is configured with multiple compressed IP partnerships, and one or more of the partnerships is with a non-V5000E or non-FS5000 system, it may repeatedly warmstart due to a lack of compression resources. |
Symptom |
Multiple Node Warmstarts |
Environment |
V5000E or FlashSystem 5000 configured with multiple compressed IP partnerships, where one or more of the partnerships is with a non-V5000E or non-FS5000 system. Note that a FlashSystem 5200 is not included in FS5000 here. |
Trigger |
Configuring a V5000E or FlashSystem 5000 with multiple compressed IP partnerships, where one or more of the partnerships is with a non-V5000E or non-FS5000 system. |
Workaround |
Turn off compression for the partnership with the non-V5000E or non-FS5000 system. |
|
8.7.0.0 |
IP Replication |
SVAPAR-117738 |
All |
HIPER |
The configuration node may go offline with node error 565, due to a full /tmp partition on the boot drive. |
Symptom |
Loss of Access to Data |
Environment |
Systems running 8.6.2 |
Trigger |
None |
Workaround |
Reboot the node to bring it online. |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-130438 |
All |
HIPER |
Upgrading a system to 8.6.2 or higher with a single portset assigned to an IP replication partnership may cause all nodes to warmstart when making a change to the partnership. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with IP replication partnerships. |
Trigger |
Upgrading a system with a single portset assigned to an IP replication partnership and making a change to the partnership. |
Workaround |
None |
|
8.7.0.0 |
IP Replication |
SVAPAR-94179 |
FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500 |
HIPER |
Faulty hardware within or connected to the CPU can result in a reboot of the affected node. However, this can sometimes also cause a reboot of the partner node. |
Symptom |
Loss of Access to Data |
Environment |
All FlashSystem and V7000 Gen3 systems, but not SVC |
Trigger |
Node hardware fault |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
HU02585 |
All |
Critical |
An unstable connection between the Storage Virtualize system and an external virtualized storage system can sometimes result in a cluster recovery. |
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
An unstable connection between the Storage Virtualize system and an external virtualized storage system can cause objects to be discovered out of order, resulting in a cluster recovery |
Workaround |
Stabilise the SAN fabric by replacing any failing hardware, such as a faulty SFP |
|
8.7.0.0 |
Backend Storage |
SVAPAR-100127 |
All |
Critical |
The Service Assistant GUI Node rescue option incorrectly performs the node rescue on the local node instead of the node selected in the GUI. |
Symptom |
Single Node Warmstart |
Environment |
Any cluster running on 8.5.0.0 code and above. |
Trigger |
This problem can happen if the user is on the Service Assistant GUI of one node but selects another node for node rescue. The node rescue will be performed on the local node rather than the selected node |
Workaround |
Use the CLI 'satask rescuenode -force <node-panel-id>' command to select the correct node for the node rescue, or log on to the Service Assistant GUI of the node that requires the rescue (if accessible), so that the node in need is the local node |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-100564 |
All |
Critical |
On code level 8.6.0.0, multiple node warmstarts will occur if a user attempts to remove the site ID from a host that has Hyperswap volumes mapped to it. |
Symptom |
Multiple Node Warmstarts |
Environment |
Hyperswap cluster on 8.6.0.0 with Hyperswap volumes mapped to one or more hosts |
Trigger |
Attempting to remove the site ID from a host that has Hyperswap volumes mapped to it |
Workaround |
Convert all the mapped Hyperswap volumes to basic volumes, then remove the site ID |
|
8.7.0.0 |
HyperSwap |
SVAPAR-100871 |
FS5000, FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
Critical |
Removing an NVMe host followed by running the 'lsnvmefabric' command causes a recurring single node warmstart. |
Symptom |
Single Node Warmstart |
Environment |
Environments that have NVMe hosts configured, enhanced callhome enabled, or integration with external orchestration components such as Red Hat OpenShift |
Trigger |
Running the 'lsnvmefabric' command via the CLI. This can either be run manually, or run via Callhome or external orchestration. |
Workaround |
Avoid running the 'lsnvmefabric' command after removing an NVMe host |
|
8.7.0.0 |
NVMe |
SVAPAR-104533 |
All |
Critical |
Systems that encounter multiple node asserts, followed by a system T3 recovery, may experience errors repairing Data Reduction Pools. |
Symptom |
Loss of Access to Data |
Environment |
Systems using Data Reduction Pools |
Trigger |
Multiple node asserts followed by a system T3 recovery |
Workaround |
None |
|
8.7.0.0 |
Data Reduction Pools |
SVAPAR-105430 |
All |
Critical |
When hardware compression is suspended mid-IO to a DRP compressed volume, the IO may hang until an internal timeout is hit and a node warmstarts. |
Symptom |
Single Node Warmstart |
Environment |
GEN3 or later hardware with DRP compressed volumes. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Compression, Data Reduction Pools |
SVAPAR-107270 |
All |
Critical |
If an upgrade from a level below 8.6.x to 8.6.0 or 8.6.1 commits whilst FlashCopy is preparing to start a map, a bad state is introduced that prevents the FlashCopy maps from starting. |
Symptom |
Loss of Redundancy |
Environment |
Systems using Global Mirror with Change Volumes or Policy-based Replication |
Trigger |
Upgrade commit |
Workaround |
Stop replication, or stop the partnership, before starting the upgrade |
|
8.7.0.0 |
Global Mirror With Change Volumes, Policy-based Replication |
SVAPAR-107547 |
All |
Critical |
If there are more than 64 logins to a single Fibre Channel port, and a switch zoning change is made, a single node warmstart may occur. |
Symptom |
Single Node Warmstart |
Environment |
Systems with Fibre Channel adapters |
Trigger |
Switch zoning change with more than 64 logins to a single storage system port. |
Workaround |
Reduce the number of logins to a single storage system port |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-107734 |
All |
Critical |
Issuing IO to a volume in a stopped incremental fcmap, which has recently been expanded and also has a partner fcmap, may cause the nodes to restart. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems configured with Incremental flashcopy and reverse fcmap |
Trigger |
Resizing volumes in incremental partnered fcmaps |
Workaround |
Ensure that both incremental partnered fcmaps are deleted, and then re-create a new pair if you need to resize the volumes. |
|
8.7.0.0 |
FlashCopy |
SVAPAR-110735 |
All |
Critical |
Additional policing has been introduced to ensure that FlashCopy target volumes are not used with policy-based replication. Commands 'chvolumegroup -replicationpolicy' will fail if any volume in the group is the target of a FlashCopy map. 'chvdisk -volumegroup' will fail if the volume is the target of a FlashCopy map, and the volume group has a replication policy. |
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
Running either the 'chvolumegroup -replicationpolicy' or 'chvdisk -volumegroup' commands. |
Workaround |
None |
|
8.7.0.0 |
FlashCopy, Policy-based Replication |
SVAPAR-111257 |
All |
Critical |
If many drive firmware upgrades are performed in quick succession, multiple nodes may go offline with node error 565 due to a full boot drive. |
Symptom |
Loss of Access to Data |
Environment |
Systems running 8.5.5, 8.6.0 or 8.6.1 software, with SAS drives. NVMe drives are not affected by this issue. |
Trigger |
Performing many individual drive firmware upgrades in quick succession. This issue does not occur if a drive firmware upgrade is applied to all drives at once. |
Workaround |
Upgrade all drives at once, instead of upgrading individual drives one at a time. |
|
8.7.0.0 |
Drives |
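The workaround above (upgrading all drives in one operation rather than individually) corresponds to a single 'applydrivesoftware' invocation with the '-all' flag, the same flag referenced by SVAPAR-141684. This is a hedged sketch: the firmware file name is hypothetical, and the system CLI command is echoed rather than executed here.

```shell
# One upgrade covering all drives avoids the repeated per-drive log writes
# that can fill the boot drive (firmware file name is hypothetical).
cmd="applydrivesoftware -file IBM_DRIVE_FIRMWARE.bin -type firmware -all"
echo "$cmd"
```

Note that combining '-all' with '-force' should be avoided, per SVAPAR-141684.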
SVAPAR-111705 |
All |
Critical |
If a Volume Group Snapshot fails and the system has 'snapshotpreserveparent' set to 'yes', this may trigger multiple node warmstarts. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Volume Group Snapshot and have 'snapshotpreserveparent' set to 'yes'. |
Trigger |
An addsnapshot failure. |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-111994 |
All |
Critical |
Certain writes to deduplicated and compressed DRP vdisks may return a mismatch, leading to a DRP pool going offline. |
Symptom |
Loss of Redundancy |
Environment |
DRP with deduplication and compression. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Compression, Data Reduction Pools, Deduplication |
SVAPAR-112007 |
All |
Critical |
Running the 'chsystemlimits' command with no parameters can cause multiple node warmstarts. |
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
Running the 'chsystemlimits' command without any parameters |
Workaround |
Avoid running the 'chsystemlimits' command without any parameters. |
|
8.7.0.0 |
Command Line Interface |
SVAPAR-112107 |
FS9500, SVC |
Critical |
There is an issue that affects PSU firmware upgrades in FS9500 and SV3 systems that can cause an outage. This happens when one PSU fails to download the firmware and another PSU starts to download the firmware. It is a very rare timing window that can be triggered if two PSUs are reseated close in time during the firmware upgrade process. |
Symptom |
Loss of Access to Data |
Environment |
FS9500 or SV3 |
Trigger |
Two PSUs are reseated close in time during the firmware upgrade process |
Workaround |
None |
|
8.7.0.0 |
System Update |
SVAPAR-112707 |
SVC |
Critical |
Marking error 3015 as fixed on an SVC cluster containing SV3 nodes may cause a loss of access to data. For more details, refer to this Flash. |
Symptom |
Loss of Access to Data |
Environment |
Systems containing 214x-SV3 nodes that have been downgraded from 8.6 to 8.5 |
Trigger |
Marking 3015 error as fixed |
Workaround |
Do not attempt to repair the 3015 error, contact IBM support |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-112939 |
All |
Critical |
A loss of disk access on one pool may cause IO to hang on a different pool due to a cache messaging hang. |
Symptom |
Loss of Access to Data |
Environment |
System with multiple storage pools. |
Trigger |
Loss of disk access to one pool. |
Workaround |
None |
|
8.7.0.0 |
Cache |
SVAPAR-115505 |
All |
Critical |
Expanding a volume in a Flashcopy map and then creating a dependent incremental forward and reverse Flashcopy map may cause a dual node warmstart when the incremental map is started. |
Symptom |
Loss of Access to Data |
Environment |
Systems using incremental reverse Flashcopy mappings. |
Trigger |
Expanding a volume in a Flashcopy map and then creating and starting a dependent incremental forward and reverse Flashcopy map. |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-120391 |
All |
Critical |
Removing an incremental FlashCopy mapping from a consistency group, after a previous error when starting the FlashCopy consistency group caused a node warmstart, may trigger additional node asserts. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using incremental copy consistency groups. |
Trigger |
Removing an incremental FlashCopy mapping from a consistency group, after a previous error when starting the FlashCopy consistency group caused a node warmstart. |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-120397 |
All |
Critical |
A node may not shutdown cleanly on loss of power if it contains 25Gb Ethernet adapters, necessitating a system recovery. |
Symptom |
Loss of Access to Data |
Environment |
Systems with 25Gb Ethernet adapters. |
Trigger |
Loss of power to the system. |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-123874 |
All |
Critical |
There is a timing window when using async-PBR or RC GMCV, with Volume Group snapshots, which results in the new snapshot VDisk mistakenly being taken offline, forcing the production volume offline for a brief period. |
Symptom |
Offline Volumes |
Environment |
Policy-based Replication or Global Mirror with Change Volumes |
Trigger |
A snapshot begins at the same moment that the change volume mappings are being prepared and about to trigger. |
Workaround |
None |
|
8.7.0.0 |
Global Mirror With Change Volumes, Policy-based Replication |
SVAPAR-123945 |
All |
Critical |
If a system SSL certificate is installed with the extension CA True, it may trigger multiple node warmstarts. |
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
Installing a system SSL certificate with the extension CA True |
Workaround |
None |
|
8.7.0.0 |
Encryption |
SVAPAR-125416 |
All |
Critical |
If the vdisk with ID 0 is deleted and then recreated, and is added to a volume group with an HA replication policy, its internal state may become invalid. If a node warmstart or upgrade occurs in this state, this may trigger multiple node warmstarts and loss of access. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems using Policy-Based High Availability |
Trigger |
Deletion then recreation of vdisk ID 0, followed by a node warmstart or upgrade. |
Workaround |
Avoid adding vdisk ID 0 to a volume group with an HA replication policy. |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-126737 |
All |
Critical |
If a user that does not have SecurityAdmin role runs the command 'rmmdiskgrp -force' on a pool with mirrored VDisks, a T2 recovery may occur. |
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with mirrored VDisks on code level 8.4.2 and higher |
Trigger |
Running the 'rmmdiskgrp -force' command as a non-SecurityAdmin user with mirrored vdisks configured |
Workaround |
Run the 'rmmdiskgrp -force' command as a user with SecurityAdmin role, e.g. superuser |
|
8.7.0.0 |
|
SVAPAR-126767 |
All |
Critical |
Upgrading to 8.6.0 when iSER clustering is configured may cause multiple node warmstarts to occur, if node canisters have been swapped between slots since the system was manufactured. |
Symptom |
Loss of Access to Data |
Environment |
Systems using iSER clustering, where node canisters have been swapped. |
Trigger |
Upgrading to 8.6.0.0 when iSER clustering is configured |
Workaround |
None |
|
8.7.0.0 |
iSCSI |
SVAPAR-127833 |
All |
Critical |
Temperature warning is reported against the incorrect Secondary Expander Module (SEM). |
Symptom |
Loss of Access to Data |
Environment |
High density SAS enclosure attached (92F/92G) |
Trigger |
Temperature alerts being logged against a Secondary Expander Module |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-127836 |
All |
Critical |
Running some Safeguarded Copy commands can cause a cluster recovery on some platforms. |
Symptom |
Loss of Access to Data |
Environment |
Systems using either Safeguarded copy 1.0 or 2.0 |
Trigger |
Running SGC 1.0 commands on platforms that only support SGC 2.0 or vice versa can cause a cluster recovery. |
Workaround |
Do not run unsupported SGC-related commands |
|
8.7.0.0 |
Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-128052 |
All |
Critical |
A node assert may occur if a host sends a login request to a node when the host is being removed from the cluster with the '-force' parameter. |
Symptom |
Multiple Node Warmstarts |
Environment |
Any host that uses the NVME protocol |
Trigger |
Removing a node by using the '-force' parameter |
Workaround |
Do not use the '-force' parameter |
|
8.7.0.0 |
Hosts, NVMe |
SVAPAR-128401 |
FS5000 |
Critical |
Upgrade to 8.6.3 may cause loss of access to iSCSI hosts, on FlashSystem 5015 and FlashSystem 5035 systems with a 4-port 10Gb ethernet adapter. |
Symptom |
Loss of Access to Data |
Environment |
FlashSystem 5015 and FlashSystem 5035 systems with a 4-port 10Gb ethernet adapter |
Trigger |
Upgrading to 8.6.3 |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-128626 |
All |
Critical
|
A node may warmstart or fail to start FlashCopy maps, in volume groups that contain Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any systems configured with volume groups containing both Remote Copy primary and secondary volumes, or both copies of a Hyperswap volume. |
Trigger |
Starting a FlashCopy map in an affected volume group |
Workaround |
None |
|
8.7.0.0 |
FlashCopy, Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror |
SVAPAR-128912 |
All |
Critical
|
A T2 recovery may occur when attempting to take a snapshot from a volume group that contains volumes from multiple I/O groups, and one of the I/O groups is offline.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with volume groups containing volumes from multiple I/O groups |
Trigger |
Taking a snapshot while one I/O group is offline. This includes snapshots taken automatically based on a scheduled policy |
Workaround |
Suspend the snapshot policy |
|
8.7.0.0 |
FlashCopy, Safeguarded Copy & Safeguarded Snapshots |
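The 'suspend the snapshot policy' workaround can be applied at the system level. A minimal sketch, assuming the Storage Virtualize CLI (the same command is named as a mitigation for SVAPAR-140080); scheduling should be resumed once the offline I/O group is recovered:

```shell
# Suspend scheduled snapshot policies system-wide while an I/O group is offline
chsystem -snapshotpolicysuspended yes

# ...recover the offline I/O group, then resume scheduled snapshots...
chsystem -snapshotpolicysuspended no
```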
SVAPAR-128913 |
All |
Critical
|
Multiple node asserts occurred after a VDisk copy in a data reduction pool was removed while an I/O group was offline and a T2 recovery occurred
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems with multiple IO groups and data reduction pools |
Trigger |
Removal of a vdisk copy in a data reduction pool while an IO group is offline, followed by a T2 recovery |
Workaround |
None |
|
8.7.0.0 |
Data Reduction Pools |
SVAPAR-129298 |
All |
Critical
|
A managed disk group went offline during queueing of fibre rings on the overflow list, causing the node to assert.
(show details)
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
RAID |
SVAPAR-130553 |
All |
Critical
|
Converting a 3-Site AuxFar volume to HyperSwap results in multiple node asserts
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
3-Site system with HyperSwap in the AuxFar site configured |
Trigger |
Converting a 3-Site AuxFar volume to HyperSwap |
Workaround |
Avoid converting 3-Site AuxFar volumes to HyperSwap |
|
8.7.0.0 |
3-Site using HyperSwap or Metro Mirror, HyperSwap |
SVAPAR-131259 |
All |
Critical
|
Removing the replication policy after the volume group was made independent exposed an issue that caused the FlashCopy internal state to become incorrect, so subsequent FlashCopy actions failed.
(show details)
Symptom |
None |
Environment |
Systems configured with Volume Groups, Policy-based replication and FlashCopy. |
Trigger |
A volume group used for Policy-based Replication has its target side made independent, then has its replication policy removed. After these actions, any FlashCopy operation on volumes in the volume group hits the error. |
Workaround |
None |
|
8.7.0.0 |
FlashCopy, Policy-based Replication |
SVAPAR-132027 |
All |
Critical
|
An incorrect 'acknowledge' status for an initiator SCSI command is sent from the SCSI target side when no sense data was actually transferred. This may cause a node to warmstart.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Environments that have 3 systems partnered with one of the systems running a code level older than 8.3.1.0. |
Trigger |
3-way partnership and one of the systems is running a code level older than 8.3.1.0. |
Workaround |
Ensure that all systems in the partnership are running a code level higher than 8.3.1.x |
|
8.7.0.0 |
|
SVAPAR-133392 |
All |
Critical
|
In rare situations involving multiple concurrent snapshot restore operations, an undetected data corruption may occur.
(show details)
Symptom |
Data Integrity Loss |
Environment |
Configurations using snapshot restore feature on 8.6.2 or higher. |
Trigger |
Multiple concurrent snapshot restore operations. |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-133442 |
All |
Critical
|
When using asynchronous policy based replication in DR test mode, if the DR volume group is put into production use (the volume group is made independent), an undetected data corruption may occur.
(show details)
Symptom |
Data Integrity Loss |
Environment |
Configurations using asynchronous policy based replication in DR test mode. |
Trigger |
DR volume group is put into production use (the volume group is made independent) |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-98184 |
All |
Critical
|
When a Volume Group Snapshot clone is added to a replication policy before the clone is complete, the system may repeatedly warmstart when the Policy-based Replication volume group is changed to independent access
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using Volume Group Snapshot Clones and Policy-Based Replication |
Trigger |
Changing an affected Policy-Based Replication Volume Group to independent access |
Workaround |
Wait for the clone to complete before adding the volumes to a replication policy |
|
8.7.0.0 |
FlashCopy, Policy-based Replication |
SVAPAR-98612 |
All |
Critical
|
Creating a volume group snapshot with an invalid I/O group value may trigger multiple node warmstarts
(show details)
Symptom |
Loss of Access to Data |
Environment |
Systems using volume group snapshots |
Trigger |
Using an invalid I/O group value when creating a volume group snapshot |
Workaround |
Make sure that you specify the correct I/O group value |
|
8.7.0.0 |
FlashCopy |
HU02159 |
All |
High Importance
|
A rare issue caused by unexpected I/O in the upper cache can cause a node to warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Offline target disk |
Trigger |
Target disk must be offline for this situation to occur. |
Workaround |
None |
|
8.7.0.0 |
Cache |
SVAPAR-100162 |
All |
High Importance
|
Some host operating systems, such as Windows, have recently started to use 'mode select page 7'. IBM Storage does not support this mode, and receiving it can cause a node warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any cluster running on 8.4.0.0 or higher |
Trigger |
If a host uses 'mode select page 7' |
Workaround |
None |
|
8.7.0.0 |
Hosts |
SVAPAR-100977 |
All |
High Importance
|
When a zone containing NVMe devices is enabled, a node warmstart might occur.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system running 8.5.0.5 |
Trigger |
Enabling a zone with a host that has approximately 1,000 vdisks mapped |
Workaround |
Make sure that the created zone does not contain NVMe devices |
|
8.7.0.0 |
NVMe |
SVAPAR-102573 |
All |
High Importance
|
On systems using Policy-Based Replication and Volume Group Snapshots, some CPU cores may have high utilization due to an issue with the snapshot cleaning algorithm. This can impact performance for replication and host I/O
(show details)
Symptom |
Performance |
Environment |
Systems using Policy-Based Replication and Volume Group Snapshots |
Trigger |
Snapshot mappings with low cleaning workload |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-104159 |
All |
High Importance
|
Nodes configured with 32GB or less of RAM, and specific 25Gb ethernet adapters, under some circumstances may run out of memory. This can cause a single node warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Nodes running 8.6.0.0 with 32GB or less of RAM, and specific 25Gb ethernet adapters |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-104250 |
All |
High Importance
|
NVMe CaW (Compare and Write) commands can incorrectly enter an invalid state, causing the node to assert to clear the bad condition
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any host running NVMe |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Hosts, NVMe |
SVAPAR-105727 |
All |
High Importance
|
An upgrade within the 8.5.0 release stream from 8.5.0.5 or below, to 8.5.0.6 or above, can cause an assert of down-level nodes during the upgrade, if volume mirroring is heavily utilised
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any system running volume mirroring with either a large number of volumes or high syncrate |
Trigger |
Upgrading from 8.5.0.5 or below to 8.5.0.6 or above with heavy volume mirroring workload |
Workaround |
Disable mirroring, or reduce syncrate to a low value during the upgrade process |
|
8.7.0.0 |
Volume Mirroring |
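The mirroring workaround can be sketched as follows. This is a hedged example: 'chvdisk -syncrate' is the standard Storage Virtualize command for adjusting the mirror synchronisation rate, but the volume ID (0) and rate values are illustrative:

```shell
# Before starting the upgrade, reduce the mirror synchronisation rate
# (a low value such as 1 minimises mirroring traffic during the upgrade)
chvdisk -syncrate 1 0

# After the upgrade completes, restore the previous rate (e.g. 50)
chvdisk -syncrate 50 0
```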
SVAPAR-106874 |
FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
High Importance
|
A timing window may cause a single node warmstart, while recording debug information about a replicated host write. This can only happen on a system using Policy Based Replication.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running PBR and code 8.6.0.0 or 8.6.0.1 |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-107815 |
All |
High Importance
|
An issue in 3-Site configurations, when adding snapshots at the AuxFar site, causes the node to warmstart
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any system that is running 8.6.0.0 with 3-Site configured |
Trigger |
Adding snapshots at the AuxFar site when 3-Site is configured |
Workaround |
None |
|
8.7.0.0 |
3-Site using HyperSwap or Metro Mirror |
SVAPAR-107866 & SVAPAR-110742 |
All |
High Importance
|
A system is unable to send email to the email server because the password contains a hash ('#') character.
(show details)
Symptom |
None |
Environment |
Configured email server requires username and password for authentication. |
Trigger |
Password contains hash characters |
Workaround |
Update the user password to remove any hash characters. |
|
8.7.0.0 |
|
SVAPAR-108715 |
All |
High Importance
|
The Service Assistant GUI on 8.5.0.0 and above incorrectly performs actions on the local node instead of the node selected in the GUI.
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
The workaround is to perform Service Assistant GUI actions using the Service Assistant CLI instead. Alternatively, in the Service Assistant GUI, select a node other than the one you are on; when you submit the command, the action will be performed on the local node instead. |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-108831 |
FS9500, SVC |
High Importance
|
FS9500 and SV3 nodes may not boot with the minimum configuration of 2 DIMMs.
(show details)
Symptom |
Single Node Warmstart |
Environment |
FS9500 or SV3 nodes with 2 DIMMs in the node. |
Trigger |
FS9500 or SV3 nodes that are configured with 2 DIMMs. |
Workaround |
Ensure that the node has a minimum of 8 DIMMs installed |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-109385 |
All |
High Importance
|
When one node has a hardware fault involving a faulty PCI switch, the partner node can repeatedly assert until it enters a 564 status, resulting in an outage
(show details)
Symptom |
Loss of Access to Data |
Environment |
Any FlashSystem |
Trigger |
This can occur during upgrade, although this aspect is yet to be confirmed |
Workaround |
Remove the failing node and then reboot the asserting node |
|
8.7.0.0 |
|
SVAPAR-110426 |
All |
High Importance
|
When a security admin other than superuser runs security patch related commands 'lspatch' and 'lssystempatches' this can cause a node to warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
Any user that is not Superuser running a security patch command |
Workaround |
Use superuser for security patching commands. |
|
8.7.0.0 |
Security |
SVAPAR-110743 |
All |
High Importance
|
Email stuck in the mail queue delayed the 'upgrade commit was finished' message, causing 3 out of 4 nodes to warmstart and then rejoin the cluster automatically within three minutes.
(show details)
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
Email stuck in the mail queue causing a delay in the 'upgrade commit was finished' message being sent. |
Workaround |
None |
|
8.7.0.0 |
System Update |
SVAPAR-110765 |
All |
High Importance
|
In a 3-Site configuration, the config node can be lost if the 'stopfcmap' or 'stopfcconsistgrp' commands are run with the '-force' parameter
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any cluster with 3-Site configuration |
Trigger |
Running 'stopfcmap' or 'stopfcconsistgrp' with '-force' on FlashCopy maps whose VDisks are in a 3-Site configuration. |
Workaround |
Do not use the '-force' parameter when running either the 'stopfcmap' or 'stopfcconsistgrp' commands. |
|
8.7.0.0 |
3-Site using HyperSwap or Metro Mirror |
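Per the workaround above, stop FlashCopy maps in 3-Site configurations without '-force'. A minimal sketch; the map and consistency group names are illustrative placeholders:

```shell
# Stop a FlashCopy map gracefully (no '-force' parameter)
stopfcmap fcmap0

# Stop a FlashCopy consistency group gracefully
stopfcconsistgrp fccstgrp0
```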
SVAPAR-110819 & SVAPAR-113122 |
All |
High Importance
|
A single-node warmstart may occur when a Fibre Channel port is disconnected from one fabric, and added to another. This is caused by a timing window in the FDMI discovery process.
(show details)
Symptom |
Single Node Warmstart |
Environment |
More than one Fibre Channel switch is connected to the same FlashSystem cluster |
Trigger |
Disconnecting a Fibre Channel port from one switch and connecting it to another switch. |
Workaround |
None |
|
8.7.0.0 |
Fibre Channel |
SVAPAR-111812 |
All |
High Importance
|
Systems with 8.6.0 or later software may fail to complete 'lsvdisk' commands, if a single SSH session runs multiple 'lsvdisk' commands piped to each other. This can lead to failed login attempts for the GUI and CLI, and is more likely to occur if the system has more than 400 volumes.
(show details)
Symptom |
Configuration |
Environment |
Systems with 8.6.0 or later software. |
Trigger |
Unusual use of nested svcinfo commands on the CLI. |
Workaround |
Avoid nested svcinfo commands. |
|
8.7.0.0 |
Command Line Interface |
SVAPAR-111996 |
FS9500, SVC |
High Importance
|
After upgrading to a level which contains new battery firmware, the battery may be offline after the upgrade.
(show details)
Symptom |
Loss of Redundancy |
Environment |
FS9500 or SV3 only |
Trigger |
Upgrading to 8.5.0.8+ or 8.6.0.0+ |
Workaround |
A utility is available to fix this in the field. |
|
8.7.0.0 |
Reliability Availability Serviceability, System Update |
SVAPAR-112119 |
All |
High Importance
|
Volumes can go offline due to an out-of-space condition. This can cause the node to warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
Out of space condition |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-112203 |
All |
High Importance
|
A node warmstart may occur when removing a volume from a volume group which uses policy-based Replication.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with policy-based Replication |
Trigger |
Removing a volume from a volume group which uses policy-based Replication. |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-112525 |
All |
High Importance
|
A node assert can occur due to a resource allocation issue in a small timing window when using Remote Copy
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Hyperswap, Metro Mirror, Global Mirror, or Global Mirror with Change Volumes |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror |
SVAPAR-112856 |
All |
High Importance
|
Conversion of Hyperswap volumes to 3 site consistency groups will increase write response time of the Hyperswap volumes.
(show details)
Symptom |
Performance |
Environment |
Any system running Hyperswap and 3-Site |
Trigger |
Conversion of Hyperswap to 3 site consistency groups |
Workaround |
Manually increase rsize of Hyperswap change volumes before conversion to 3 site consistency groups |
|
8.7.0.0 |
3-Site using HyperSwap or Metro Mirror, HyperSwap |
SVAPAR-115021 |
All |
High Importance
|
Software validation checks can trigger a T2 recovery when attempting to move a Hyperswap vdisk into and out of the nocachingiogrp state.
(show details)
Symptom |
Loss of Access to Data |
Environment |
Any system that is configured for hyperswap |
Trigger |
Invoking 'movevdisk' command with the '-nocachingiogrp' flag in a Hyperswap environment |
Workaround |
None |
|
8.7.0.0 |
HyperSwap |
SVAPAR-115520 |
All |
High Importance
|
An unexpected sequence of NVMe host IO commands may trigger a node warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with NVMe hosts. |
Trigger |
An unexpected sequence of NVMe host IO commands |
Workaround |
None |
|
8.7.0.0 |
Hosts, NVMe |
SVAPAR-117457 |
All |
High Importance
|
A hung condition in Remote Receive IOs (RRI) for volume groups can lead to warmstarts on multiple nodes.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any system that uses Policy-based Replication |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-117768 |
All |
High Importance
|
Cloud Callhome may stop working without logging an error
(show details)
Symptom |
Configuration |
Environment |
8.6.0 or higher Systems sending data to Storage Insights without using the data collector are most likely to hit this issue |
Trigger |
None |
Workaround |
Cloud callhome can be disabled then re-enabled to restart the callhome if it has failed. |
|
8.7.0.0 |
Call Home |
SVAPAR-119799 |
FS9500, SVC |
High Importance
|
Inter-node resource queuing on SV3 I/O groups, causes high write response time.
(show details)
Symptom |
Performance |
Environment |
Environments that use clustering over Fibre Channel, have a suboptimal SAN, or have geographically dispersed sites for Enhanced Stretched Cluster or HyperSwap |
Trigger |
High intra-cluster round trip time. |
Workaround |
None |
|
8.7.0.0 |
Performance |
SVAPAR-120599 |
All |
High Importance
|
On systems handling a large number of concurrent host I/O requests, a timing window in memory allocation may cause a single node warmstart.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Systems running 8.6.2.0 |
Trigger |
Very high I/O workload |
Workaround |
None |
|
8.7.0.0 |
Hosts |
SVAPAR-120616 |
All |
High Importance
|
After mapping a volume to an NVMe host, a customer is unable to map the same VDisk to a second NVMe host using the GUI; however, it is possible using the CLI.
(show details)
Symptom |
None |
Environment |
Any system where the same VDisks are mapped to different NVMe hosts via the GUI can hit this issue. |
Trigger |
If the same vdisk is mapped to different NVMe hosts via GUI. |
Workaround |
Use the CLI |
|
8.7.0.0 |
Hosts |
SVAPAR-120630 |
All |
High Importance
|
An MDisk may go offline due to I/O timeouts caused by an imbalanced workload distribution towards the resources in DRP, while FlashCopy is running at a high copy rate within DRP and the target volume is deduplicated.
(show details)
Symptom |
Offline Volumes |
Environment |
Any system running FlashCopy, with a deduplicated target volume in DRP. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Data Reduction Pools |
SVAPAR-120631 |
All |
High Importance
|
When a user deletes a vdisk, and if 'chfcmap' is run afterwards against the same vdisk ID, a system recovery may occur.
(show details)
Symptom |
Loss of Access to Data |
Environment |
Any system configured with FlashCopy |
Trigger |
Running the 'chfcmap' command against a deleting vdisk. |
Workaround |
Do not run 'chfcmap' against a deleting vdisk ID. |
|
8.7.0.0 |
FlashCopy |
SVAPAR-127063 |
All |
High Importance
|
Degraded Remote Copy performance on systems with multiple IO groups running 8.5.0.11 or 8.6.0.3 after a node restarts
(show details)
Symptom |
Performance |
Environment |
Systems with multiple IO groups using Remote Copy |
Trigger |
Restarting a node |
Workaround |
Warmstart any node that is affected by the issue |
|
8.7.0.0 |
Global Mirror, Global Mirror With Change Volumes, HyperSwap, Metro Mirror, Performance |
SVAPAR-127825 |
All |
High Importance
|
Due to an issue with the Fibre Channel adapter firmware the node may warmstart
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems that have QLogic Fibre Channel adapter installed |
Trigger |
None |
Workaround |
Upgrade to firmware version 9.08.42+ |
|
8.7.0.0 |
Fibre Channel |
SVAPAR-127841 |
All |
High Importance
|
A slow I/O resource leak may occur when using FlashCopy, and the system is under high workload. This may cause a node warmstart to occur
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system configured with FlashCopy |
Trigger |
Many FlashCopy activities occurring when the system is experiencing high workloads |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-127845 |
All |
High Importance
|
Selecting a second I/O group in the two `Caching I/O Group` dropdowns on the `Define Volume Properties` modal of `Create Volumes` results in error `CMMVC8709E the iogroups of cache memory storage are not in the same site as the storage groups`.
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
Attempting to create a second I/O group, in the two `Caching I/O Group` dropdowns on the `Define Volume Properties` modal of `Create Volumes` |
Workaround |
None |
|
8.7.0.0 |
GUI Fix Procedure, Graphical User Interface |
SVAPAR-127869 |
All |
High Importance
|
Multiple node warmstarts may occur, due to a rarely seen timing window, when quorum disk I/O is submitted but there is no backend mdisk Logical Unit association that has been discovered by the agent for that quorum disk.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Quorum |
SVAPAR-127871 |
All |
High Importance
|
When performing a manual upgrade of the AUX cluster from 8.1.1.2 to 8.2.1.12, 'lsupdate' incorrectly reports that the code level is 7.7.1.5
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
System Update |
SVAPAR-128914 |
All |
High Importance
|
A CMMVC9859E error will occur when trying to use 'addvolumecopy' to create a HyperSwap volume from a VDisk with existing snapshots
(show details)
Symptom |
Configuration |
Environment |
Systems configured with Hyperswap topology with snapshots |
Trigger |
Trying to create a Hyperswap volume from an existing VDisk |
Workaround |
None |
|
8.7.0.0 |
HyperSwap |
SVAPAR-129318 |
All |
High Importance
|
A Storage Virtualize cluster configured without I/O group 0 is unable to send performance metrics
(show details)
Symptom |
Configuration |
Environment |
Any cluster that does not have I/O group 0 configured |
Trigger |
None |
Workaround |
Configure I/O group 0 |
|
8.7.0.0 |
Performance |
SVAPAR-131233 |
SVC |
High Importance
|
In an SVC stretched-cluster configuration with multiple I/O groups and policy-based replication, an attempt to create a new volume may fail due to an incorrect automatic I/O group assignment.
(show details)
Symptom |
Configuration |
Environment |
SVC stretched cluster with Policy Based Replication |
Trigger |
Creating a volume |
Workaround |
Use the 'mkvolume' CLI command and specify the correct -iogrp parameter |
|
8.7.0.0 |
Policy-based Replication |
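The workaround above can be sketched as below. The pool name, size, volume name and I/O group ID are illustrative; '-iogrp' is specified explicitly so that the faulty automatic I/O group assignment is bypassed:

```shell
# Create the volume via the CLI, explicitly choosing the correct caching I/O group
mkvolume -pool Pool0 -size 100 -unit gb -iogrp 0 -name vol0
```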
SVAPAR-131651 |
All |
High Importance
|
Policy-based Replication became stuck after both nodes in the I/O group on a target system restarted at the same time
(show details)
Symptom |
Loss of Redundancy |
Environment |
Target system using Policy-based Replication. |
Trigger |
Both nodes in the I/O group on a target system restart at the same time. |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-132013 |
All |
High Importance
|
On a HyperSwap system, the preferred site node can lease expire if the remote site nodes suffer a warmstart.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Hyperswap configurations |
Trigger |
If the remote site node warmstarts |
Workaround |
None |
|
8.7.0.0 |
HyperSwap, Quorum |
SVAPAR-137096 |
All |
High Importance
|
An issue with the TPM on FS50xx may cause a chsystemcert command to fail.
(show details)
Symptom |
Configuration |
Environment |
FS50xx systems |
Trigger |
Running 'chsystemcert' |
Workaround |
Until a fix is available, if a new certificate needs to be generated, it may be necessary to reboot both node canisters to prevent the issue recurring. |
|
8.7.0.0 |
Command Line Interface |
SVAPAR-142191 |
All |
High Importance
|
When a child pool contains thin-provisioned volumes, running out of space in the child pool may cause volumes outside the child pool to be taken offline.
(show details)
Symptom |
Offline Volumes |
Environment |
Systems configured with thin-provisioned volumes in child pools |
Trigger |
Out-of-space condition in a child pool |
Workaround |
Monitor free capacity in a child pool and add extra capacity if needed, to avoid out-of-space conditions. |
|
8.7.0.0 |
Thin Provisioning |
SVAPAR-98497 |
All |
High Importance
|
Excessive SSH logging may cause the Configuration node boot drive to become full. The node will go offline with error 565, indicating a boot drive failure
(show details)
Symptom |
Configuration |
Environment |
Any system that is being monitored by an external monitoring system |
Trigger |
Customers using external monitoring systems such as Zabbix, which use SSH to log in multiple times a second, may be affected |
Workaround |
None |
|
8.7.0.0 |
System Monitoring |
SVAPAR-98893 |
All |
High Importance
|
If an external storage controller has over-provisioned storage (for example a FlashSystem with an FCM array), the system may incorrectly display usable capacity data for mdisks from that controller. If connectivity to the storage controller is lost, node warmstarts may occur
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running 8.6.0.0 only, with an external controller that has over-provisioned storage |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Storage Virtualisation |
SVAPAR-99175 |
All |
High Importance
|
A node may warmstart due to an invalid queuing mechanism in cache. This can cause IO in cache to be in the same processing queue more than once.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any cluster on code below 8.6.1 |
Trigger |
Can happen when IO in cache is being processed |
Workaround |
None |
|
8.7.0.0 |
Cache |
SVAPAR-99354 |
All |
High Importance
|
Missing policing in the 'startfcconsistgrp' command for volumes using volume group snapshots, resulting in node warmstarts when creating a new volume group snapshot
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any system configured with Volume Group Snapshot and FlashCopy |
Trigger |
Adding a Volume Group Snapshot when a downstream legacy FlashCopy map exists. |
Workaround |
Only create a Volume Group Snapshot if a downstream legacy FlashCopy map does not exist |
|
8.7.0.0 |
FlashCopy |
SVAPAR-99537 |
All |
High Importance
|
If a hyperswap volume copy is created in a DRP child pool, and the parent pool has FCM storage, the change volumes will be created as thin-provisioned instead of compressed
(show details)
Symptom |
Configuration |
Environment |
Systems with DRP child pools and FCM storage |
Trigger |
Creating a change volume in a DRP child pool when the parent pool contains FCMs |
Workaround |
None |
|
8.7.0.0 |
Data Reduction Pools |
SVAPAR-99997 |
All |
High Importance
|
Creating a volume group from a snapshot whose index is greater than 255 may cause incorrect output from 'lsvolumegroup'
(show details)
Symptom |
Configuration |
Environment |
Systems using Volume Group Snapshots |
Trigger |
Creating a volume group from a snapshot whose index is greater than 255 |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
HU01222 |
All |
Suggested
|
FlashCopy entries in the eventlog always have an object ID of 0, rather than showing the correct object ID
(show details)
Symptom |
None |
Environment |
Any system configured with FlashCopy groups |
Trigger |
None |
Workaround |
Use the 'Info' event nearest to the 'config' event to determine which fcgrp was stopped. |
|
8.7.0.0 |
FlashCopy |
SVAPAR-100924 |
FS9500, SVC |
Suggested
|
After the battery firmware is updated, either using the utility or by upgrading to a version with newer firmware, the battery LED may be falsely illuminated.
(show details)
Symptom |
None |
Environment |
SV3 or FS9500 systems |
Trigger |
Battery firmware update |
Workaround |
Manually toggle the battery LED with the following commands. 'satask chnodeled -on -battery <ID>' and 'satask chnodeled -off -battery <ID>' |
|
8.7.0.0 |
|
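The LED toggle from the workaround above, run from the service CLI of the affected node. '<ID>' is the battery ID reported by the system and is left as a placeholder:

```shell
# Toggle the falsely illuminated battery LED on and then off to clear it
satask chnodeled -on -battery <ID>
satask chnodeled -off -battery <ID>
```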
SVAPAR-100958 |
All |
Suggested
|
A single FCM may incorrectly report multiple medium errors for the same LBA
(show details)
Symptom |
Performance |
Environment |
Predominantly FCM2, but could also affect other FCM generations |
Trigger |
None |
Workaround |
After the problem is detected, manually fail the FCM, format it, and then insert it back into the array. After the copyback has completed, update all FCMs to the recommended firmware level |
|
8.7.0.0 |
RAID |
SVAPAR-102271 |
All |
Suggested
|
Enable IBM Storage Defender integration for Data Reduction Pools
(show details)
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Interoperability |
SVAPAR-102382 |
All |
Suggested
|
Fibre Channel Read Diagnostic Parameters (RDP) indicates that a short-wave SFP is installed when in fact a long-wave SFP is installed.
(show details)
Symptom |
Configuration |
Environment |
Systems with long wave SFPs running 8.5.2.0 or higher. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
System Monitoring |
SVAPAR-106693 |
FS9500 |
Suggested
|
Remote Support Assistance (RSA) cannot be enabled on FS9500 systems with MTM 4983-AH8
(show details)
Symptom |
Configuration |
Environment |
FS9500 systems with MTM 4983-AH8 |
Trigger |
Trying to enable RSA on MTM 4983-AH8 |
Workaround |
None |
|
8.7.0.0 |
Support Remote Assist |
SVAPAR-107558 |
All |
Suggested
|
A Volume Group Snapshot (VGS) trigger may collide with a GMCV or Policy based Replication cycle causing the VGS trigger to fail.
(show details)
Symptom |
Configuration |
Environment |
Volume Group Snapshots with GMCV or Policy Based Replication |
Trigger |
Trigger a Volume Group Snapshot |
Workaround |
None |
|
8.7.0.0 |
FlashCopy, Global Mirror With Change Volumes, Policy-based Replication |
SVAPAR-107733 |
All |
Suggested
|
The 'mksnmpserver' command fails with 'CMMVC5711E [####] is not valid data' if auth passphrase contains special characters, such as '!'
(show details)
Symptom |
Configuration |
Environment |
Any system running on or after v8.3.1.9, v8.4.0.10, or v8.5.0.7 |
Trigger |
Using an auth passphrase containing special characters to execute the 'mksnmpserver' command |
Workaround |
Do not include special characters in the auth passphrase |
|
8.7.0.0 |
|
SVAPAR-107852 |
All |
Suggested
|
A Policy-Based High Availability node may warmstart during IP quorum disconnect and reconnect operations.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using Policy-Based High Availability and IP quorum |
Trigger |
IP quorum disconnecting and reconnecting |
Workaround |
None |
|
8.7.0.0 |
IP Quorum |
SVAPAR-108469 |
All |
Suggested
|
A single node warmstart may occur on nodes configured to use a secured IP partnership
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with a secured IP partnership |
Trigger |
Service Association goes down |
Workaround |
None |
|
8.7.0.0 |
IP Replication |
SVAPAR-108476 |
All |
Suggested
|
Remote users with public SSH keys configured cannot fall back to password authentication.
(show details)
Symptom |
Configuration |
Environment |
Systems that have remote users configured that use public SSH keys |
Trigger |
Remote users configured with public SSH keys |
Workaround |
None |
|
8.7.0.0 |
Security |
SVAPAR-108551 |
All |
Suggested
|
An expired token in the GUI file upload process can cause the upgrade to not start automatically after the file is successfully uploaded.
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
Start a code upgrade via GUI |
Workaround |
Upgrade via the CLI. Alternatively, log out of the GUI, log back in to re-authenticate, then return to the Update System view. On the Test and Update modal, select the test utility and update image files that are already on the system from the previous upload (without selecting to upload them again) |
|
8.7.0.0 |
System Update |
SVAPAR-109289 |
All |
Suggested
|
A buffer overflow may occur when handling Multi-Factor Authentication (MFA) or Single Sign On (SSO) client secrets that exceed the maximum length of 55 characters
(show details)
Symptom |
Configuration |
Environment |
Systems that use MFA or SSO |
Trigger |
Using a client secret with > 55 characters |
Workaround |
Use fewer than 55 characters for the client secret |
|
8.7.0.0 |
Backend Storage |
SVAPAR-110059 |
All |
Suggested
|
When using Storage Insights without a data collector, an attempt to collect a snap using Storage Insights may fail.
(show details)
Symptom |
None |
Environment |
Storage Virtualize system running 8.5.4 or higher and using Storage Insights without a data collector |
Trigger |
Initiating a Support Package collection using Storage Insights |
Workaround |
Disable then re-enable the Cloud Callhome service using the 'svctask chcloudcallhome -disable' and 'svctask chcloudcallhome -enable' commands. This will function until the next time the configuration node fails over |
|
8.7.0.0 |
Support Data Collection |
SVAPAR-110309 |
All |
Suggested
|
When a volume group is assigned to an ownership group and has a snapshot policy associated, running the 'lsvolumegroupsnapshotpolicy' or 'lsvolumegrouppopulation' command whilst logged in as an ownership group user will cause a config node warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Using a combination of volume groups, volume group snapshot policies and ownership groups |
Trigger |
Running the 'lsvolumegroupsnapshotpolicy' or 'lsvolumegrouppopulation' command |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-110745 |
All |
Suggested
|
Policy-based Replication (PBR) snapshots and Change Volumes are factored into the preferred node assignment. This can lead to a perceived imbalance of the distribution of preferred node assignments.
(show details)
Symptom |
Configuration |
Environment |
Any system running policy-based Replication |
Trigger |
Policy-based Replication enabled |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-110749 |
All |
Suggested
|
When configuring volumes using the GUI wizard, the underlying command called is 'mkvolume' rather than the previous 'mkvdisk'. With 'mkvdisk' it was possible to format the volumes, whereas with 'mkvolume' it is not
(show details)
Symptom |
Configuration |
Environment |
Definition of multiple volumes |
Trigger |
Defining volumes via management GUI. |
Workaround |
Run the 'mkvdisk' command manually. |
|
8.7.0.0 |
|
SVAPAR-111021 |
All |
Suggested
|
Unable to load resource page in GUI if the IO group ID:0 does not have any nodes.
(show details)
Symptom |
None |
Environment |
Any systems that have no nodes in IO group ID:0. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
System Monitoring |
SVAPAR-111187 |
All |
Suggested
|
If the browser language is set to French, the SNMP server creation wizard may not be displayed.
(show details)
Symptom |
None |
Environment |
Any system where the browser language is set to French |
Trigger |
None |
Workaround |
Switch the browser language to English, or use the CLI to configure SNMP |
|
8.7.0.0 |
System Monitoring |
SVAPAR-111239 |
All |
Suggested
|
In rare situations it is possible for a node running Global Mirror with Change Volumes (GMCV) to assert
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running GMCV |
Trigger |
Any system running GMCV |
Workaround |
None |
|
8.7.0.0 |
Global Mirror With Change Volumes |
SVAPAR-111989 |
All |
Suggested
|
Downloading software with a Fix ID longer than 64 characters fails with an error
(show details)
Symptom |
None |
Environment |
None |
Trigger |
Downloading a software package from Fix Central that has a Fix ID greater than 64 characters long |
Workaround |
None |
|
8.7.0.0 |
Support Remote Assist |
SVAPAR-111991 |
All |
Suggested
|
Attempting to create a truststore fails with a CMMVC5711E error if the certificate file does not end with a newline character
(show details)
Symptom |
Configuration |
Environment |
Systems using policy-based Replication, secured IP partnerships or VASA. |
Trigger |
Attempting to create a truststore |
Workaround |
Ensure the certificate file ends with a newline character |
|
8.7.0.0 |
IP Replication, Policy-based Replication, vVols |
SVAPAR-111992 |
All |
Suggested
|
Unable to configure policy-based Replication using the GUI, if truststore contains blank lines or CRLF line endings
(show details)
Symptom |
Configuration |
Environment |
Systems configured with policy-based Replication |
Trigger |
Attempting to configure policy-based Replication |
Workaround |
Ensure the certificate file used to create the truststore contains no blank lines and uses LF line endings instead of CRLF line endings. |
|
8.7.0.0 |
Graphical User Interface, Policy-based Replication |
SVAPAR-112243 |
All |
Suggested
|
Prior to 8.4.0, NTP was used for time synchronization. From 8.4.0 onwards this was changed to 'chronyd'. Systems upgrading from a lower level to 8.4.0 or higher may experience compatibility issues.
(show details)
Symptom |
Configuration |
Environment |
Systems that have upgraded to 8.4 or higher |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-112711 |
All |
Suggested
|
IBM Storage Virtualize user interface code does not respond to a malformed HTTP POST with the expected HTTP 401 message.
(show details)
Symptom |
None |
Environment |
IBM Storage Virtualize GUI |
Trigger |
Malformed HTTP POST |
Workaround |
None |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-112712 |
SVC |
Suggested
|
The Cloud Call Home function will not restart on SVC clusters that were initially created with CG8 hardware and upgraded to 8.6.0.0 and above.
(show details)
Symptom |
None |
Environment |
SVC cluster that has been upgraded from CG8 hardware. |
Trigger |
Upgrading SVC cluster |
Workaround |
None |
|
8.7.0.0 |
Call Home |
SVAPAR-113792 |
All |
Suggested
|
A node assert may occur when an outbound IPC message, such as an nslookup request to a DNS server, times out
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system running 8.6.0.x or higher |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-114081 |
All |
Suggested
|
The lsfabric command may show FC port logins which no longer exist. In large environments with many devices attached to the SAN, this may result in an incorrect 1800 error being reported, indicating that a node has too many logins.
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
Warmstart each node in turn, to remove the invalid entries from lsfabric. |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-114086 |
SVC |
Suggested
|
Incorrect IO group memory policing for volume mirroring in the GUI for SVC SV3 hardware.
(show details)
Symptom |
Configuration |
Environment |
2145-SV3 hardware |
Trigger |
Attempting to increase volume mirroring memory allocation in the GUI. |
Workaround |
Perform the action via the CLI instead. |
|
8.7.0.0 |
Volume Mirroring |
SVAPAR-114145 |
All |
Suggested
|
A timing issue triggered by disabling an IP partnership's compression state while replication is running may cause a node to warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems configured with IP replication. |
Trigger |
Disabling an IP partnership's compression state whilst replication is running. |
Workaround |
Stop replication before changing the IP partnership's compression state |
|
8.7.0.0 |
IP Replication |
SVAPAR-116265 |
All |
Suggested
|
When upgrading memory on a node, the node may repeatedly reboot if it was not removed from the cluster before being shut down to add the additional memory.
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
GEN3 or newer node hardware. |
Trigger |
Not first removing the node from the cluster before shutting it down and adding additional memory. |
Workaround |
Remove the node from the cluster first, before shutting it down and adding additional memory. |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-117663 |
All |
Suggested
|
The last backup time for a safeguarded volume group within the Volume Groups view does not display the correct time.
(show details)
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-120156 |
FS5000, FS5100, FS5200, FS7200, FS7300, SVC |
Suggested
|
An internal process introduced in 8.6.0 to collect iSCSI port statistics can degrade host performance
(show details)
Symptom |
Single Node Warmstart |
Environment |
8.6.0.x and higher code level with 25Gb ethernet adapters |
Trigger |
Any system installed with 25Gb ethernet adapters that performs VMware clone or vMotion or Windows ODX file copy operations |
Workaround |
None |
|
8.7.0.0 |
Performance, iSCSI |
SVAPAR-120359 |
All |
Suggested
|
Single node warmstart when using FlashCopy maps on volumes configured for Policy-based Replication
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems using FlashCopy maps on volumes configured for Policy-based Replication |
Trigger |
The single node warmstart has a low risk of occurring if policy-based replication runs in cycling mode. |
Workaround |
Make volume groups with replication policies independent, or stop the partnership |
|
8.7.0.0 |
FlashCopy, Policy-based Replication |
SVAPAR-120399 |
All |
Suggested
|
A host WWPN incorrectly shows as being still logged into the storage when it is not.
(show details)
Symptom |
Configuration |
Environment |
Systems using Fibre Channel host connections. |
Trigger |
Disabling or removing a host fibre channel connection. |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-120495 |
All |
Suggested
|
A node can experience performance degradation when using the embedded VASA provider, potentially leading to a single node warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems running with Embedded VASA provider. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-120610 |
All |
Suggested
|
Excessive 'chfcmap' commands can result in multiple node warmstarts occurring
(show details)
Symptom |
Multiple Node Warmstarts |
Environment |
Any systems configured with FlashCopy. |
Trigger |
Performing excessive 'chfcmap' commands |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-120639 |
All |
Suggested
|
A vulnerability scanner may report that cookies are set without the HttpOnly flag.
(show details)
Symptom |
Configuration |
Environment |
On port 442, neither the secure flag nor the HttpOnly flag is set on the SSL cookie. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-120732 |
All |
Suggested
|
Unable to expand a vdisk from the GUI, because the constant values for the maximum capacity of compressed and regular pool volumes were incorrect in the constants file.
(show details)
Symptom |
Configuration |
Environment |
IBM FlashSystem |
Trigger |
None |
Workaround |
Perform the action via the CLI |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-120925 |
All |
Suggested
|
A single node assert may occur due to a timing issue related to thin provisioned volumes in a traditional pool.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems with thin provisioned volumes in a traditional pool. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Thin Provisioning |
SVAPAR-121334 |
All |
Suggested
|
Packets of unexpected size received on the ethernet interface can fill the internal buffers, causing a node to warmstart to clear the condition
(show details)
Symptom |
Single Node Warmstart |
Environment |
Affects systems running 8.6.x |
Trigger |
The storage code expects a packet of size 128, but the initiator sends a packet of size 110, causing the node to warmstart |
Workaround |
None |
|
8.7.0.0 |
NVMe |
SVAPAR-122411 |
All |
Suggested
|
A node may assert when a vdisk has been expanded but the rehome process has not been made aware of the possible change in the number of regions it may have to rehome.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system configured with Data Reduction Pools |
Trigger |
Any command that will expand the size of a vdisk such as 'expandvdisksize'. |
Workaround |
None |
|
8.7.0.0 |
Data Reduction Pools |
SVAPAR-123644 |
All |
Suggested
|
A system with NVMe drives may falsely log an error indicating a Flash drive has high write endurance usage. The error cannot be cleared.
(show details)
Symptom |
Configuration |
Environment |
Systems with NVMe drives |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-126742 |
All |
Suggested
|
A 3400 error (too many compression errors) may be logged incorrectly, due to an incorrect threshold. The error can be ignored on code levels which do not contain this fix.
(show details)
Symptom |
Configuration |
Environment |
Systems using DRP compression |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Compression, Data Reduction Pools |
SVAPAR-127835 |
All |
Suggested
|
A node may warmstart due to invalid RDMA receive size of zero.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems configured with NVMeF over RDMA |
Trigger |
An RDMA receive size of zero is detected due to an API error |
Workaround |
None |
|
8.7.0.0 |
NVMe |
SVAPAR-127844 |
All |
Suggested
|
When assigning a snapshot policy to a volume group, the user is informed that the policy cannot be assigned, and error message CMMVC9893E is displayed.
(show details)
Symptom |
None |
Environment |
Any system that is configured to use snapshots |
Trigger |
Assigning a policy to a volume group |
Workaround |
None |
|
8.7.0.0 |
FlashCopy |
SVAPAR-128010 |
FS7300, FS9500 |
Suggested
|
A node warmstart can sometimes occur due to a timeout on certain fibre channel adapters
(show details)
Symptom |
Single Node Warmstart |
Environment |
Can affect both FS7300 and FS9500 |
Trigger |
None |
Workaround |
Update the firmware |
|
8.7.0.0 |
Fibre Channel |
SVAPAR-128414 |
FS5100, FS5200, FS7200, FS7300, FS9100, FS9200, FS9500, SVC |
Suggested
|
Thin-clone volumes in a Data Reduction Pool will incorrectly have compression disabled, if the source volume was uncompressed.
(show details)
Symptom |
None |
Environment |
Any system that has thin-clone volumes and where hardware compression is available |
Trigger |
Creating a thin-clone in a Data Reduction Pool. |
Workaround |
None |
|
8.7.0.0 |
Compression, FlashCopy |
SVAPAR-129111 |
All |
Suggested
|
When using the GUI, the IPv6 field is not wide enough, requiring the user to scroll right to see the full IPv6 address.
(show details)
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
Use the CLI instead |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-130646 |
All |
Suggested
|
False positive Recovery Point Objective (RPO) exceeded events (52004) are reported for volume groups configured with Policy-Based Replication
(show details)
Symptom |
Configuration |
Environment |
Systems configured with Policy-Based Replication |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-131212 |
All |
Suggested
|
The GUI partnership properties dialog crashes if the issuer certificate does not have an organization field
(show details)
Symptom |
Configuration |
Environment |
Systems with a partnership |
Trigger |
Opening the partnership property dialog or creating a partnership while the issuer certificate has no organization field |
Workaround |
When using an externally signed certificate, make sure the issuer certificate has a non-empty organization (O=) field |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-131250 |
All |
Suggested
|
The system may not correctly balance fibre channel workload over paths to a back end controller.
(show details)
Symptom |
Performance |
Environment |
Systems with storage on a back end controller. |
Trigger |
None |
Workaround |
Reboot the node or node canister showing imbalanced path workload. |
|
8.7.0.0 |
Backend Storage |
SVAPAR-131865 |
All |
Suggested
|
A system may encounter communication issues when being configured with IPv6.
(show details)
Symptom |
Configuration |
Environment |
Systems using IPV6 |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-131993 |
All |
Suggested
|
The IPv6 GUI field has been extended to accommodate the full length of the IPv6 address.
(show details)
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Graphical User Interface |
SVAPAR-131994 |
All |
Suggested
|
When implementing Safeguarded Copy, the associated child pool may run out of space, taking multiple Safeguarded Copies offline. This can cause a node to warmstart.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Systems that are being configured with Safeguarded Copy |
Trigger |
Not enough space dedicated to FlashCopy target |
Workaround |
Configure more space for FlashCopy |
|
8.7.0.0 |
Safeguarded Copy & Safeguarded Snapshots |
SVAPAR-132001 |
All |
Suggested
|
Unexpected lease expiries may occur when half of the nodes in the system start up, one after another in a short time.
(show details)
Symptom |
None |
Environment |
None |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
|
SVAPAR-132003 |
All |
Suggested
|
A node may warmstart when an internal process to collect information from Ethernet ports takes longer than expected.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system configured with iSCSI / iSER hosts or IP replication |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
IP Replication, iSCSI |
SVAPAR-132011 |
All |
Suggested
|
In rare situations, a host's WWPN may incorrectly show as still logged into the storage even though it is not. This can cause the host to incorrectly appear as degraded.
(show details)
Symptom |
Configuration |
Environment |
Systems using Fibre Channel host connections. |
Trigger |
Disabling or removing a host fibre channel connection. |
Workaround |
None |
|
8.7.0.0 |
Fibre Channel, Reliability Availability Serviceability |
SVAPAR-132062 |
All |
Suggested
|
vVols are reported as inaccessible due to a 30 minute timeout if the VASA provider is unavailable
(show details)
Symptom |
Offline Volumes |
Environment |
This affects VMware environments |
Trigger |
The VASA provider is unavailable |
Workaround |
None |
|
8.7.0.0 |
vVols |
SVAPAR-132072 |
All |
Suggested
|
A node may assert due to a Fibre Channel port constantly flapping between the FlashSystem and the host.
(show details)
Symptom |
Single Node Warmstart |
Environment |
None |
Trigger |
Fibre Channel port constantly flapping between the FlashSystem and the host. |
Workaround |
Replace the SFP, the fibre optic cable, or both, between the FlashSystem and the host |
|
8.7.0.0 |
Fibre Channel |
SVAPAR-143574 |
All |
Suggested
|
It is possible for a battery register read to fail, causing a battery to unexpectedly be reported as offline. The issue will persist until the node is rebooted.
(show details)
Symptom |
Loss of Redundancy |
Environment |
None |
Trigger |
None |
Workaround |
Reboot the node to resolve the issue. |
|
8.7.0.0 |
Reliability Availability Serviceability |
SVAPAR-89271 |
All |
Suggested
|
Policy-based Replication is not achieving the link_bandwidth_mbits configured on the partnership if only a single volume group is replicating in an I/O group, or workload is not balanced equally between volume groups owned by both nodes.
(show details)
Symptom |
Performance |
Environment |
Systems using Policy-based Replication with a single replicating volume group in an IO group, or unbalanced workload. |
Trigger |
None |
Workaround |
Create at least two volume groups that are actively replicating in an I/O group. Alternatively, double the link_bandwidth_mbits if the configuration means only a single volume group is replicating in an I/O group. For systems with multiple volume groups, balance the workload evenly. |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-95384 |
All |
Suggested
|
In very rare circumstances, a timing window may cause a single node warmstart when creating a volume using policy-based replication
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system configured with Policy-Based Replication |
Trigger |
Running the 'mkvolume' command |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-96777 |
All |
Suggested
|
Policy-based Replication uses journal resources to handle replication. If these resources become exhausted, the volume groups with the highest RPO and the most resources should be purged to free up resources for other volume groups. The decision about which volume groups to purge is made incorrectly, potentially causing too many volume groups to exceed their target RPO
(show details)
Symptom |
Loss of Redundancy |
Environment |
Any systems running Policy-based Replication |
Trigger |
Journal purge with Policy-based Replication, e.g. link issue or performance issue |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-96952 |
All |
Suggested
|
A single node warmstart may occur when updating the login counts associated with a backend controller.
(show details)
Symptom |
Single Node Warmstart |
Environment |
Any system with backend external controllers. |
Trigger |
None |
Workaround |
None |
|
8.7.0.0 |
Backend Storage |
SVAPAR-97502 |
All |
Suggested
|
Configurations that use Policy-based Replication with standard pool change volumes will raise space usage warnings
(show details)
Symptom |
None |
Environment |
This issue can only be triggered when using Policy-based Replication with standard pools. The issue does not occur in DRP environments |
Trigger |
Systems that use Policy-based Replication within a standard pool whilst running 8.5.2.0 - 8.6.0.0 |
Workaround |
None |
|
8.7.0.0 |
Policy-based Replication |
SVAPAR-98128 |
All |
Suggested
|
A single node warmstart may occur on upgrade to 8.6.0.0, on SA2 nodes with 25Gb ethernet adapters
(show details)
Symptom |
Single Node Warmstart |
Environment |
SA2 nodes with a 25Gb ethernet adapters |
Trigger |
Upgrading to 8.6.0.0 |
Workaround |
None |
|
8.7.0.0 |
System Update |
SVAPAR-98576 |
All |
Suggested
|
Customers cannot edit certain properties of a FlashCopy mapping via the GUI FlashCopy mappings panel, as the edit modal does not appear.
(show details)
Symptom |
Configuration |
Environment |
None |
Trigger |
None |
Workaround |
Use the CLI instead |
|
8.7.0.0 |
FlashCopy, Graphical User Interface |
SVAPAR-98611 |
All |
Suggested
|
The system returns an incorrect retry delay timer for a SCSI BUSY status response to AIX hosts when an attempt is made to access a VDisk that is not mapped to the host
(show details)
Symptom |
Loss of Access to Data |
Environment |
AIX hosts |
Trigger |
Trying to access an unmapped VDisk from an AIX host |
Workaround |
None |
|
8.7.0.0 |
Interoperability |