Release Note for V9000 Family Block Storage Products


This release note applies to the V9000 family of block storage products running the 7.7.1 release and details the issues resolved in all Program Temporary Fixes (PTFs) between 7.7.1.0 and 7.7.1.9. This document will be updated with additional information whenever a PTF is released.

This document was last updated on 5 June 2020.

  1. New Features
  2. Known Issues and Restrictions
  3. Issues Resolved
    1. Security Issues Resolved
    2. APARs and Flashes Resolved
  4. Supported upgrade paths
  5. Useful Links
Note: Detailed build version numbers are included in the Update Matrices in the Useful Links section.

1. New Features

The following new features have been introduced in the 7.7.1 release:

2. Known Issues and Restrictions

Details  Introduced

In the GUI, when filtering volumes by host, if there are more than 50 host objects then the host list will not include the hosts’ names. This issue will be fixed in a future PTF.  7.7.1.7

Systems with encrypted managed disks cannot be upgraded to v7.7.1.5. This is a temporary restriction that is policed by the software upgrade test utility (see the sketch after this table). Please check this release note regularly for updates.  7.7.1.5

Host disconnects may occur when using VMware vSphere 5.5.0 Update 2 or vSphere 6.0. Refer to this flash for more information.  n/a

If an update stalls or fails, contact IBM Support for further assistance.  n/a
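
The encrypted managed disk restriction above is policed by the software upgrade test utility; the minimal Python sketch below only illustrates the kind of check involved and is not a replacement for running the utility. It assumes the output of the lsmdisk CLI view, saved locally in colon-delimited form, exposes an encrypt attribute (both assumptions are made purely for illustration).

```python
# Hypothetical pre-upgrade check (illustration only): flag encrypted managed
# disks before planning an upgrade to the restricted 7.7.1.5 level.
# Assumes 'lsmdisk.txt' holds the output of 'lsmdisk -delim :' and that the
# view includes an 'encrypt' column; neither assumption replaces the
# software upgrade test utility.
import csv

RESTRICTED_TARGET = "7.7.1.5"

def encrypted_mdisks(path="lsmdisk.txt"):
    with open(path, newline="") as handle:
        reader = csv.DictReader(handle, delimiter=":")
        return [row.get("name", "") for row in reader if row.get("encrypt") == "yes"]

if __name__ == "__main__":
    names = encrypted_mdisks()
    if names:
        print(f"Encrypted MDisks found ({names}); the {RESTRICTED_TARGET} restriction applies.")
    else:
        print(f"No encrypted MDisks detected; the {RESTRICTED_TARGET} restriction does not apply.")
```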

3. Issues Resolved

This release contains all of the fixes included in the 7.7.0.1 release, plus the following additional fixes.

A release may contain fixes for security issues, fixes for APARs/FLASHs or both. Consult both tables below to understand the complete set of fixes included in the release.
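
To determine which of the fixes below are already present at a given PTF level, the "Resolved in" column can be compared against the installed level. The following short Python sketch illustrates that comparison; the sample rows are copied from the tables in this section and the helper names are purely illustrative.

```python
# Illustrative helper (not part of the product): check whether a fix listed
# below is already included in an installed 7.7.1.x PTF level.

def level(version):
    """Convert a version string such as '7.7.1.9' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def included(resolved_in, installed):
    """A fix is present if the installed level is at or above its 'Resolved in' level."""
    return level(installed) >= level(resolved_in)

# Sample rows taken from the tables below.
fixes = [("CVE-2018-1433", "7.7.1.9"), ("HU01646", "7.7.1.7"), ("HU01193", "7.7.1.5")]
installed = "7.7.1.7"
for reference, resolved in fixes:
    status = "included" if included(resolved, installed) else "not included"
    print(f"{reference}: resolved in {resolved} -> {status} at {installed}")
```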

3.1 Security Issues Resolved

Security issues are documented using a reference number provided by "Common Vulnerabilities and Exposures" (CVE).
CVE Identifier Link for additional Information Resolved in
CVE-2016-10708 ibm10717661 7.7.1.9
CVE-2016-10142 ibm10717931 7.7.1.9
CVE-2017-11176 ibm10717931 7.7.1.9
CVE-2018-1433 ssg1S1012263 7.7.1.9
CVE-2018-1434 ssg1S1012263 7.7.1.9
CVE-2018-1438 ssg1S1012263 7.7.1.9
CVE-2018-1461 ssg1S1012263 7.7.1.9
CVE-2018-1462 ssg1S1012263 7.7.1.9
CVE-2018-1463 ssg1S1012263 7.7.1.9
CVE-2018-1464 ssg1S1012263 7.7.1.9
CVE-2018-1465 ssg1S1012263 7.7.1.9
CVE-2018-1466 ssg1S1012263 7.7.1.9
CVE-2016-6210 ssg1S1012276 7.7.1.9
CVE-2016-6515 ssg1S1012276 7.7.1.9
CVE-2013-4312 ssg1S1012277 7.7.1.9
CVE-2015-8374 ssg1S1012277 7.7.1.9
CVE-2015-8543 ssg1S1012277 7.7.1.9
CVE-2015-8746 ssg1S1012277 7.7.1.9
CVE-2015-8812 ssg1S1012277 7.7.1.9
CVE-2015-8844 ssg1S1012277 7.7.1.9
CVE-2015-8845 ssg1S1012277 7.7.1.9
CVE-2015-8956 ssg1S1012277 7.7.1.9
CVE-2016-2053 ssg1S1012277 7.7.1.9
CVE-2016-2069 ssg1S1012277 7.7.1.9
CVE-2016-2384 ssg1S1012277 7.7.1.9
CVE-2016-2847 ssg1S1012277 7.7.1.9
CVE-2016-3070 ssg1S1012277 7.7.1.9
CVE-2016-3156 ssg1S1012277 7.7.1.9
CVE-2016-3699 ssg1S1012277 7.7.1.9
CVE-2016-4569 ssg1S1012277 7.7.1.9
CVE-2016-4578 ssg1S1012277 7.7.1.9
CVE-2016-4581 ssg1S1012277 7.7.1.9
CVE-2016-4794 ssg1S1012277 7.7.1.9
CVE-2016-5412 ssg1S1012277 7.7.1.9
CVE-2016-5828 ssg1S1012277 7.7.1.9
CVE-2016-5829 ssg1S1012277 7.7.1.9
CVE-2016-6136 ssg1S1012277 7.7.1.9
CVE-2016-6198 ssg1S1012277 7.7.1.9
CVE-2016-6327 ssg1S1012277 7.7.1.9
CVE-2016-6480 ssg1S1012277 7.7.1.9
CVE-2016-6828 ssg1S1012277 7.7.1.9
CVE-2016-7117 ssg1S1012277 7.7.1.9
CVE-2016-10229 ssg1S1012277 7.7.1.9
CVE-2016-0634 ssg1S1012278 7.7.1.9
CVE-2017-5647 ssg1S1010892 7.7.1.7
CVE-2017-5638 ssg1S1010113 7.7.1.6
CVE-2016-6796 ssg1S1010114 7.7.1.6
CVE-2016-6816 ssg1S1010114 7.7.1.6
CVE-2016-6817 ssg1S1010114 7.7.1.6
CVE-2016-2177 ssg1S1010115 7.7.1.6
CVE-2016-2178 ssg1S1010115 7.7.1.6
CVE-2016-2183 ssg1S1010115 7.7.1.6
CVE-2016-6302 ssg1S1010115 7.7.1.6
CVE-2016-6304 ssg1S1010115 7.7.1.6
CVE-2016-6306 ssg1S1010115 7.7.1.6
CVE-2016-5696 ssg1S1010116 7.7.1.6
CVE-2016-2834 ssg1S1010117 7.7.1.6
CVE-2016-5285 ssg1S1010117 7.7.1.6
CVE-2016-8635 ssg1S1010117 7.7.1.6
CVE-2016-2183 ssg1S1010205 7.7.1.6
CVE-2016-5546 ssg1S1010205 7.7.1.6
CVE-2016-5547 ssg1S1010205 7.7.1.6
CVE-2016-5548 ssg1S1010205 7.7.1.6
CVE-2016-5549 ssg1S1010205 7.7.1.6
CVE-2016-5385 ssg1S1009581 7.7.1.3
CVE-2016-5386 ssg1S1009581 7.7.1.3
CVE-2016-5387 ssg1S1009581 7.7.1.3
CVE-2016-5388 ssg1S1009581 7.7.1.3
CVE-2016-3092 ssg1S1009284 7.7.1.2
CVE-2016-4430 ssg1S1009282 7.7.1.0
CVE-2016-4431 ssg1S1009282 7.7.1.0
CVE-2016-4433 ssg1S1009282 7.7.1.0
CVE-2016-4436 ssg1S1009282 7.7.1.0
CVE-2016-4461 ssg1S1010883 7.7.1.0

3.2 APARs and Flashes Resolved

Reference Severity Description Resolved in Feature Tags
HU01866 S1 HIPER (Highly Pervasive): A faulty PSU sensor in an AC3 node can fill the SEL log, causing the service processor (BMC) to disable logging. If a snap is subsequently taken from the node, a timeout will occur and the node will be taken offline. It is possible for this to affect both nodes in an I/O group 7.7.1.9 System Monitoring
HU01767 S1 Reads of 4K/8K from an array can under exceptional circumstances return invalid data 7.7.1.9 RAID, Thin Provisioning
HU01771 S2 An issue with the CMOS battery in a node can cause an unexpectedly large log file to be generated by the BMC. At log collection the node may be taken offline 7.7.1.9 System Monitoring
HU01445 S3 Systems with heavily used RAID-1 or RAID-10 arrays may experience a node warmstart 7.7.1.9
HU01624 S3 GUI response can become very slow in systems with a large number of compressed and uncompressed volumes 7.7.1.9 Graphical User Interface
HU01628 S3 In the GUI on the Volumes page, whilst using the filter function, some volume entries may not be displayed until the page has completed loading 7.7.1.9 Graphical User Interface
HU01664 S3 A timing window issue during an upgrade can cause the node restarting to warmstart stalling the upgrade 7.7.1.9 System Update
HU01687 S3 For the 'volumes by host', 'ports by host' and 'volumes by pool' pages in the GUI, when the number of items is greater than 50, the item name will not be displayed 7.7.1.9 Graphical User Interface
HU01698 S3 A node warmstart may occur when deleting a compressed volume if a host has written to the volume minutes before the volume is deleted. 7.7.1.9 Compression
HU01730 S3 When running the DMP for a 1046 error the picture may not indicate the correct position of the failed adapter 7.7.1.9 GUI Fix Procedure
HU01763 S3 A single node warmstart may occur on a DH8 config node when inventory email is created. The issue only occurs if this coincides with a very high rate of CLI commands and high I/O workload on the config node 7.7.1.9 System Monitoring, Command Line Interface
HU01706 S1 HIPER (Highly Pervasive): Areas of volumes written with all-zero data may contain non-zero data. For more details refer to the following Flash 7.7.1.8
HU00744 (Reverted) S3 This APAR has been reverted in light of issues with the fix. It will be re-applied in a future PTF 7.7.1.8
HU01239 & HU01255 & HU01586 S1 HIPER (Highly Pervasive): The presence of a faulty SAN component can delay lease messages between nodes leading to a cluster-wide lease expiry and consequential loss of access 7.7.1.7 Reliability Availability Serviceability
HU01505 S1 HIPER (Highly Pervasive): A non-redundant drive experiencing many errors can be taken offline, obstructing rebuild activity 7.7.1.7 Backend Storage, RAID
HU01646 S1 HIPER (Highly Pervasive): A new failure mechanism in the 16Gb HBA driver can under certain circumstances lead to a lease expiry of the entire cluster 7.7.1.7 Reliability Availability Serviceability
FLASH-22868 S1 Call Home no longer sends hardware events 7.7.1.7 System Monitoring
HU01267 S1 An unusual interaction between Remote Copy and FlashCopy can lead to both nodes in an I/O group warmstarting 7.7.1.7 Global Mirror With Change Volumes
HU01490 S1 When attempting to add/remove multiple IQNs to/from a host the tables that record host-wwpn mappings can become inconsistent resulting in repeated node warmstarts across I/O groups 7.7.1.7 iSCSI
HU01519 S1 One PSU may silently fail leading to the possibility of a dual node reboot 7.7.1.7 Reliability Availability Serviceability
HU01528 S1 Both nodes may warmstart due to Sendmail throttling 7.7.1.7
HU01549 S1 During a system upgrade HyperV-clustered hosts may experience a loss of access to any iSCSI connected volumes 7.7.1.7 iSCSI, System Update
HU01572 S1 SCSI 3 commands from unconfigured WWPNs may result in multiple warmstarts leading to a loss of access 7.7.1.7 iSCSI
HU01635 S1 A slow memory leak in the host layer can lead to an out-of-memory condition resulting in offline volumes or performance degradation 7.7.1.7 Hosts, Performance
HU00762 S2 Due to an issue in the cache component nodes within an I/O group are not able to form a caching-pair and are serving I/O through a single node 7.7.1.7 Reliability Availability Serviceability
HU01416 S2 ISL configuration activity may cause a cluster-wide lease expiry 7.7.1.7 Reliability Availability Serviceability
HU01428 S2 Scheduling issue adversely affects performance resulting in node warmstarts 7.7.1.7 Reliability Availability Serviceability
HU01477 S2 Due to the way enclosure data is read it is possible for a firmware mismatch between nodes to occur during an upgrade 7.7.1.7 System Update
HU01488 S2 SAS transport errors on an enclosure slot can affect an adjacent slot leading to double drive failures 7.7.1.7 Drives
HU01506 S2 Creating a vdisk copy with the -autodelete option can cause a timer scheduling issue leading to node warmstarts 7.7.1.7 Volume Mirroring
HU01569 S2 When compression utilisation is high the config node may exhibit longer I/O response times than non-config nodes 7.7.1.7 Compression
HU01579 S2 In systems where all drives are of type HUSMM80xx0ASS20 it will not be possible to assign a quorum drive 7.7.1.7 Quorum, Drives
HU01609 & IT15343 S2 When the system is busy the compression component may be paged out of memory resulting in latency that can lead to warmstarts 7.7.1.7 Compression
HU01614 S2 After a node is upgraded hosts defined as TPGS may have paths set to inactive 7.7.1.7 Hosts
HU01636 S2 A connectivity issue with certain host SAS HBAs can prevent hosts from establishing stable communication with the storage controller 7.7.1.7 Hosts
HU01638 S2 When upgrading to v7.6 or later, if there is another cluster in the same zone which is at v5.1 or earlier, nodes will warmstart and the upgrade will fail 7.7.1.7 System Update
IT17564 S2 All nodes in an I/O group may warmstart when a DRAID array experiences drive failures 7.7.1.7 Distributed RAID
IT19726 S2 Warmstarts may occur when the attached SAN fabric is congested and HBA transmit paths become stalled, preventing the HBA firmware from generating the completion for a FC command 7.7.1.7 Hosts
IT20627 S2 When Samsung RI drives are used as quorum disks a drive outage can occur 7.7.1.7 Quorum
IT21383 S2 Heavy I/O may provoke inconsistencies in resource allocation leading to node warmstarts 7.7.1.7 Reliability Availability Serviceability
HU00744 (Reverted in v7.7.1.8) S3 Single node warmstart due to an accounting issue within the cache component 7.7.1.7
HU00763 & HU01237 S3 A node warmstart may occur when a quorum disk is accessed at the same time as the login to that disk is closed 7.7.1.7 Quorum
HU01098 S3 Some older backend controller code levels do not support C2 commands resulting in 1370 entries in the Event Log for every detectmdisk 7.7.1.7 Backend Storage
HU01228 S3 Automatic T3 recovery may fail due to the handling of quorum registration generating duplicate entries 7.7.1.7 Reliability Availability Serviceability
HU01229 S3 The DMP for a 3105 event does not identify the correct problem canister 7.7.1.7
HU01332 S3 Performance monitor and Spectrum Control show zero CPU utilisation for compression 7.7.1.7 System Monitoring
HU01385 S3 A warmstart may occur if a rmvolumecopy or rmrcrelationship command is issued on a volume while I/O is being forwarded to the associated copy 7.7.1.7 HyperSwap
HU01430 S3 Memory resource shortages in systems with 8GB of RAM can lead to node warmstarts 7.7.1.7
HU01457 S3 In a hybrid V7000 cluster where one I/O group supports 10k volumes and another does not some operations on volumes may incorrectly be denied in the GUI 7.7.1.7 Graphical User Interface
HU01466 S3 Stretched cluster and HyperSwap I/O routing does not work properly due to incorrect ALUA data 7.7.1.7 HyperSwap, Hosts
HU01467 S3 Failures in the handling of performance statistics files may lead to missing samples in Spectrum Control and other tools 7.7.1.7 System Monitoring
HU01469 S3 Resource exhaustion in the iSCSI component can result in a node warmstart 7.7.1.7 iSCSI
HU01484 S3 During a RAID array rebuild there may be node warmstarts 7.7.1.7 RAID
HU01582 S3 A compression issue in IP replication can result in a node warmstart. 7.7.1.7 IP Replication
FLASH-21880 S1 HIPER (Highly Pervasive): After both a rebuild read failure and a data reconstruction failure, a SCSI read should fail 7.7.1.6
FLASH-12295 S1 Continuous and repeated loss of AC power on a PSU may, in rare cases, result in the report of a critical temperature fault. Using the provided cable-securing mechanisms is highly recommended to prevent this issue 7.7.1.6 Reliability Availability Serviceability
HU01225 & HU01330 & HU01412 S1 Node warmstarts due to inconsistencies arising from the way cache interacts with compression 7.7.1.6 Compression, Cache
HU01474 S1 Host writes to a read-only secondary volume trigger I/O timeout warmstarts 7.7.1.6 Global Mirror, Global Mirror With Change Volumes, Metro Mirror
HU01479 S1 The handling of drive reseats can sometimes allow I/O to occur before the drive has been correctly failed resulting in offline MDisks 7.7.1.6 Distributed RAID
HU01483 S1 mkdistributedarray command may get stuck in the prepare state. Any interaction with the volumes in that array will result in multiple warmstarts 7.7.1.6 Distributed RAID
HU01500 S1 Node warmstarts can occur when the iSCSI Ethernet MTU is changed 7.7.1.6 iSCSI
HU01371 S2 A remote copy command related to HyperSwap may hang resulting in a warmstart of the config node 7.7.1.6 HyperSwap
HU01480 S2 Under some circumstances the config node does not fail over properly when using IPv6 adversely affecting management access via GUI and CLI 7.7.1.6 Graphical User Interface, Command Line Interface
FLASH-17306 S3 An array with no spare did not report as degraded when a flash module was pulled 7.7.1.6 Reliability Availability Serviceability
FLASH-21857 S3 Internal error found after upgrade 7.7.1.6 System Update
FLASH-22005 S3 Internal error encountered after the enclosure hit an out of memory error 7.7.1.6
FLASH-22143 S3 Improve stats performance to prevent SNMPwalk connection failure 7.7.1.6
HU01322 S3 Due to the way flashcard drives handle self-checking good status is not reported, resulting in 1370 errors in the Event Log 7.7.1.6 System Monitoring
HU01473 S3 Easy Tier migrates an excessive number of cold extents to an overloaded nearline array 7.7.1.6 EasyTier
HU01487 S3 Small increase in read response time for source volumes with incremental FC maps 7.7.1.6 FlashCopy, Global Mirror With Change Volumes
HU01498 S3 GUI may be exposed to CVE-2017-5638 (see Section 3.1) 7.7.1.6
IT18752 S3 When the config node processes an lsdependentvdisks command, issued via the GUI, that has a large number of objects in its parameters, it may warmstart 7.7.1.6 Graphical User Interface
FLASH-21920 S4 CLI and GUI don't get updated with the correct flash module firmware version after flash module replacement 7.7.1.6 Graphical User Interface, Command Line Interface
HU01193 S1 A drive failure whilst an array rebuild is in progress can lead to both nodes in an I/O group warmstarting 7.7.1.5 Distributed RAID
HU01340 S1 A port translation issue between v7.5 or earlier and v7.7.0 or later requires a T2 recovery to complete an upgrade 7.7.1.5 System Update
HU01382 S1 Mishandling of extent migration following a rmarray command can lead to multiple simultaneous node warmstarts with a loss of access 7.7.1.5 Distributed RAID
HU01392 S1 Under certain rare conditions FC mappings not in a consistency group can be added to a special internal consistency group resulting in a T2 recovery 7.7.1.5 FlashCopy
HU01461 S1 Arrays created using SAS attached 2.5 inch or 3.5 inch drives will not be encrypted. For more details refer to the following Flash 7.7.1.5 Encryption
HU01223 S2 The handling of a rebooted node's return to the cluster can occasionally become delayed resulting in a stoppage of inter-cluster relationships 7.7.1.5 Metro Mirror
HU01254 S2 A fluctuation of input AC power can cause a 584 error on a node 7.7.1.5 Reliability Availability Serviceability
HU01409 S2 Cisco Nexus 3000 switches at v5.0(3) have a defect which prevents a config node IP address changing in the event of a fail over 7.7.1.5 Reliability Availability Serviceability
HU01410 S2 An issue in the handling of FlashCopy map preparation can cause both nodes in an I/O group to be put into service state 7.7.1.5 FlashCopy
IT14917 S2 Node warmstarts due to a timing window in the cache component 7.7.1.5 Cache
HU00831 S3 Single node warmstart due to hung I/O caused by cache deadlock 7.7.1.5 Cache
HU01022 S3 Fibre channel adapter encountered a bit parity error resulting in a node warmstart 7.7.1.5 Hosts
HU01269 S3 A rare timing conflict between two processes may lead to a node warmstart 7.7.1.5
HU01399 S3 For certain config nodes the CLI Help commands may not work 7.7.1.5 Command Line Interface
HU01405 S3 SSD drives with vendor ID IBM-C050 and IBM-C051 are showing up as not being supported 7.7.1.5
HU01432 S3 Single node warmstart due to an accounting issue within the cache component 7.7.1.5 Cache
IT18086 S3 When a vdisk is moved between I/O groups a node may warmstart 7.7.1.5
HU01783 S1 Replacing a failed drive in a DRAID array, with a smaller drive, may result in multiple T2 recoveries putting all nodes in service state with error 564 and/or 550 7.7.1.4 Distributed RAID
HU01347 S2 During an upgrade to v7.7.1 a deadlock in node communications can occur leading to a timeout and node warmstarts 7.7.1.4 Thin Provisioning
HU01379 S2 Resource leak in the handling of Read Intensive drives leads to offline volumes 7.7.1.4
HU01381 S2 A rare timing issue in FlashCopy may lead to a node warmstarting repeatedly and then entering a service state 7.7.1.4 FlashCopy
HU01400 S2 When upgrading a V9000 with external 12F SAS enclosures to 7.7.1 the upgrade will stall 7.7.1.4 System Update
HU01247 S3 When a FlashCopy consistency group is stopped more than once in rapid succession a node warmstart may result 7.7.1.4 Graphical User Interface
HU01323 S3 Systems using Volume Mirroring that upgrade to v7.7.1.x and have a storage pool go offline may experience a node warmstart 7.7.1.4 Volume Mirroring
HU01374 S3 Where an issue with Global Mirror causes excessive I/O delay, a timeout may not function, resulting in a node warmstart 7.7.1.4 Global Mirror
HU01226 S2 Changing max replication delay from the default to a small non-zero number can cause hung IOs leading to multiple node warmstarts and a loss of access 7.7.1.3 Global Mirror
HU01257 S2 Large (>1MB) write IOs to volumes can lead to a hung I/O condition resulting in node warmstarts 7.7.1.3
HU01386 S2 Where latency between sites is greater than 1ms host write latency can be adversely impacted. This may be more likely in the presence of large I/O transfer sizes or high IOPS 7.7.1.3 HyperSwap
HU01017 S3 The result of CLI commands are sometimes not promptly presented in the GUI 7.7.1.3 Graphical User Interface
HU01227 S3 High volumes of events may cause the email notifications to become stalled 7.7.1.3 System Monitoring
HU01234 S3 After upgrade to 7.6 or later iSCSI hosts may incorrectly be shown as offline in the CLI 7.7.1.3 iSCSI
HU01292 S3 Under some circumstances the re-calculation of grains to clean can take too long after a FlashCopy done event has been sent resulting in a node warmstart 7.7.1.3 FlashCopy
IT17102 S3 Where the maximum number of I/O requests for a FC port has been exceeded, if a SCSI command, with an unsupported opcode, is received from a host then the node may warmstart 7.7.1.3
HU01272 S1 Replacing a hard disk drive in an enclosure with a DRAID array can result in a T2 warmstart leading to a temporary loss of host access 7.7.1.2 Distributed RAID
HU01208 S1 After upgrading to v7.7 and later from v7.5 and earlier, and then creating a DRAID array with a node reset, the system may encounter repeated node warmstarts which will require a T3 recovery 7.7.1.1 Distributed RAID
FLASH-17809 S1 Single node warmstart when there are more than 8 enclosures 7.7.1.1
FLASH-18046 S1 Error 509 when enclosure powered up 7.7.1.1
FLASH-18609 S1 V9000 AE2 systems not sending Heartbeat to Service Center immediately after CCL 7.7.1.1
HU00271 S2 An extremely rare timing window condition in the way GM handles write sequencing may cause multiple node warmstarts 7.7.1.1 Global Mirror
HU00734 S2 Multiple node warmstarts due to deadlock condition during RAID group rebuild 7.7.1.1
HU01109 S2 Multiple nodes can experience a lease expiry when a FC port is having communications issues 7.7.1.1
HU01140 S2 Easy Tier may unbalance the workloads on MDisks using specific Nearline SAS drives due to incorrect thresholds for their performance 7.7.1.1 EasyTier
HU01141 S2 Node warmstart, possibly due to a network problem, when a CLI mkippartnership is issued. This may lead to loss of the config node, requiring a T2 recovery 7.7.1.1 IP Replication
HU01180 S2 When creating a snapshot on an ESX host using VVols a T2 may occur 7.7.1.1 Hosts, VVols
HU01182 S2 Node warmstart due to 16Gb HBA firmware receiving an invalid SCSI TUR command 7.7.1.1
HU01184 S2 When removing multiple MDisks a T2 may occur 7.7.1.1
HU01185 S2 iSCSI target closes connection when there is a mismatch in sequence number 7.7.1.1 iSCSI
HU01189 S2 Improvement to DRAID dependency calculation when handling multiple drive failures 7.7.1.1 Distributed RAID
HU01221 S2 Node warmstarts due to an issue with the state machine transition in 16Gb HBA firmware 7.7.1.1
HU01250 S2 When using lsvdisklba to find a bad block on a compressed volume the vdisk can go offline 7.7.1.1 Compression
HU01516 S2 When node configuration data exceeds 8K in size some user defined settings may not be stored permanently resulting in node warmstarts 7.7.1.1 Reliability Availability Serviceability
IT16148 S2 When accelerate mode is enabled, due to the way promote/swap plans are prioritized over demote, Easy Tier is only demoting 1 extent every 5 minutes 7.7.1.1 EasyTier
IT16337 S2 Hardware offloading in 16G FC adapters has introduced a deadlock condition that causes many driver commands to time out leading to a node warmstart 7.7.1.1
FLASH-19414 S2 V9000 upgrade to 7.6.0.3 stalled at 64% 7.7.1.1 System Update
HU01050 S3 DRAID rebuild incorrectly reports event code 988300 7.7.1.1 Distributed RAID
HU01063 S3 3PAR controllers do not support OTUR commands resulting in device port exclusions 7.7.1.1 Backend Storage
HU01074 S3 An unresponsive testemail command (possibly due to a congested network) may result in a single node warmstart 7.7.1.1
HU01143 S3 Where nodes are missing config files some services will be prevented from starting 7.7.1.1
HU01155 S3 When a lsvdisklba or lsmdisklba command is invoked, for an MDisk with a back end issue, a node warmstart may occur 7.7.1.1 Compression
HU01187 S3 Circumstances can arise where more than one array rebuild operation can share the same CPU core resulting in extended completion times 7.7.1.1
HU01194 S3 A single node warmstart may occur if CLI commands are received from the VASA provider in very rapid succession. This is caused by a deadlock condition which prevents the subsequent CLI command from completing 7.7.1.1 VVols
HU01198 S3 Running the Comprestimator svctask analyzevdiskbysystem command may cause the config node to warmstart 7.7.1.1 Comprestimator
HU01212 S3 GUI displays an incorrect timezone description for Moscow 7.7.1.1 Graphical User Interface
HU01214 S3 GUI and snap missing EasyTier heat map information 7.7.1.1 Support Data Collection
HU01219 S3 Single node warmstart due to an issue in the handling of ECC errors within 16G HBA firmware 7.7.1.1
HU01244 S3 When a node is transitioning from offline to online it is possible for excessive CPU time to be used on another node in the cluster which may lead to a single node warmstart 7.7.1.1
FLASH-18109 S3 Persistent 780 Battery Failed message 7.7.1.1
FLASH-19098 S3 V9000 AE2 systems show email_state is disabled after CCL 7.7.1.1 System Update
FLASH-19274 S3 lsdrive output shows incorrect firmware version after upgrade 7.7.1.1
FLASH-19273 S4 After flash module replacement and boot upgrade, lsdrive output shows incorrect firmware version 7.7.1.1 System Update

4. Supported upgrade paths

Please refer to the Concurrent Compatibility and Code Cross Reference for Spectrum Virtualize page for guidance when planning a system upgrade.

5. Useful Links

Description Link
Support Website IBM Knowledge Center
IBM FlashSystem Fix Central V9000
Updating the system IBM Knowledge Center
IBM Redbooks Redbooks
Contacts IBM Planetwide