Power10 System Firmware Fix History - Release levels ML10xx

Firmware Description and History

ML1020
For Impact, Severity, and other firmware definitions, please refer to the 'Glossary of firmware terms' page at the URL below:
https://www.ibm.com/support/pages/node/6555136
ML1020_085_079 / FW1020.10

09/23/22
Impact: Availability    Severity:  SPE

New Features and Functions
  • Support was added to the eBMC ASMI "Resource management -> System parameters->Aggressive prefetch" for Prefetch settings to enable or disable an alternate configuration of the processor core/nest to favor more aggressive prefetching behavior for the cache.  "Aggressive prefetch" is disabled by default and a change to enable it must be done at service processor standby.  The default behavior of the system ("Aggressive prefetch" disabled) will not change in any way with this new feature.  The customer will need to power off and enable "Aggressive prefetch" to get the new behavior.  Only change the "Aggressive prefetch" value if instructed by support or if recommended by a solution vendor as it might cause degraded system performance.
  • DEFERRED:  Support was added to the eBMC ASMI "Resource management->System parameters" for an option to set a Frequency cap.  When enabled, the cap prevents all processors in the system from exceeding the specified maximum operating frequency (given in MHz).
  • Support was added for a processor socket power capping control to prevent a trip of the Voltage Regulator Module (VRM) and system shutdown when running HPC workloads with Matrix Math Accelerator (MMA) enabled.  The On-Chip Controller (OCC) socket Vdd power cap control loop monitors Vdd power based on APSS readings to keep Vdd power from reaching the VRM slow trip limit.   The OCC will adjust the processor frequency to reduce power as needed, but will restore the frequency to normal when it is safe to do so.
  • Support was added for parsing On-Chip Controller (OCC) BC8A2Axx SRC information for the eBMC ASMI Event logs.
  • Support was added to the eBMC ASMI for a search option for the assemblies section on the inventory page.
System firmware changes that affect all systems
  • DEFERRED: A problem was fixed to clear the "deconfigured by error ID" property for a re-enabled Field Core Override (FCO) core that is fully functional and being used by the system.  This can happen if the system boots to runtime with FCO enabled such that one or more cores were disabled to achieve the FCO cap, and one of the enabled cores is then guarded at runtime.  On a subsequent memory preserving IPL (MPIPL), a different core (disabled on the previous boot) may be brought back online to meet the FCO number, but it will still have the "deconfigured by error ID" property set, incorrectly indicating it is deconfigured by FCO.
  • DEFERRED: A problem was fixed for the eBMC ASMI "PCIe Hardware topology" information not being updated when a PCIe expansion drawer firmware update occurs or a type/model/serial number change is done.  The location codes for the PCIe expansion drawer FRUs and/or PCIe expansion drawer firmware version may not be correct.  The problem occurs when a PCIe expansion drawer change is done more than once to a given drawer but only the first change is shown.
  • DEFERRED: A problem was fixed for a PCIe switch being recovered instead of a port for a port error.  Since the switch is getting recovered instead of the port, all the other adapters under the switch are reset for the recovery action (and have a functional loss for a brief moment), instead of the lone adapter associated with the port.  Any downstream port level errors under the switch can trigger switch reset instead of port level reset.  After switch recovery, all the adapters under the switch will be operational.
  • A problem was fixed for a cable card port identify indicator that cannot be correctly displayed or modified from an OS following a concurrent cable card repair operation.  As a workaround, the cable card port identify can be done from the HMC or the eBMC ASMI.
  • A problem was fixed for a concurrent exchange of a PCIe expansion drawer Midplane with PCIe expansion drawer slots owned by an active partition that fails at the Set Service Lock step.  This fails every time the concurrent exchange is attempted.
  • A problem was fixed for a rare system hang that can happen any time Dynamic Platform Optimization (DPO), memory guard recovery, or memory mirroring defragmentation occurs for a dedicated processor partition running in Power9 or Power10 processor compatibility mode. This does not affect partitions in Power9_base or older processor compatibility modes. If the partition has the "Processor Sharing" setting set to "Always Allow" or "Allow when partition is active", it may be more likely to encounter this than if the setting is set to "Never allow" or "Allow when partition is inactive".
    This problem can be avoided by not using DPO or using Power9_base processor compatibility mode for dedicated processor partitions. This can also be avoided by changing all dedicated processor partitions to use shared processors.
  • A problem was fixed for a partition with VPMEM failing to activate after a system IPL with SRC B2001230 logged for a "HypervisorDisallowsIPL" condition.  This problem is very rare and is triggered by the partition's hardware page table (HPT) being too big to fit into a contiguous space in memory.  As a workaround, the problem can be averted by reducing the memory needed for the HPT.  For example, if the system memory is mirrored, the HPT size is doubled, so turning off mirroring is one option to save space.  Or the size of the VPMEM LUN could be reduced.  The goal of these options would be to free up enough contiguous blocks of memory to fit the partition's HPT size.
  • A problem was fixed for an SR-IOV adapter in shared mode failing on an IPL with SRC B2006002 logged.  This is an infrequent error caused by a different SR-IOV adapter than expected being associated with the slot because of the same memory buffer being used by two SR-IOV adapters.  The failed SR-IOV adapter can be powered on again and it should boot correctly.
  • A problem was fixed for a processor core being incorrectly predictively deconfigured with SRC BC13E504 logged.  This is an infrequent error triggered by a cache line delete fail for the core with error log "Signature": "EQ_L2_FIR[0]: L2 Cache Read CE, Line Delete Failed".
  • A problem was fixed for the hypervisor to detect when it was missing Platform Descriptor Records (PDRs) from Hostboot and to log an SRC A7001159 for this condition.  The PDRs can be missing if the eBMC Platform Level Data Model (PLDM) failed and restarted during the IPL prior to the exchange of the PDRs with the Hypervisor.
    With the PDRs missing from the Hypervisor, the user would be unable to manage FRUs (such as LED control and slot concurrent maintenance).  A power off and power on of the system would recover from the problem.
  • A problem was fixed for register MMCRA bit 63 (Random Sampling Enable) being lost after a partition thread going into a power save state, causing performance tools that use the performance monitor facility to possibly collect incorrect data for an idle partition.
  • A problem was fixed for the SMS menu option "I/O Device Information".  When using a partition's SMS menu option "I/O Device Information" to list devices under a physical or virtual Fibre Channel adapter, the list may be missing or entries in the list may be confusing. If the list does not display, the following message is displayed:
    "No SAN adapters present.  Press any key to continue".
    An example of a confusing entry in a list follows:
    "Pathname: /vdevice/vfc-client@30000004
    WorldWidePortName: 0123456789012345
     1.  500173805d0c0110,0                 Unrecognized device type: c"
  • A problem was fixed for booting an OS using iSCSI from SMS menus that fails with a BA010013 information log.  This failure is intermittent and infrequent.  If the contents of the BA010013 are inspected, the following messages can be seen embedded within the log:
    " iscsi_read: getISCSIpacket returned ERROR"
    " updateSN: Old iSCSI Reply - target_tag, exp_tag"
  • A problem was fixed for an adapter port link not coming up after the port connection speed was set to "auto".  This can happen if the speed had been changed to a supported but invalid value for the adapter hardware prior to changing the speed to "auto".  A workaround to this problem is to disable and enable the switch port.
  • A problem was fixed for possible incorrect system fan speeds that can occur when an NVMe drive is pulled when the system is running.  This can occur if the pulled device is hot (over 58 C in temperature) or has a broken temperature sensor connection.  For these cases, the system fan control will either leave the fans running at high speed or keep increasing fans to the maximum speed.  If this problem occurs, it can be corrected by a reboot of the eBMC service processor.
  • A problem was fixed to remove an unneeded message "Power restore policy can not be changed while in manual operating mode" that occurs when viewing the eBMC ASMI "Power Restore Policy" in normal mode.  This message should only be shown when in manual operating mode.
  • A problem was fixed for timestamps for eBMC sensor values showing the wrong time and day when viewed by telemetry reports such as Redfish "MetricReport".  The timestamp can be converted to actual time and day by adding an epoch offset of 1970-1-1 to the timestamp value.
  • A problem was fixed for an empty NVMe slot being reported as an "Unrecognized FRU" even though it is shown as functional by the OS.
  • A problem was fixed for the eBMC ASMI PCIe Topology page showing the width of empty slots as "-1".  With the fix, the width of an empty slot displays as "unknown".
  • A problem was fixed for a false error message "Error resetting link" from the eBMC ASMI PCIe Topology page when setting an Identify LED for a PCIe slot.  The LED functions correctly for the operation but an error message is observed.
  • A problem was fixed for the eBMC ASMI "Operations->Host console" to show the correct connection status.  The status was not being updated as needed so it could show "Disconnected" even though the connection was active.
  • A problem was fixed on the eBMC ASMI "Operations->Firmware" page to prevent an early task completed message when switching running and backup images.  The early completion message does not cause an error in switching the firmware levels.
  • A problem was fixed on the eBMC ASMI "Resource management -> Memory -> System memory page setup" to prevent an invalid large value from being specified for "Requested huge page memory".  Without the fix, the out of range value higher than the maximum is accepted which can cause errors when allocating the memory for the partitions.
  • A problem was fixed on the eBMC ASMI Overview page to show the correct status of disabled for a Service Account that has been disabled. The User Management page, however, shows the correct status for Service Account and it is disabled in the eBMC.  This happens every time a Service Account is disabled.
  • A problem was fixed on the eBMC ASMI Overview page for the Server information "Asset tag" to show the correct updated "Asset tag" value after doing an edit of the tag and then a refresh of the page.  Without the fix, the old value is shown even though the change was successful.
  • A problem was fixed on the eBMC ASMI Overview->Firmware page where the Update firmware "Manage access keys" link is incorrectly disabled when the system is powered on.  This prevents the user from accessing the Capacity on demand (COD) page.  This traversal path works if the system is powered off.  The Firmware page is reached from the Overview page by going to the Firmware information frame and clicking on "View More".  Alternatively, the COD page can be reached using the side navigation bar with the "Resource management ->Capacity on demand" link as this works for the case where the system is powered on.
  • A problem was fixed for the eBMC ASMI "Settings->Power restore policy" to make it default to "Last state".  The current default is "Always off".  If power is lost to the system, it can be manually powered back on.  Or the user can configure the "Power restore policy" to the desired value.
  • A problem was fixed for the eBMC ASMI Deconfiguration records not having the associated event log ID (PEL ID) that caused the deconfiguration of the hardware.  This occurs any time hardware is deconfigured and an ASMI Deconfiguration record is created.
  • A problem was fixed for the eBMC ASMI PCIe Topology page not having the NVME adapter/slot listed correctly.  As a workaround, the PCIe Topology information can be read from the HMC PCIe Topology view to get the NVME adapter/slot.
  • A problem was fixed for a short loss or dip in input power to a power supply causing SRC 110015F1 to be logged with message "The power supply detected a fault condition, see AdditionalData for further details."  The running system is not affected by this error.  This Unrecoverable Error (UE) SRC should not be logged for a very short power outage.   Ignore the error log if all power supplies have recovered.
  • A problem was fixed for a 110000AC SRC being logged for a false brownout condition after a faulted power supply is removed.  This problem occurs if the eBMC incorrectly categorizes the number of power supplies present, missing, and faulted to determine whether a brownout has occurred.  The System Attention LED may be lit if this problem occurs and it can be turned off using the HMC.
  • A problem was fixed for an eBMC dump being generated during a side switch IPL.  The side switch IPL is successful and no error log is reported.  This occurs on every side switch IPL.  For this situation, the eBMC dump can be ignored.
  • A problem was fixed for the eBMC falsely detecting an incorrect number of On-Chip Controllers (OCCs) during an IPL with SRC BD8D2681 logged.  This is a random and infrequent error on an IPL that recovers automatically with no impact to the system.
  • A problem was fixed for eBMC ASMI Hardware deconfiguration records for DIMM and Core hardware being incorrectly displayed after a Factory reset "Reset server settings only".  The deconfiguration records existing prior to this type of Factory reset will be displayed in ASMI after the factory reset but they are actually cleared in the system. A full factory reset using Factory reset "Reset BMC and server settings" does clear any existing deconfiguration records from ASMI.
  • A problem was fixed for eBMC ASMI failing to set a static IP address when switching from DHCP to static IP in the eBMC network configuration.  This occurs if the static IP selected is the same as the one that was used by DHCP.  This problem can be averted by disabling DHCP prior to assigning the static IP address.
  • A problem was fixed for the eBMC ASMI "Settings->Power restore policy" of  "Last state" where the system failed to power back on after an AC outage.  This can happen if the last IPL to the host run time state was a reboot by hostboot firmware for an SBE update, or if the last IPL was a warm reboot.
  • A problem was fixed for the eBMC ASMI Real time indicators for special characters being displayed that should have been suppressed.  This problem is intermittent but fairly frequent.  The special characters can be ignored.
  • A problem was fixed for the eBMC ASMI "Operations->System power operations-> Server power policy" of Automatic to correct the text describing this feature.  It was changed from "System automatic power off" to " With this setting, when the system is not partitioned, the behavior is the same as 'Power off', and when the system is partitioned, the behavior of the system is the same as 'Stay on'".
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" PCIe link type field which had some PCIe adapter slots showing as primary when they should be secondary.  The PCIe adapter switch slots are secondary buses, so these should be displayed as "Secondary" on the Link properties type.
  • A performance problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page to reduce the amount of time the page takes to load.  The fix reduces internal calls by half for the loading process for each PCIe adapter in the system, so the improvement time is more for the larger systems.
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page for missing information for the NVMe drive associated with an NVMe slot.  The drive in the slot is required to populate attributes like link speed, but these are empty when the problem causes the drive to not be found.  This is an ASMI display problem only for the PCIe topology screen as the NVMe drive is functional in the system.
  • A problem was fixed for a request to generate a resource dump that has missing parameters causing an eBMC bmcweb core dump.
  • A problem was fixed for extra logging of SRC BD56100A if the LCD panel is unplugged during an IPL.  The LCD panel supports install and removal while the system is running, so any SRCs logged for this should be minimal, but there were many when this was done during the IPL.
  • A problem was fixed for the eBMC ASMI "Hardware status->PCIe Hardware topology" page not updating the link status to "Unknown" or "Failed" when it has failed for a PCIe adapter.  The link continues to show as operational.  The HMC PCIe Topology view can be used to show the correct status of the link.
  • A problem was fixed for an eBMC SRC BD602803 not referencing a temperature issue as a cause for the SRC.  There is a missing message and callout for an over temperature fault.  With the fix, the OVERTMP symbolic FRU is called out for the parent FRU of the temperature sensor.
  • A problem was fixed for an eBMC dump created on a hot plug or unplug of an NVMe drive.  The dump should not be created for this situation and can be ignored or deleted.
  • A problem was fixed for the eBMC ASMI "Deconfiguration records" page option to "download additional data" that creates a file in a non-human readable format.  A workaround for the problem would be to go to the eBMC ASMI "Event logs" page using the SRC code that caused the hardware to be deconfigured and then download the event log details from there.
  • A problem was fixed for not being able to control the LEDs on the CXP ports of the cable cards.  This can affect concurrent maintenance as well as alerting for faults.
  • A problem was fixed for recovery from USB firmware update failures.  A failure in the USB update was causing an incomplete second try where an eBMC reboot was needed to ensure the code update retry worked properly.
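The epoch-offset workaround described in the Redfish "MetricReport" timestamp fix above can be sketched as follows.  This is a minimal illustration, not eBMC code: it assumes the raw sensor timestamp is a count of seconds that is simply missing the 1970-01-01 UTC epoch base, and the function name is hypothetical.

```python
from datetime import datetime, timezone

def metric_report_time(raw_seconds: float) -> datetime:
    """Interpret a raw MetricReport timestamp as an offset in seconds
    from the 1970-01-01 UTC epoch, per the workaround in the fix text.
    (The seconds unit is an assumption for illustration.)"""
    # fromtimestamp() adds the 1970-01-01 epoch base for us.
    return datetime.fromtimestamp(raw_seconds, tz=timezone.utc)
```

For example, a raw value of 0 maps to 1970-01-01 00:00:00 UTC, and each additional 86400 seconds advances the result by one day.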
System firmware changes that affect certain systems
  • DEFERRED: On systems with AIX or Linux partitions, a problem was fixed for certain I/O slots that have an incorrect description in the output from the lspci and lsslot commands in AIX and Linux operating systems.  This occurs anytime one of the affected slots is assigned to an AIX or Linux partition.  
    The following slots are affected:
      "Combination slots" (those that are PCI gen 4 x16 connector with x16 lanes connected OR PCI gen5 x16 connector with 8 lanes connected).
     P0-C0
     P0-C8
     P0-C4
     P0-C10
  • For HMC managed systems, a problem was fixed for read-only fields on the eBMC ASMI Memory Resource Management page (Logical Memory block size, System Memory size, I/O adapter enlarged capacity, and Active Memory Mirroring) being editable in the GUI when the system is powered off.  Any changes made in this manner would not be synchronized to the HMC (so the system would still use the HMC settings).  To correct this problem, the Memory page settings should be changed on the HMC.
  • For a system that is managed by an HMC, a problem was fixed for the eBMC ASMI "Operations->Server power operations" page showing AIX/Linux partition boot mode and IBM i partition boot options which are not applicable to a HMC managed system.
ML1020_079_079 / FW1020.00

07/22/22
Impact: NEW    Severity:  NEW

GA Level with key features listed below

New Features and Functions

  • This server firmware includes the SR-IOV adapter firmware level xx.32.1010 for the following Feature Codes and CCINs: #EC2R/EC2S with CCIN 58FA; #EC2T/EC2U with CCIN 58FB; and #EC66/EC67 with CCIN 2CF3.
  • Support for the new eBMC service processor that replaces the FSP service processor used on other Power systems.
  • Support for VIOS 3.1.3 (based on AIX 7.2 TL5 (AIX 72X)) on POWER10 servers.
  • Support was added for a BMC ASMI "Operations->Resource management -> Lateral cast out control" option to disable or enable the system Lateral Cast-Out function (LCO).  LCO is enabled by default and a change to disable it must be done at service processor standby.  POWER processor chips since POWER7 have a feature called "Lateral Cast-Out" (LCO), enabled by default, where the contents of data cast out of one core's L3 can be written into another core's L3.  Then if a core has a cache miss on its own L3, it can often find the needed data block in another local core's L3.  This has the useful effect of slightly increasing the length of time that a storage block gets to stay in a chip's cache, providing a performance boost for most applications.  However, for some applications such as SAP HANA, the performance can be better if LCO is disabled.  More information on how LCO is being configured by SAP HANA can be found in the SAP HANA on Power Advanced Operation Guide manual that can be accessed using the following link:
    http://ibm.biz/sap-linux-power-library
    Follow the "SAP HANA Operation" link on this page to the "SAP HANA Operation Guides" folder.  In this folder, locate the updated "SAP_HANA_on_Power_Advanced_Operation_Guide" manual that has a new topic added of "Manage IBM Power Lateral Cast Out settings" which provides the additional information.
    The default behavior of the system (LCO enabled) will not change in any way by this new feature.  The customer will need to power off and disable LCO in ASMI to get the new behavior.
  • Support was added for Secure Boot for SUSE Linux Enterprise Server (SLES) partitions.  The SUSE Linux level must be SLES 15 SP4 or later.  Without this feature, partitions with SLES 15 SP4 or later and which have the OS Secure Boot partition property set to "Enabled and Enforced" will fail to boot.  A workaround to this is to change the partition's Secure Boot setting in the HMC partition configuration to "Disabled" or "Enabled and Log only".
  • HIPER/Pervasive: For systems with Power Linux partitions, support was added for a new Linux secure boot key.  The support for the new secure boot key for Linux partitions may cause secure boot for Linux to fail if the Linux OS for SUSE or RHEL distributions does not have a secure boot key update. 
    The affected Linux distributions, which need a Linux fix level that includes "Key for secure boot signing grub2 builds ppc64le", are as follows:
    1) SLES 15 SP4 - The GA for this Linux level includes the secure boot fix.
    2) RHEL 8.5 - This Linux level has no fix.  The user must update to RHEL 8.6 or RHEL 9.0.
    3) RHEL 8.6
    4) RHEL 9.0. 
    The update to a Linux level that supports the new secure boot key also addresses the following security issues in Linux GRUB2 and are the reasons that the change in secure boot key is needed as documented in the following six CVEs:
    1) CVE-2021-3695
    2) CVE-2022-28733
    3) CVE-2022-28734
    4) CVE-2022-28735
    5) CVE-2022-28736
    6) CVE-2022-28737
    Please note that when this firmware level of FW1020.00 is installed, any Linux OS not updated to a secure boot fix level will fail to secure boot.  And any Linux OS partition updated to a fix level for secure boot requires a minimum firmware level of FW1010.30 or later, or FW1020.00 or later to be able to do a secure boot.  If lesser firmware levels are active but the Linux fix levels for secure boot are loaded for the Linux partition, the secure boot failure that occurs will have BA540010 logged.  If secure boot verification is enabled, but not enforced (log only mode), then the fixed Linux partition will boot, but a BA540020 informational error will be logged.
  • Support for Active Memory Mirroring (AMM) for the PowerVM hypervisor.  This is an option that mirrors the main memory used by the firmware. With this option, an uncorrectable error resulting from failure of main memory used by system firmware will not cause a system-wide outage. This option efficiently guards against system-wide outages due to any such uncorrectable error associated with firmware. With this option, uncorrectable errors in data owned by a partition or application will be handled by the existing Special Uncorrectable Error Handling methods in the hardware, firmware, and OS.  This is a separately priced option that is ordered with feature code #EM8G and is defaulted to off.
  • Support for humidity sensor on the operator panel.
  • Support has been dropped for Active Memory Sharing (AMS) on POWER10 servers.
  • Support has been dropped for the smaller logical-memory block (LMB) sizes of 16MB, 32MB, and 64MB.  128MB and 256MB are the only LMB sizes that can be selected in the BMC ASMI.
  • System fan speed control was enhanced to support the reading of I/O processor temperatures by the On-Chip Controller (OCC) and passing it to the BMC for fan control.  Monitoring the IO temperatures in addition to processor core temperatures allows the system to increase fan speeds accordingly based on chip requirements.
  • Support was added for a new service processor command that can be used to 'lock' the power management mode, such that the mode can not be changed except by doing a factory reset.
  • Support for firmware update of the physical Trusted Platform Module (pTPM) from the PowerVM hypervisor.
  • Support for PowerVM enablement of Virtual Trusted Platform Module (vTPM) 2.0.
  • Support for Remote restart for vTPM 2.0 enabled partitions.  Remote restart is not supported for vTPM 1.2 enabled partitions.
  • TPM firmware upgraded to Nuvoton 7.2.3.0.  This allows Live Partition Mobility (LPM) migrations from systems running FW920/FW930 and older service pack levels of FW940/FW950 to FW1010.10 and later levels, and FW1020.00 and later.
  • Support vNIC and Hybrid Network Virtualization (HNV) system configurations in Live Partition Mobility (LPM) migrations to and from FW1020 systems.
  • Support for Live Partition Mobility (LPM) to allow LPM migrations when virtual optical devices are configured for a source partition.  LPM automatically removes virtual optical devices as part of the LPM process.  Without this enhancement, LPM is blocked if virtual optical devices are configured.
  • Support for Live Partition Mobility (LPM) to select the fastest network connection for data transfer between Mover Service Partitions (MSPs).  The configured network capacity of the adapters is used as the metric to determine what may provide the fastest connection.  The MSP is the term used to designate the Virtual I/O Server that is chosen to transmit the partition's memory contents between source and target servers.
  • Support for PowerVM for an AIX Update Access Key (UAK) for AIX 7.2.  Interfaces are provided that validate the OS image date against the AIX UAK expiration date.  Informational messages are generated when the release date for the AIX operating system has passed the expiration date of the AIX UAK during normal operation. Additionally, the server periodically checks and informs the administrator about AIX UAKs that are about to expire, AIX UAKs that have expired, or AIX UAKs that are missing. It is recommended that you replace the AIX UAK within 30 days prior to expiration.
    For more information, please refer to the Q&A document for "Management of AIX Update Access Keys" at
    https://www.ibm.com/support/pages/node/6480845.
  • Support for LPAR Radix PageTable mode in PowerVM.
  • Support for PowerVM encrypted NVRAM that enables encryption of all partition NVRAM data and partition configuration information.
  • Added information to #EXM0 PCIe3 Expansion Drawer error logs that will be helpful when analyzing problems.
  • Support to add OMI Connected Memory Buffer Chip (OCMB ) related information into the HOSTBOOT and HW system dumps.
  • Support for a PCIe4 x16 to CXP Converter card for the attachment of two active optical cables (AOC) to be used for external storage and PCIe fan-out attachment to the PCIe expansion drawers.  This cable card has Feature Code #EJ24 with CCIN 6B53 and Feature code #EJ2A. 
    #EJ24 pertains only to models S1022 (9105-22A) , S1022S (9105-22B), and L1022  (9786-22H).
    #EJ2A pertains only to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the IBM 4769 PCIe3 Cryptographic Coprocessor hardware security module (HSM).  This HSM has Feature Code #EJ37 with CCIN C0AF.  Its predecessors are the IBM 4768, IBM 4767, and IBM 4765
  • Support for booting IBM i from a PCIe4 LP 32Gb 2-port Optical Fibre Channel Adapter with Feature Code #EN1K.  This pertains only to models S1022 (9105-22A), S1022S (9105-22B), and L1022  (9786-22H).
  • Support for new PCIe 4.0 x8 dual-port 32 Gb optical Fibre Channel (FC) short form adapter based on the Marvell QLE2772 PCIe host bus adapter (6.6 inches x 2.731 inches). The adapter provides two ports of 32 Gb FC capability using SR optics. Each port can provide up to 6,400 MBps bandwidth. This adapter has feature codes #EN1J/#EN1K with CCIN 579C. 
  • Support for new PCIe 3.0 16 Gb quad-port optical Fibre Channel (FC) x8 short form adapter based on the Marvell QLE2694L PCIe host bus adapter (6.6 inches x 2.371 inches). The adapter provides four ports of 16 Gb FC capability using SR optics. Each port can provide up to 3,200 MBps bandwidth. This adapter has feature codes #EN1E/#EN1F with CCIN 579A.
  • Support for the 800 GB SSD PCIe4 NVMe U.2 module for IBM i with feature code #ES3A and CCIN 5B53.   Feature #ES3A indicates usage by IBM i in which the SSD is formatted in 4160 byte sectors and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 1.6 TB SSD PCIe4 NVMe U.2 module for AIX/Linux and IBM i with feature codes #ES3B/#ES3C and CCIN 5B52.    Feature #ES3B indicates usage by AIX, Linux or VIOS in which the SSD is formatted in 4096 byte sectors. Feature #ES3C indicates usage by IBM i in which the SSD is formatted in 4160 byte sectors and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 3.2 TB SSD PCIe4 NVMe U.2 module for AIX/Linux and IBM i with feature codes #ES3D/#ES3E and CCIN 5B51.    Feature #ES3D indicates usage by AIX, Linux or VIOS in which the SSD is formatted in 4096 byte sectors. Feature #ES3E indicates usage by IBM i in which the SSD is formatted in 4160 byte sectors and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 6.4 TB SSD PCIe4 NVMe U.2 module for AIX/Linux and IBM i with feature codes #ES3F/#ES3G and CCIN 5B50.    Feature #ES3F indicates usage by AIX, Linux or VIOS in which the SSD is formatted in 4096 byte sectors. Feature #ES3G indicates usage by IBM i in which the SSD is formatted in 4160 byte sectors and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 931GB SAS 4k 2.5 inch SFF-2 SSD for AIX/Linux and IBM i with feature codes #ESMB/#ESMD and CCIN 5B29.    Feature #ESMB indicates usage by AIX, Linux, or VIOS.   Feature #ESMD indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 1.86 TB SAS 4k 2.5 inch SFF-2 SSD for AIX/Linux and IBM i with feature codes #ESMF/#ESMH and CCIN 5B21.    Feature #ESMF indicates usage by AIX, Linux, or VIOS.   Feature #ESMH indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 3.72 TB SAS 4k 2.5 inch SFF-2 SSD for AIX/Linux and IBM i with feature codes #ESMK/#ESMS and CCIN 5B2D.    Feature #ESMK indicates usage by AIX, Linux, or VIOS.   Feature #ESMS indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 7.44 TB SAS 4k 2.5 inch SFF-2 SSD for AIX/Linux and IBM i with feature codes #ESMV/#ESMX and CCIN 5B2F.    Feature #ESMV indicates usage by AIX, Linux, or VIOS.   Feature #ESMX indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 387GB SAS SFF-2 SSD formatted with 5xx (528) byte sectors for AIX/Linux with feature code #ETK1 and CCIN 5B16.  Feature #ETK1 indicates usage by AIX, Linux, or VIOS.
  • Support for the 775GB SAS SFF-2 SSD formatted with 5xx (528) byte sectors for AIX/Linux with feature code #ETK3 and CCIN 5B17.  Feature #ETK3 indicates usage by AIX, Linux, or VIOS.
  • Support for the 387GB SAS SFF-2 SSD formatted with 4k (4224) byte sectors for AIX/Linux and IBM i with feature codes #ETK8/#ETK9 and CCIN 5B10.    Feature #ETK8 indicates usage by AIX, Linux, or VIOS.  Feature #ETK9 indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 775GB SAS SFF-2 SSD formatted with 4k (4224) byte sectors for AIX/Linux and IBM i with feature codes #ETKC/#ETKD and CCIN 5B11.    Feature #ETKC indicates usage by AIX, Linux, or VIOS.   Feature #ETKD indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for the 1.55TB SAS SFF-2 SSD formatted with 4k (4224) byte sectors for AIX/Linux and IBM i with feature codes #ETKG/#ETKH and CCIN 5B12.    Feature #ETKG indicates usage by AIX, Linux, or VIOS.   Feature #ETKH indicates usage by IBM i and only pertains to models S1014(9105-41B), S1024(9105-42A), and L1024(9786-42H).
  • Support for a mainstream 800GB NVME U.2 15 mm SSD (Solid State Drive) PCIe4 drive for AIX/Linux with Feature Code #EC7T and CCIN 59B7.   Feature #EC7T indicates usage by AIX, Linux, or VIOS in which the SSD is formatted in 4096 byte sectors.
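The MSP selection metric described in the Live Partition Mobility items above - choosing the connection whose adapters have the highest configured network capacity - can be sketched as follows.  This is an illustrative sketch only, not the PowerVM implementation; the adapter field names and Gbps unit are assumptions.

```python
def pick_msp_adapter(adapters):
    """Choose the adapter most likely to give the fastest MSP-to-MSP
    transfer, using configured network capacity as the sole metric,
    as described in the LPM item above.  Field names are hypothetical."""
    return max(adapters, key=lambda a: a["capacity_gbps"])

# Hypothetical Virtual I/O Server adapter inventory:
adapters = [
    {"name": "ent0", "capacity_gbps": 10},
    {"name": "ent1", "capacity_gbps": 25},
]
best = pick_msp_adapter(adapters)  # selects the 25 Gbps adapter
```

The design point being illustrated is that the metric is the *configured* capacity, not a measured throughput, so the choice can be made before any data is transferred.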