PK36998: NATIVE HEAP OUTOFMEMORY

Fixes are available

6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for AIX
6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for HP-UX
6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for i5/OS
6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for Linux
6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for Solaris
6.1.0.9: WebSphere Application Server V6.1 Fix Pack 9 for Windows
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for AIX
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for HP-UX
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for i5/OS
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for Linux
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for Solaris
6.1.0.11: WebSphere Application Server V6.1 Fix Pack 11 for Windows
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for AIX
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for HP-UX
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for i5/OS
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for Linux
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for Solaris
6.1.0.13: WebSphere Application Server V6.1 Fix Pack 13 for Windows
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for AIX
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for HP-UX
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for i5/OS
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for Linux
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for Solaris
6.1.0.15: WebSphere Application Server V6.1 Fix Pack 15 for Windows
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for AIX
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for HP-UX
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for i5/OS
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for Linux
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for Solaris
6.1.0.17: WebSphere Application Server V6.1 Fix Pack 17 for Windows
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for AIX
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for HP-UX
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for i5/OS
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for Linux
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for Solaris
6.1.0.19: WebSphere Application Server V6.1 Fix Pack 19 for Windows
Java SDK 1.5 SR8 Cumulative Fix for WebSphere Application Server



APAR status
Closed as program error.

Error description
Message
"CEE3250C TOKEN=00040CB2 61C3C5C5 00000000 WHILE RUNNING PROGRAM
BBOBOA"
appears in the WebSphere Controller Region when performing an
HTTP POST with a large body (5 MB), after 50 or 60 such
requests in a row.
Sometimes the following message appears in SYSPRINT:
"Description:
 memory unavailable
 amount of storage requested: 5000279"
CORBA::NO_MEMORY minor code: c9c2e44d
from filename: ./bbooxjni.cpp
   ... {some stack elements deleted} ...
at com.ibm.ws390.xmem.XMemCRCppUtilities.queueInboundRequest()
Normally there is also a CEEDUMP present in SYSOUT.

This error is also encountered with smaller POSTs of 1 MB
bodies, but only when multiple clients send POSTs at the same
time.
Local fix
If APAR PK31401 (in service level 6.1.0.4) is installed,
activating HTTP inbound message chunking will frequently avoid
this error if the chunking configuration variable
maxRequestMessageBodySize is either left unspecified or kept at
its default value of 32.
Problem summary
****************************************************************
* USERS AFFECTED: All users of WebSphere Application Server    *
*                 V6.1 for z/OS                                *
****************************************************************
* PROBLEM DESCRIPTION: When WebSphere Application Server for   *
*                      z/OS                                    *
*                      * handles multiple large messages, or   *
*                      * runs under heavy load, or             *
*                      * runs a workload that drives garbage   *
*                        collection infrequently,              *
*                      the server may fail with an out of      *
*                      memory condition in LE storage.  In     *
*                      this case WebSphere minor code C9C2E44D *
*                      may be issued.                          *
****************************************************************
* RECOMMENDATION:                                              *
****************************************************************
WebSphere relies on direct ByteBuffers from the JVM to handle
request data.  Direct ByteBuffers are allocated in the LE heap
rather than the JVM heap.  However, even after those direct
ByteBuffers are no longer referenced, the native LE storage is
not reclaimed until the JVM runs a garbage collection cycle.
In cases where the server is handling large requests, or where
garbage collection cycles are infrequent, LE storage may be
exhausted before the JVM runs a garbage collection cycle,
resulting in an abend in the controller.
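
As a minimal illustration (generic JVM behavior, not WebSphere
internal code), the following stand-alone Java sketch shows how
native storage behind direct ByteBuffers can accumulate when no
garbage collection cycle intervenes:

  import java.nio.ByteBuffer;

  public class DirectBufferDemo {
      public static void main(String[] args) {
          for (int i = 0; i < 60; i++) {
              // Each allocation takes roughly 5 MB of native
              // storage (LE heap on z/OS), not JVM heap.
              ByteBuffer request =
                  ByteBuffer.allocateDirect(5 * 1024 * 1024);
              request.put(0, (byte) 1);
              // Dropping the reference does not free the native
              // storage; it is reclaimed only when a later garbage
              // collection cycle collects the ByteBuffer object.
              request = null;
          }
          // If no garbage collection has run, the native storage
          // behind all 60 buffers may still be committed here,
          // which is the exhaustion condition this APAR addresses.
      }
  }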

WebSphere also caches an initial read buffer for each
connection.  When a connection is closed, this read buffer is
reused in order to avoid a memory allocation.  This works well
for non-persistent connections (one request per connection).
However, for highly persistent connections, this means that the
buffer is held, unused, for a considerable amount of time.

For workloads that require a large number of connected clients,
this can lead to a shortage of LE heap.
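
As a rough illustration of this trade-off, the sketch below
contrasts the default caching behavior with the freeing
behavior enabled by the zaioFreeInitialBuffers custom property
described later in this APAR. The class and method names are
hypothetical, not the actual channel framework API:

  import java.nio.ByteBuffer;
  import java.util.LinkedList;

  // Hypothetical sketch; not WebSphere channel framework code.
  class ReadBufferPool {
      private final LinkedList<ByteBuffer> cache =
          new LinkedList<ByteBuffer>();
      private final int bufferSize;

      ReadBufferPool(int bufferSize) {
          this.bufferSize = bufferSize;
      }

      // A new connection takes a cached buffer if one is
      // available; otherwise it allocates a new direct buffer
      // from native (LE) storage.
      ByteBuffer acquire() {
          ByteBuffer cached = cache.poll();
          return (cached != null)
              ? cached
              : ByteBuffer.allocateDirect(bufferSize);
      }

      // Called when the connection no longer needs its initial
      // read buffer.
      void release(ByteBuffer buf, boolean freeInitialBuffers) {
          if (freeInitialBuffers) {
              // zaioFreeInitialBuffers=true: drop the reference
              // so the proactive ByteBuffer manager can free the
              // native storage promptly.
              return;
          }
          // Default (false): cache the buffer for reuse.  Cheap
          // for short-lived connections, but with many persistent
          // connections the cached buffers tie up LE storage.
          buf.clear();
          cache.addFirst(buf);
      }
  }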

In addition, the maximum configurable number of connections on
a particular channel framework based listening port is 20,000,
which is restrictive for some workloads.
Problem conclusion
The WebSphere runtime now provides a configuration option that
enables more proactive management of direct ByteBuffer
storage, so that freeing LE heap is no longer tied to a
garbage collection cycle.  Instead, storage for individual
direct ByteBuffers is freed when the object is no longer
needed.

A new configuration option is also being provided to reduce the
footprint of connections in WebSphere by freeing the initial
read buffer when it is no longer required by the connection.
This option will only be honored if proactive management of
direct ByteBuffers is also enabled.

Finally, the WebSphere runtime will be updated to allow a
maximum of 128,000 connections per channel framework based
listener port.

APAR PK36998 requires changes to documentation.
NOTE: Periodically, we refresh the documentation on our
Web site, so the changes might have been made before you
read this text. To access the latest on-line
documentation, go to the product library page at:


http://www.ibm.com/software/webservers/appserv/library

The following changes to the z/OS version of the WebSphere
Application Server Version 6.1.x Information Center will be
made available.
The following step will be added to the "Fine tuning the LE
heap" topic:

9. If your Language Environment (LE) storage is being exhausted
   before the Java Virtual Machine (JVM) runs a garbage
   collection cycle, you can use a JVM generic argument to
   indicate that storage for individual direct byte buffers
   should be released as soon as the buffer is no longer
   needed.

   The direct byte buffers that the JVM creates to handle
   request data are allocated in the LE heap instead of in the
   JVM heap. However, even if the direct byte buffers are no
   longer needed, the JVM does not release this native LE
   storage until the next garbage collection occurs. If the
   server is handling large requests, LE storage might become
   exhausted before the JVM runs a garbage collection cycle,
   causing the server to abnormally terminate (abend). LE
   storage also can become exhausted if the server is
   experiencing infrequent garbage collection cycles.

   To indicate that storage for individual direct byte buffers
   should be released as soon as the buffer is no longer needed:

   1. In the administrative console, click Application servers
      > server > Java and Process Management > Process
      Definition.
   2. Select Control, and then click Java Virtual Machine.
   3. Enter the following value in the Generic JVM arguments
      field.

   -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
     com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl

   4. Repeat steps 2 and 3, selecting Servant instead of
      Control.
   5. Repeat steps 2 and 3, selecting Adjunct, if one exists.
   6. Repeat steps 2 and 3 for the deployment manager and node
      agents.
   7. Recycle each affected server. The new direct ByteBuffer
      manager is activated when the server restarts.
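
   The value shown in step 3 is wrapped across two lines here
   only because of page width. In the Generic JVM arguments
   field, enter it as a single string with no embedded spaces
   or line breaks:

   -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl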

The description of the Maximum open connections custom
property in the "TCP transport channel settings" topic will be
changed to the following:

   Maximum open connections

   Specifies the maximum number of connections that can be
   open at one time.

   Data type     Integer between 1 and 128,000 inclusive
   Default     20,000

The description of the Generic JVM arguments field in the
"Java virtual machine settings" topic will be updated to
include the following description of the new
-Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
command line argument:

  You can use the
  -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
  argument to indicate that storage for individual direct byte
  buffers should be released as soon as the buffer is no
  longer needed. The only supported value for this argument is
  -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
    com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl

  The direct byte buffers that the JVM creates to handle
  request data are allocated in the LE heap instead of in the
  JVM heap.  Normally, even if the direct byte buffers are no
  longer needed, the JVM does not release this native LE
  storage until the next garbage collection occurs.  If the
  server is handling large requests, LE storage might become
  exhausted before the JVM runs a garbage collection cycle,
  causing the server to abnormally terminate (abend). LE storage
  also can become exhausted if the server is experiencing
  infrequent garbage collection cycles.
  Configuring the JVM with the
  -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
    com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl
  argument prevents these abends from occurring.
  You also need to specify this argument if you are using the
  zaioFreeInitialBuffers custom property for a TCP channel to
  indicate that the channel should release the initial read
  buffers used on new connections as soon as these buffers
  are no longer needed for the connection.

The topic "TCP transport channel custom properties" will be
updated to include the following description of the new
zaioFreeInitialBuffers custom property:

  zaioFreeInitialBuffers

  Use the zaioFreeInitialBuffers property to indicate that
  the TCP channel should release the initial read buffers used
  on new connections as soon as these buffers are no longer
  needed for the connection.  By default, this initial read
  buffer is cached for each connection.  When a connection is
  closed, the read buffer is reused to avoid a memory
  allocation.  This process works well for non-persistent
  connections, where there is one request per connection.
  However, for highly persistent connections, the buffer might
  be held for a considerable amount of time even though it is
  not being used.  For workloads that require a large number
  of connected clients, this situation can cause a shortage of
  Language Environment (LE) heap space.  Unless your workload
  consists mainly of non-persistent connections, you should set
  this custom property to true to enable the release of the
  initial read buffers.

  If you set this property to true, you must also add the
  following argument to the JVM generic arguments that are
  configured for the application server that is using this TCP
  channel:

  -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=
    com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl

   Data type   String
   Default     false
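
  For example, a server that uses this option would have the
  TCP channel custom property set to

    zaioFreeInitialBuffers=true

  and the following entered as a single string in its Generic
  JVM arguments:

    -Dcom.ibm.ws.buffermgmt.impl.WsByteBufferPoolManagerImpl=com.ibm.ws.buffermgmt.impl.ZOSWsByteBufferPoolManagerImpl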

APAR PK36998 is currently targeted for inclusion in Service
Level (Fix Pack) 6.1.0.8 of WebSphere Application Server V6.1
for z/OS.
Temporary fix

Comments
APAR information
APAR number PK36998
Reported component name WEBSPHERE FOR Z
Reported component ID 5655I3500
Reported release 610
Status CLOSED PER
PE NoPE
HIPER NoHIPER
Special Attention NoSpecatt
Submitted date 2007-01-07
Closed date 2007-04-09
Last modified date 2008-04-22

APAR is sysrouted FROM one or more of the following:

APAR is sysrouted TO one or more of the following:

Modules/Macros
CFW DBBM        

Fix information
Fixed component name WEBSPHERE FOR Z
Fixed component ID 5655I3500

Applicable component levels
R500 PSN    UP
R601 PSN    UP
R610 PSY UK24627    UP07/05/11 P F705
