Problem
An Application Server can report a totally out of heap space or
insufficient heap space to satisfy allocation request error when the
Java™ virtual machine (JVM™) cannot allocate a contiguous block of space
on the heap for an object. This can happen when the heap is fragmented.
Use this technote as a guide to verify that your WebSphere® configuration
is optimized to reduce the likelihood of experiencing this problem.
Note: If the WebSphere ORB detects a java.lang.OutOfMemoryError,
it reflects this as a CORBA_NO_MEMORY return code.
Cause
A fragmented heap is caused by scattered pinned or closed objects that
reduce the availability of contiguous heap space, because they cannot be
moved during the compaction phase of garbage collection. A fragmented
heap can lead to a java.lang.OutOfMemoryError when a Java application
attempts to allocate a large object from the heap.
Detecting a java.lang.OutOfMemoryError due to heap
fragmentation
The symptom is an allocation failure for a fairly large object when
there appears to be enough free space in the heap to fulfill the request.
In the example below, a 14 MB object could not be allocated from a
heap with 372 MB of free space.
<AF[3567]: Allocation Failure. need 14000072 bytes, 429 ms since last
AF>
<AF[3567]: managing allocation failure, action=2
(317646400/671021568)>
The allocation failure ends with a totally out of heap space
message. Note that the GC cycle completed with 55% of the heap (372 MB)
free, yet the totally out of heap space error still occurred.
<GC(3723): GC cycle started Wed Jul 30 07:56:05 2003
<GC(3723): freed 54564152 bytes, 55% free (372210552/671021568), in
5731 ms>
<GC(3723): mark: 2085 ms, sweep: 62 ms, compact: 3584 ms>
<GC(3723): refs: soft 0 (age >= 32), weak 0, final 18, phantom
0>
<GC(3723): moved 1487221 objects, 82230368 bytes, reason=1, used 3216
more bytes>
<AF[3567]: managing allocation failure, action=3
(372210552/671021568)>
<AF[3567]: managing allocation failure, action=4
(372210552/671021568)>
<AF[3567]: clearing all remaining soft refs>
<GC(3724): GC cycle started Wed Jul 30 07:56:07 2003
<GC(3724): freed 73096 bytes, 55% free (372283648/671021568), in 1984
ms>
<GC(3724): mark: 1901 ms, sweep: 83 ms, compact: 0 ms>
<GC(3724): refs: soft 29 (age >= 32), weak 0, final 5, phantom
0>
<GC(3725): GC cycle started Wed Jul 30 07:56:12 2003
<GC(3725): freed 13776 bytes, 55% free (372297424/671021568), in 5340
ms>
<GC(3725): mark: 1867 ms, sweep: 60 ms, compact: 3413 ms>
<GC(3725): refs: soft 0 (age >= 32), weak 0, final 0, phantom 0>
<GC(3725): moved 667805 objects, 35951056 bytes, reason=1, used 160
more bytes>
<AF[3567]: managing allocation failure, action=6
(372297424/671021568)>
<AF[3567]: totally out of heap space>
<AF[3567]: completed in 13071 ms>
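The fragmentation signature in the trace above can also be checked mechanically: an allocation fails even though free space far exceeds the request. The following is a minimal sketch of such a check; the class name and the 10x threshold are illustrative assumptions, not part of the JVM or WebSphere.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FragmentationCheck {
    // Matches the requested size in lines like:
    // <AF[3567]: Allocation Failure. need 14000072 bytes, ...>
    private static final Pattern NEED = Pattern.compile("need (\\d+) bytes");
    // Matches the (free/total) pair, e.g. (317646400/671021568)
    private static final Pattern FREE = Pattern.compile("\\((\\d+)/(\\d+)\\)");

    /** Returns true if the failed request is far smaller than the free space,
     *  which suggests fragmentation rather than a genuinely full heap. */
    public static boolean looksFragmented(String afLine, String actionLine) {
        Matcher n = NEED.matcher(afLine);
        Matcher f = FREE.matcher(actionLine);
        if (!n.find() || !f.find()) return false;
        long need = Long.parseLong(n.group(1));
        long free = Long.parseLong(f.group(1));
        // Plenty of free space (here, over 10x the request) yet the allocation failed
        return free > 10 * need;
    }

    public static void main(String[] args) {
        String af = "<AF[3567]: Allocation Failure. need 14000072 bytes, 429 ms since last AF>";
        String action = "<AF[3567]: managing allocation failure, action=2 (317646400/671021568)>";
        System.out.println(looksFragmented(af, action)); // prints "true"
    }
}
```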
Solution
There is only one way to properly address this problem: change the
application so that it does not allocate large objects. The definition of
a large object varies with the heap size of the JVM™; as a rule,
objects over 500 KB are too large.
Change the application so that it does not require allocation of
large objects
Use coding techniques to reduce the size of objects allocated; for
example, avoid making duplicate copies of large objects and pass them by
reference instead of by value.
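For example, a method that clones a large array before working on it doubles the transient heap demand, while a method that operates on the caller's reference allocates nothing extra. A minimal illustration (class and method names are hypothetical):

```java
import java.util.Arrays;

public class LargeObjectHandling {
    // Wasteful: clones the array, so two large copies live on the heap at once
    static long checksumOfCopy(byte[] data) {
        byte[] copy = Arrays.copyOf(data, data.length); // second large allocation
        long sum = 0;
        for (byte b : copy) sum += b;
        return sum;
    }

    // Better: operates on the caller's array directly; no extra allocation
    static long checksum(byte[] data) {
        long sum = 0;
        for (byte b : data) sum += b;
        return sum;
    }

    public static void main(String[] args) {
        byte[] big = new byte[1024];
        Arrays.fill(big, (byte) 1);
        System.out.println(checksum(big) == checksumOfCopy(big)); // prints "true"
    }
}
```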
To determine what objects might be fragmenting the heap, do the
following:
1. Set -Xtgc34 in the Command Line Arguments of the JVM to trace
garbage collection and see what pinned objects are in the heap. For
information on the trace output from this option, see the Garbage
Collector diagnostics chapter in the JDK
Diagnostics Guide.
This is sample output of the -Xtgc34 trace. It shows which objects are
closed and pinned in the heap. Some pinned objects are threads; others
are objects passed on JNI calls. To determine whether objects from your
application are contributing to fragmentation, look for them in this
trace.
<GC(VFY): pinned
java.lang.ref.Finalizer$FinalizerThread@102B1900/102B1908>
<GC(VFY): pinned
java.lang.ref.Reference$ReferenceHandler@102B1948/102B1950>
<GC(VFY): pinned java.lang.Thread@102B1990/102B1998>
<GC(VFY): pinned byte[][16384]>
<GC(VFY): closed java.lang.OutOfMemoryError@102BD3D0/102BD3D8>
<GC(VFY): closed java.lang.NoClassDefFoundError@102BD3E8/102BD3F0>
<GC(VFY): closed java.lang.ref.ReferenceQueue$Lock@102BF4D0/102BF4D8>
<GC(VFY): closed java.lang.ref.ReferenceQueue@102BF4E0/102BF4E8>
2. If there are a number of threads created by an application in the
trace, consider using a thread pool for these instead of creating and
destroying a thread every time one is required.
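The thread-pool suggestion can be sketched with the standard java.util.concurrent API; the pool size of 4 and the class name are arbitrary illustrations:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PooledWork {
    /** Runs n small tasks on a fixed pool and returns how many completed. */
    static int runTasks(int n) {
        // Fixed pool: threads are created once and reused, rather than being
        // created (and pinned in the heap) and destroyed for every request.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(() -> { completed.incrementAndGet(); });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(100)); // prints "100"
    }
}
```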
3. If the Java application makes JNI calls, explicitly release (unpin)
JNI objects so that they can be compacted. Objects used in a JNI call are
pinned in the heap, meaning they cannot be moved during the compaction
phase of garbage collection; array objects passed on a JNI call are
pinned. Refer to Lesson:
Interacting with Java from the Native Side for information on how to
deal with pinned objects in JNI methods. See the section entitled
"Working with Java Arrays in Native Methods."
For AIX® libraries dealing with pinned objects, refer to the article
JNI Programming on AIX for additional information.
If the application code cannot be changed, there are a number of different
actions that you can take to potentially alleviate this problem. Perform
these steps in the following order:
1. Upgrade your SDK.
2. Configure WebSphere so that the potential for the heap to be
fragmented is diminished.
3. Tune the JVM to allow for the large object allocation.
1. Upgrade your SDK
If you are using WebSphere V4 or V5, ensure that your Java
-fullversion has a build date later than 20031230. PQ82989 improves
garbage collection by backporting some 1.4.1 functionality to the 1.3.1
Java SDKs; all 1.3.1 Java SDKs that have a build date of 20031230
or later incorporate this improvement.
2. Configure WebSphere to diminish the potential for heap
fragmentation
You can make the following configuration changes to maximize the
availability of contiguous heap space:
- Make sure that the minimum and maximum values for the thread pool
within the Web container are set to the same value, and that the
Allow Growable option is set to false. This ensures that the
servlet engine transport threads are created when the Application Server
starts, and prevents the JVM from scattering these objects as the heap
expands. The entries in the server.xml file should
look like the following:
<threadPool xmi:id="ThreadPool_2" minimumSize="50"
maximumSize="50" inactivityTimeout="3500" isGrowable="false"/>
Note: If you do choose to have a variable-sized thread pool,
raise the Thread inactivity timeout from the default of 10 seconds to 120
seconds to increase the likelihood that a WebContainer thread is reused
rather than cleaned up after one HttpRequest is complete.
- Make sure that the minimum and maximum values for the object request
broker are set to the same value. Also make sure that the allow Growable
option is set to false. To view this administrative console page,
click Servers > Manage Application Servers >
server_name > ORB Service > Thread Pool. (You
can reach this page through more than one navigational route.)
- Reduce the initial heap size setting. In many cases, you can remove
this value so that the JVM can expand the heap space and maximize the
contiguous heap space.
- If possible, run the EJBs that handle large objects on one server, and
use EJB local references instead of remote references. This avoids
excessive serialization and deserialization of objects by the ORB, which
copies each object that it serializes.
- Pass by reference instead of by value. To make the ORB pass data by
reference instead of by value, set the parameter
com.ibm.CORBA.iiop.noLocalCopies as described in Information
Center: Tuning
parameter index
- Set the com.ibm.CORBA.FragmentSize as described in Tune
com.ibm.CORBA.FragmentSize when large ORB Requests or Responses are
exchanged with WebSphere Application Servers.
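Assuming both properties are supplied as generic JVM arguments (custom ORB properties can also be set through the administrative console), the settings might look like the following sketch; the FragmentSize value shown is purely illustrative and must be sized for your own workload:

```
-Dcom.ibm.CORBA.iiop.noLocalCopies=true
-Dcom.ibm.CORBA.FragmentSize=32768
```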
3. Tune the JVM to minimize the potential for a
java.lang.OutOfMemoryError when allocating a large object
Some applications might need a large heap allocation from the JVM heap
to complete a transaction, but most transactions do not require such large
allocation. In other scenarios, WebSphere might need to allocate a large
data buffer to complete a transaction. The tuning described here
accommodates such transactions without adversely affecting overall system
performance.
It is possible to configure the JVM parameters so that the WebSphere®
Application Server can allocate a large heap object for a brief time
interval, such as for a single transaction or for the WebSphere
Application Server ORB or MDB. The technique is to configure the JVM
settings, using the -Xminf and -Xmaxf parameters, so that the heap can
grow to accommodate a large object and then shrink back.
How -Xminf and -Xmaxf parameters impact
JVM heap allocation
Two parameters determine whether the heap grows or shrinks: -Xminf and
-Xmaxf.
- -Xminf number
number is a percentage; it is the target minimum free space in the
heap. This percentage determines when the JVM attempts to grow the size of
the heap. If after a GC cycle the percentage of heap free space is less
than this percentage, the JVM increases the size of the heap. The default
for -Xminf is 0.3, meaning that, under normal conditions, a Garbage
Collection (GC) causes a heap growth if the free space after GC is less
than 30%.
- -Xmaxf number
number is a percentage; it is the target maximum free space in the
heap. This percentage determines when the JVM attempts to shrink the size
of the heap. If after a GC cycle the percentage of heap free space is more
than this percentage, under normal conditions the heap shrinks. The
default for -Xmaxf is 0.6. With this -Xmaxf setting, GC causes a
heap shrinkage if the free space after GC is greater than 60%.
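The grow/shrink rules above can be modeled in a few lines. Note that this is an illustrative model of the documented thresholds only, not the JVM's actual resizing algorithm, which applies additional conditions:

```java
public class HeapResizePolicy {
    enum Action { GROW, SHRINK, NONE }

    /** Models the -Xminf/-Xmaxf rules: grow when free space after GC
     *  falls below minf, shrink when it rises above maxf. */
    static Action decide(long freeBytes, long totalBytes, double minf, double maxf) {
        double freeRatio = (double) freeBytes / totalBytes;
        if (freeRatio < minf) return Action.GROW;
        if (freeRatio > maxf) return Action.SHRINK;
        return Action.NONE;
    }

    public static void main(String[] args) {
        // Defaults: -Xminf0.3, -Xmaxf0.6
        System.out.println(decide(200, 1000, 0.3, 0.6)); // 20% free -> prints "GROW"
        System.out.println(decide(700, 1000, 0.3, 0.6)); // 70% free -> prints "SHRINK"
        System.out.println(decide(500, 1000, 0.3, 0.6)); // 50% free -> prints "NONE"
    }
}
```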
Using -Xmx and -Xmaxf to tune the JVM to accommodate large objects
For the specific situation mentioned, the parameters to tune are -Xmx
and -Xmaxf. Tune -Xmx, which is the maximum heap size, so that the heap
can contain the large object. Tune -Xmaxf lower so that the heap
contracts with less heap space free.
Using the -Xgcthreads and -Xgcpolicy parameters
Lowering -Xmaxf causes the heap to shrink after the large object
allocation. The phrase "under normal conditions" means that there are
some conditions under which the heap does not grow or shrink: for
example, if the heap has grown on three consecutive GC events, or if the
percentage of CPU time spent in GC is more than 16% of the total for the
JVM. The -Xgcthreads and -Xgcpolicy:optavgpause parameters
are recommended to minimize the percentage of CPU time used for GC.
One possible set of parameters is:
-Xmaxf0.4 -Xgcthreads2 -Xgcpolicy:optavgpause
Tuning the -Xmx setting for an application
For some applications, the default settings might not give the best
results. There are cases where problems might occur:
- The frequency of GCs is too high until the heap reaches a
steady state. Use -verboseGC to determine the size of the heap at a
steady state and set -Xms to this value.
- The heap is fully expanded and the occupancy level is
greater than 70%. Increase the -Xmx value so that the heap is not
more than 70% occupied; for best performance, try to make sure that the
heap never pages. The maximum heap size should fit in physical memory, if
possible.
- At 70% occupancy the frequency of GCs is too great.
Change the setting of -Xminf. The default is 0.3, which tries to maintain
30% free space by expanding the heap. A setting of 0.4, for example,
increases this free-space target to 40%, thereby reducing the frequency
of GCs.
Notes:
- When increasing the -Xmx past 1 GB, refer to the JDK Diagnostics
Guide for additional settings that must be made for some operating
systems.
- For more details on the memory model of the JDK, refer to the JDK
Diagnostics Guide, http://www-106.ibm.com/developerworks/java/jdk/diagnosis/.
One related section is entitled "AIX - debugging memory leaks."
Additional hints and tips for tuning the JVM
- Make sure that the heap never pages; in other words, make
sure that the maximum heap size fits in physical memory. One way to
ensure this is to keep the sum of all the Java heaps at no more
than one half of the system memory.
- Avoid using finalizer methods in your application. There
is no guarantee when a finalizer will run, and often they lead to
problems. If finalizers are used, avoid allocating objects in the
finalizer method. A verbosegc trace shows if finalizers are being called.
- Avoid compaction. A verbosegc trace shows if compaction is
occurring. Compaction is usually caused by requests for large memory
allocations. Analyze requests for large memory allocations and if there
are large arrays, avoid large memory allocations, if possible; for
example, try to split them into smaller pieces.
For more information on determining whether the verboseGC events indicate
that compaction is being done, refer to the JDK Diagnostics Guide.
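One way to split a large allocation, as suggested above, is to build the data as a list of fixed-size chunks rather than one contiguous array, so that no single allocation needs a huge contiguous heap region. A sketch (class name and chunk size are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class ChunkedRead {
    /** Reads a stream into fixed-size chunks instead of one large array. */
    static List<byte[]> readChunks(InputStream in, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        byte[] buf = new byte[chunkSize];
        try {
            int n;
            while ((n = in.read(buf)) > 0) {
                // Copy only the bytes actually read into a right-sized chunk
                byte[] chunk = new byte[n];
                System.arraycopy(buf, 0, chunk, 0, n);
                chunks.add(chunk);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return chunks;
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[10_000]);
        // 10,000 bytes in 4096-byte chunks: 4096 + 4096 + 1808
        System.out.println(readChunks(in, 4096).size()); // prints "3"
    }
}
```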
- Pause times are too long. Try using
-Xgcpolicy:optavgpause. This reduces the pause times and makes them
more consistent as the heap occupancy rises. There is a cost to pay in
throughput, which varies with applications and is in the region of 5%.
For more information on JVM parameters for the JDK, see the JDK
Diagnostics Guide,
http://www-106.ibm.com/developerworks/java/jdk/diagnosis/.
Cross Reference information
Segment: Application Servers
Product: WebSphere Application Server for z/OS
Component: Not Applicable