Tuning Java virtual machines

The application server is a Java-based process and requires a Java virtual machine (JVM) environment to run and to support the Java applications that run on it. You can configure the Java runtime environment to tune performance and system resource usage. This topic applies to the IBM Technology for Java Virtual Machine. Refer to the topic Tuning the Classic JVM if you are using the IBM Developer Kit for Java that is provided with the i5/OS product.

Before you begin

About this task

[z/OS] On the z/OS platform, there is a JVM in both the controller and the servant. This information applies to the JVM in the servant; usually, the JVM in the controller does not need to be tuned.

A Java runtime environment provides the execution environment for Java-based applications and servers, such as WebSphere Application Server. Therefore, the Java configuration plays a significant role in determining the performance and system resource consumption of the product and of the applications that you are running.

Supported JVMs are available from different JVM providers, including:
  • IBM virtual machines for Java.

    The IBM Java 5.0 and newer versions include major improvements in virtual machine technology to provide significant performance and serviceability enhancements over IBM's earlier Java execution technology. Refer to the Web site http://www.ibm.com/software/webservers/appserv/was/performance.html for more information about this new technology.

  • [Solaris] HotSpot based JVMs, such as the Java HotSpot VM on Solaris
  • [HP-UX] The HP virtual machine for Java for HP-UX

Even though JVM tuning depends on the JVM provider you use, some general tuning concepts apply to all JVMs.

The following steps provide specific instructions on how to perform these types of tuning for each JVM. The steps do not have to be performed in any specific order.

Procedure

  1. [z/OS] Enable the JIT (Just In Time) compiler if it is not active.

    To determine the setting for the Enable the JIT property, in the administrative console, click Servers > Application servers > server_name, and then, in the Server Infrastructure section, click Java and process management > Process definition, select either Control, Servant, or Adjunct, and then click Java virtual machine.

  2. Optimize the startup and runtime performance

    In some environments, such as a development environment, it is more important to optimize the startup performance of your application server rather than the runtime performance. In other environments, it is more important to optimize the runtime performance. By default, IBM virtual machines for Java are optimized for runtime performance, while HotSpot based JVMs are optimized for startup performance.

    The Java Just-In-Time (JIT) compiler has a significant impact on whether startup or runtime performance is optimized. The initial optimization level that the compiler uses influences the length of time it takes to compile a class method, and the length of time it takes to start the server. For faster startups, reduce the initial optimization level that the compiler uses. However, if you reduce the initial optimization level, the runtime performance of your applications might be degraded because the class methods are now compiled at a lower optimization level.

    • -Xquickstart

      This setting causes the IBM virtual machine for Java to use a lower optimization level for class method compiles. A lower optimization level provides faster server startup, but lowers runtime performance. If this parameter is not specified, the IBM virtual machine for Java defaults to starting with a high initial optimization level for compiles, which results in faster runtime performance, but slower server startup.

      Default: High initial compiler optimization level
      Recommended: High initial compiler optimization level
      Usage: -Xquickstart provides faster server startup.
    [z/OS] To speed up JVM initialization and improve server startup time, specify the following command line arguments in the General JVM Arguments field in the General Properties section of the Configuration Tab.
    -Xquickstart
    -Xverify:none
    

    [Solaris] HotSpot based JVMs initially compile class methods with a low optimization level. Use this JVM option to change that behavior:

  3. [z/OS] Limit the number of dumps that are taken in specific situations.

    In certain error conditions, multiple application server threads might fail, and the JVM requests a TDUMP for each of those threads. This situation can cause a large number of TDUMPs to be taken concurrently, leading to other problems, such as a shortage of auxiliary storage. You can use the JAVA_DUMP_OPTS environment variable to indicate the number of dumps that you want the JVM to produce in certain situations. However, this environment variable does not affect the number of TDUMPs that are generated because of com.ibm.jvm.Dump.SystemDump() calls from applications that are running on the application server.

    For example, if you specify the JAVA_DUMP_OPTS variable with the following options, the JVM:
    • Restricts the number of TDUMPs taken to one.
    • Restricts the number of JAVADUMPs taken to a maximum of three.
    • Does not capture any documentation if an INTERRUPT occurs.
    JAVA_DUMP_OPTS=ONANYSIGNAL(JAVADUMP[3],SYSDUMP[1]),ONINTERRUPT(NONE) 

    See the IBM Developer Kit Diagnostics Guide for more information on using the JAVA_DUMP_OPTS environment variable.

  4. Configure the heap size

    [AIX HP-UX Linux Solaris Windows] [z/OS] The Java heap parameters influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.

    [iSeries] The heap size settings control garbage collection in the JVM that is provided with i5/OS. The initial heap size is a threshold that triggers new garbage collection cycles. For example, if the initial heap size is 10 MB, a new collection cycle is triggered as soon as the JVM detects that 10 MB have been allocated since the last collection cycle.

    [iSeries] Smaller heap sizes result in more frequent garbage collections than larger heap sizes. If the maximum heap size is reached, the garbage collector stops operating asynchronously, and user threads are forced to wait for collection cycles to complete.

    [iSeries] The maximum heap size can affect application performance. The maximum heap size specifies the maximum amount of object space the garbage collected heap can consume. If the maximum heap size is too small, performance might degrade significantly, or the application might receive out of memory errors when the maximum heap size is reached.

    The JVM uses thresholds to manage its storage. When a threshold is reached, the garbage collector is invoked to free unused storage. Garbage collection can therefore cause significant degradation of Java performance. Before changing the initial and maximum heap sizes, consider the following information:
    • In the majority of cases you should set the maximum JVM heap size to a value higher than the initial JVM heap size. This setting enables the JVM to operate efficiently during normal, steady state periods within the confines of the initial heap, and also to operate effectively during periods of high transaction volume by expanding the heap up to the maximum JVM heap size. In some rare cases, where absolute optimal performance is required, you might want to specify the same value for both the initial and maximum heap size. Doing so eliminates some overhead that occurs when the JVM needs to expand or contract the size of the heap. Make sure the region is large enough to hold the specified JVM heap.
    • Beware of making the initial heap size too large. Although a large heap size initially improves performance by delaying garbage collection, it ultimately affects response time when garbage collection eventually occurs, because the collection process takes more time.
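    From inside a running JVM, you can check the heap limits that are actually in effect by using the standard java.lang.Runtime API. The following sketch is a generic Java example, not specific to WebSphere Application Server:

```java
public class HeapSettings {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMb   = rt.maxMemory()   / (1024 * 1024); // upper bound, set by -Xmx
        long totalMb = rt.totalMemory() / (1024 * 1024); // heap currently committed
        long freeMb  = rt.freeMemory()  / (1024 * 1024); // free space within the committed heap
        System.out.println("max=" + maxMb + "MB committed=" + totalMb + "MB free=" + freeMb + "MB");
    }
}
```

    Running this class with different -Xms and -Xmx values shows how the committed heap grows toward, but never exceeds, the maximum.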

    The IBM Developer Kit and Runtime Environment, Java2 Technology Edition, Version 5.0 Diagnostics Guide, that is available on the developerWorks Web site, provides additional information on tuning the heap size.

    [z/OS] Java Heap information is contained in SMF records and can be viewed dynamically using the console command DISPLAY,JVMHEAP.

    To use the administrative console to configure the heap size:

    1. In the administrative console, click Servers > Application Servers > server.
    2. [AIX HP-UX Linux Solaris Windows] [iSeries] In the Server Infrastructure section, click Java and Process Management > Process Definition > Java Virtual Machine.
    3. [z/OS] In the Server Infrastructure section, click Java and Process Management > Process Definition.
    4. [z/OS] Select either Control or Servant, and then select Java Virtual Machine.
    5. Specify a new value in either the Initial heap size or the Maximum heap size field.

      You can also specify values for both fields if you need to adjust both settings.

      Best practice: For performance analysis, the initial and maximum heap sizes should be equal.

      The Initial heap size setting specifies, in megabytes, the amount of storage that is allocated for the JVM heap when the JVM starts. The Maximum heap size setting specifies, in megabytes, the maximum amount of storage that can be allocated to the JVM heap. Both of these settings have a significant effect on performance.

      [AIX HP-UX Linux Solaris Windows] [z/OS] When tuning a production system where the working set size of the Java application is not understood, a good starting value for the initial heap size is 25% of the maximum heap size. The JVM then tries to adapt the size of the heap to the working set size of the application.



      The illustration represents three CPU profiles, each running a fixed workload with varying Java heap settings. In the middle profile, the initial and maximum heap sizes are set to 128MB. Four garbage collections occur. The total time in garbage collection is about 15% of the total run. When the heap parameters are doubled to 256MB, as in the top profile, the length of the work time increases between garbage collections. Only three garbage collections occur, but the length of each garbage collection is also increased. In the third profile, the heap size is reduced to 64MB and exhibits the opposite effect. With a smaller heap size, both the time between garbage collections and the time for each garbage collection are shorter. For all three configurations, the total time in garbage collection is approximately 15%. This example illustrates an important concept about the Java heap and its relationship to object utilization. There is always a cost for garbage collection in Java applications.

      [AIX HP-UX Linux Solaris Windows] Run a series of test experiments that vary the Java heap settings. For example, run experiments with 128MB, 192MB, 256MB, and 320MB. During each experiment, monitor the total memory usage. If you expand the heap too aggressively, paging can occur. Use the vmstat command or the Windows 2000/2003 Performance Monitor to check for paging. If paging occurs, reduce the size of the heap or add more memory to the system. When all the runs are finished, compare the following statistics:
      • Number of garbage collection calls
      • Average duration of a single garbage collection call
      • Ratio between the length of a single garbage collection call and the average time between calls
      If the application is not over-utilizing objects and has no memory leaks, a state of steady memory utilization is reached. Garbage collection also occurs less frequently and for shorter durations.
      [z/OS] [iSeries] Run a series of test experiments that vary the Java heap settings. For example, run experiments with 128MB, 192MB, 256MB, and 320MB. During each experiment, monitor the total memory usage. If you expand the heap too aggressively, paging can occur. If paging occurs, reduce the size of the heap or add more memory to the system. When all the runs are finished, compare the following statistics:
      • Number of garbage collection calls
      • Average duration of a single garbage collection call
      • Ratio between the length of a single garbage collection call and the average time between calls
      If the application is not over-utilizing objects and has no memory leaks, a state of steady memory utilization is reached. Garbage collection also occurs less frequently and for shorter durations.

      [AIX HP-UX Linux Solaris Windows] [z/OS] If the heap free space settles at 85% or more, consider decreasing the maximum heap size values because the application server and the application are under-utilizing the memory allocated for heap.

      [z/OS] If you have servers configured to run in 64-bit mode, you can specify a JVM maximum heap size for those servers that is significantly larger than the default setting. For example, you can specify an initial maximum heap size of 1844m for the controller and the servant if the server is configured to run in 64-bit mode.

    6. Click Apply or OK.
    7. Save your changes to the master configuration.
    8. Stop and restart the application server.

    You can also use the following command line parameters to adjust these settings. These parameters apply to all supported JVMs and are used to adjust the minimum and maximum heap size for each application server or application server instance.

    • -Xms

      This setting controls the initial size of the Java heap. Properly tuning this parameter reduces the overhead of garbage collection, which improves server response time and throughput. For some applications, the default setting for this option might be too low, which causes a high number of minor garbage collections.

      Default: 50MB. This default value applies for both 31-bit and 64-bit configurations.
      Recommended: Workload specific, but higher than the default.
      Usage: -Xms256m sets the initial heap size to 256 megabytes.
    • -Xmx

      This setting controls the maximum size of the Java heap. Increasing this parameter increases the memory available to the application server, and reduces the frequency of garbage collection. Increasing this setting can improve server response time and throughput. However, increasing this setting also increases the duration of a garbage collection when it does occur. This setting should never be increased above the system memory available for the application server instance. Increasing the setting above the available system memory can cause system paging and a significant decrease in performance.

      Default: 256MB. This default value applies for both 31-bit and 64-bit configurations.
      Recommended: Workload specific, but higher than the default, depending on the amount of available physical memory.
      Usage: -Xmx512m sets the maximum heap size to 512 megabytes.
    • [AIX] [Windows] -Xlp

      This setting is used with the IBM virtual machine for Java to allocate the heap when using large pages (16MB). However, if you use this setting your operating system must be configured to support large pages. Using large pages can reduce the CPU overhead needed to keep track of heap memory, and might also allow the creation of a larger heap.

      Default: 64 KB if you are using Java 6 SR 7 or higher; 4 KB if you are using Java 6 SR 6 or lower.

      See Tuning operating systems for more information about tuning your operating system.

    • [iSeries] [AIX] -Xlp64k

      This parameter can be used to allocate the heap using medium size pages, such as 64 KB. Using this virtual memory page size for the memory that an application requires can improve the performance and throughput of the application because of hardware efficiencies that are associated with a larger page size.

      [iSeries] [AIX] [Updated in September 2011] i5/OS and AIX® provide rich support for 64 KB pages because 64 KB pages are intended to be general purpose pages. 64 KB pages are easy to enable, and applications might receive performance benefits when 64 KB pages are used. Starting with Java 6 SR 7, the Java heap is allocated with 64 KB pages by default. For Java 6 SR 6 or earlier, 4 KB pages are the default. This setting can be changed without changing the operating system configuration. However, it is recommended that you run your application servers in a separate storage pool if you use 64 KB pages. [Updated in September 2011]


      [AIX] [Updated in September 2011] If you are using Java 6 SR 6 or earlier, to support a 64 KB page size, in the administrative console, click Servers > Application servers > server_name > Process definition > Environment entries > New, and then specify LDR_CNTRL in the Name field and DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K in the Value field. [Updated in September 2011]


      Recommended: Use a 64 KB page size whenever possible.

      [iSeries] i5/OS POWER5+ systems, and i5/OS Version 6, Release 1, support a 64 KB page size.

      [AIX] POWER5+ systems, and AIX 5L Version 5.3 with the 5300-04 Recommended Maintenance Package support a 64 KB page size when they are running the 64-bit kernel.

    • [iSeries] [AIX] [Updated in September 2011] -Xlp4k

      This parameter can be used to allocate the heap using 4 KB pages. Using this virtual memory page size for the memory that an application requires, instead of 64 KB, might negatively impact performance and throughput of the application because of hardware inefficiencies that are associated with a smaller page size.

      [iSeries] [AIX] [Updated in September 2011] Starting with Java 6 SR 7, the Java heap is allocated with 64 KB pages by default. For Java 6 SR 6 or earlier, 4 KB pages are the default. This setting can be changed without changing the operating system configuration. However, it is recommended that you run your application servers in a separate storage pool if you use 64 KB pages. [Updated in September 2011]


      [AIX] [Updated in September 2011] If you are using Java 6 SR 7 or higher, to support a 4 KB page size, in the administrative console, click Servers > Application servers > server_name > Process definition > Environment entries > New, and then specify LDR_CNTRL in the Name field and DATAPSIZE=4K@TEXTPSIZE=4K@STACKPSIZE=4K in the Value field. [Updated in September 2011]


      Recommended: Use -Xlp64k instead of -Xlp4k whenever possible. [Updated in September 2011]
  5. Tune Java memory
    Enterprise applications written in the Java language involve complex object relationships and utilize large numbers of objects. Although the Java language automatically manages the memory that is associated with object life cycles, understanding the application's usage patterns for objects is important. In particular, verify that:
    • The application is not over utilizing objects.
    • The application is not leaking objects.
    • The Java heap parameters are set properly to handle a given object usage pattern.
    1. Check for over-utilization of objects.

      [iSeries] You can use the Tivoli Performance Viewer to observe the counters for the JVM runtime. This information indicates whether the application is overusing objects. Refer to the topic Enabling the Java virtual machine profiler data for more information about the JVMPI counters.

      [iSeries] You can also use the following tools to monitor JVM object creation:
      • The WRKJVMJOB (Work JVM Jobs) command, which lists and monitors the Java virtual machines that are running in active jobs. This command is available in i5/OS Version 6, Release 1, and higher.
      • The GENJVMDMP (Generate JVM Dump) command, which generates JVM dumps for a specific job. This command is available in i5/OS Version 6, Release 1, and higher.

      [AIX HP-UX Linux Solaris Windows] You can use the Tivoli Performance Viewer to check if the application is overusing objects, by observing the counters for the JVM runtime. You have to set the -XrunpmiJvmpiProfiler command line option, as well as the JVM module maximum level in order to enable the Java virtual machine profiler interface (JVMPI) counters.

      [iSeries] [AIX HP-UX Linux Solaris Windows] The best result for the average time between garbage collections is at least 5-6 times the average duration of a single garbage collection. If you do not achieve this number, the application is spending more than 15 percent of its time in garbage collection.

      [z/OS] You can check if the application is overusing objects, by observing the counters for the JVM runtime. You have to set the -XrunpmiJvmpiProfiler command line option, as well as the JVM module maximum level in order to enable the Java virtual machine profiler interface (JVMPI) counters. The best result for the average time between garbage collections is at least 5-6 times the average duration of a single garbage collection. If you do not achieve this number, the application is spending more than 15% of its time in garbage collection.
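      The 5-6 times guideline follows from simple arithmetic: the fraction of time spent in garbage collection is the average pause duration divided by the sum of the average pause and the average interval between collections. A small illustrative sketch of that calculation:

```java
public class GcOverhead {
    // Fraction of run time spent in garbage collection, given the average
    // pause duration and the average interval between collections.
    public static double overhead(double avgPauseMs, double avgIntervalMs) {
        return avgPauseMs / (avgPauseMs + avgIntervalMs);
    }

    public static void main(String[] args) {
        // Interval only 5x the pause: 100 / 600, about 16.7%, above the 15% guideline.
        System.out.println(overhead(100, 500));
        // Interval 10x the pause: 100 / 1100, about 9.1%, comfortably below it.
        System.out.println(overhead(100, 1000));
    }
}
```

      The pause and interval values here are hypothetical; in practice you would take them from verbose garbage collection output or from Tivoli Performance Viewer.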

      If the information indicates a garbage collection bottleneck, there are two ways to clear the bottleneck. The most cost-effective way to optimize the application is to implement object caches and pools. Use a Java profiler to determine which objects to target. If you cannot optimize the application, adding memory, processors, and clones might help. Additional memory allows each clone to maintain a reasonable heap size. Additional processors allow the clones to run in parallel.

    2. Test for memory leaks

      Memory leaks in the Java language are a dangerous contributor to garbage collection bottlenecks. Memory leaks are more damaging than memory overuse, because a memory leak ultimately leads to system instability. Over time, garbage collection occurs more frequently until the heap is exhausted and the Java code fails with a fatal out-of-memory exception. Memory leaks occur when an unused object has references that are never freed. Memory leaks most commonly occur in collection classes, such as Hashtable because the table always has a reference to the object, even after real references are deleted.
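      The following minimal sketch, with hypothetical class and method names, shows this collection-class pattern: the local reference to each record goes away, but the Hashtable still holds a reference, so none of the arrays can be collected unless they are explicitly removed from the table:

```java
import java.util.Hashtable;

public class LeakDemo {
    static final Hashtable<String, byte[]> cache = new Hashtable<>();

    public static int loadRecords(int n) {
        for (int i = 0; i < n; i++) {
            byte[] record = new byte[1024];    // looks short-lived...
            cache.put("record-" + i, record);  // ...but the table keeps a reference
        }
        // The local variable is out of scope here, yet every array is still
        // reachable through the table, so nothing is eligible for collection.
        return cache.size();
    }

    public static void main(String[] args) {
        System.out.println(loadRecords(1000)); // the table retains all 1000 records
    }
}
```

      Removing entries with cache.remove(key) when they are no longer needed is what makes the difference between a cache and a leak.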

      A high workload often causes applications to crash immediately after deployment in the production environment. This outcome is especially likely for leaking applications, where the high workload magnifies the leak more quickly until a memory allocation failure occurs.

      The goal of memory leak testing is to magnify numbers. Memory leaks are measured in terms of the number of bytes or kilobytes that cannot be garbage collected. The delicate task is to differentiate these amounts between the expected sizes of useful and unusable memory. This task is achieved more easily if the numbers are magnified, resulting in larger gaps and easier identification of inconsistencies. The following list contains important conclusions about memory leaks:
      • Long-running test

        Memory leak problems can manifest only after a period of time; therefore, memory leaks are found most easily during long-running tests. Short-running tests can lead to false alarms. It is sometimes difficult to know when a memory leak is occurring in the Java language, especially when memory usage has seemingly increased either abruptly or monotonically in a given period of time. The reason it is hard to detect a memory leak is that these kinds of increases can be valid or might be the intention of the developer. You can learn how to differentiate the delayed use of objects from completely unused objects by running applications for a longer period of time. Long-running application testing gives you higher confidence for whether the delayed use of objects is actually occurring.

      • Repetitive test

        In many cases, memory leak problems occur through successive repetitions of the same test case. The goal of memory leak testing is to establish a big gap between unusable memory and used memory in terms of their relative sizes. By repeating the same scenario over and over again, the gap is multiplied progressively. This testing helps if the number of leaks caused by a single execution of a test case is so small that it is hardly noticeable in one run.

        You can use repetitive tests at the system level or at the module level. The advantage of modular testing is better control. When a module is designed to keep its memory usage private, without creating external side effects, testing it for memory leaks is easier. First, record the memory usage before running the module. Then, run a fixed set of test cases repeatedly. At the end of the test run, record the current memory usage and check it for significant changes. Remember to suggest garbage collection before recording the actual memory usage, either by inserting a System.gc() call in the module where you want garbage collection to occur, or by using a profiling tool to force the event to occur.

      • Concurrency test

        Some memory leak problems can occur only when several threads are running in the application. Unfortunately, synchronization points are very susceptible to memory leaks because of the added complication in the program logic. Careless programming can lead to kept or unreleased references. The incidence of memory leaks is often facilitated or accelerated by increased concurrency in the system. The most common way to increase concurrency is to increase the number of clients in the test driver.

        Consider the following points when choosing which test cases to use for memory leak testing:
        • A good test case exercises areas of the application where objects are created. Most of the time, knowledge of the application is required. A description of the scenario can suggest creation of data spaces, such as adding a new record, creating an HTTP session, performing a transaction and searching a record.
        • Look at areas where collections of objects are used. Typically, memory leaks are composed of objects within the same class. Also, collection classes such as Vector and Hashtable are common places where references to objects are implicitly stored by calling corresponding insertion methods. For example, the get method of a Hashtable object does not remove its reference to the retrieved object.
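      The module-level repetitive test described in the list above can be sketched as follows. The runModuleTestCase() method is a hypothetical placeholder for the module's fixed set of test cases:

```java
public class LeakCheck {
    // Suggest a collection, then report the heap that is still in use.
    public static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // a suggestion only; the JVM may defer or ignore it
        return rt.totalMemory() - rt.freeMemory();
    }

    static void runModuleTestCase() {
        // Hypothetical placeholder: exercise the module under test here.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i);
        }
    }

    public static void main(String[] args) {
        long before = usedMemory();
        for (int i = 0; i < 10; i++) {
            runModuleTestCase();
        }
        long after = usedMemory();
        // Growth that increases steadily across repeated, longer runs
        // suggests a leak; a one-off difference does not.
        System.out.println("growth in bytes: " + (after - before));
    }
}
```

      A single run proves nothing; as the text notes, the signal comes from repeating the scenario and watching whether the post-collection memory usage keeps climbing.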
      [iSeries] You can use these tools to detect memory leaks:
      • Tivoli Performance Viewer. Refer to the topic Enabling the Java virtual machine profiler data for more information about how to use this tool.
      • The WRKJVMJOB (Work JVM Jobs) command, which lists and monitors the Java virtual machines that are running in active jobs. This command is available in i5/OS Version 6, Release 1, and higher.
      • The GENJVMDMP (Generate JVM Dump) command, which generates JVM dumps for a specific job. This command is available in i5/OS Version 6, Release 1, and higher.

      [AIX HP-UX Linux Solaris Windows] You can use the Tivoli Performance Viewer to help find memory leaks.

      [iSeries] [AIX HP-UX Linux Solaris Windows] For the best results, repeat experiments with increasing duration, such as 1000, 2000, and 4000 page requests. The Tivoli Performance Viewer graph of used memory should have a sawtooth shape. Each drop on the graph corresponds to a garbage collection. There is a memory leak if one of the following occurs:
      • The amount of memory used immediately after each garbage collection increases significantly. The sawtooth pattern looks more like a staircase.
      • The jagged pattern has an irregular shape.

      [iSeries] Also, look at the difference between the number of objects allocated and the number of objects freed. If the gap between the two increases over time, there is a memory leak.

      Heap consumption indicating a possible leak during a heavy workload (the application server is consistently near 100% CPU utilization), yet appearing to recover during a subsequent lighter or near-idle workload, is an indication of heap fragmentation. Heap fragmentation can occur when the JVM can free sufficient objects to satisfy memory allocation requests during garbage collection cycles, but the JVM does not have the time to compact small free memory areas in the heap to larger contiguous spaces.

      Another form of heap fragmentation occurs when small objects (less than 512 bytes) are freed. The objects are freed, but the storage is not recovered, resulting in memory fragmentation until a heap compaction has been run.

      [AIX HP-UX Linux Solaris Windows] [z/OS] Heap fragmentation can be reduced by forcing compactions to occur, but there is a performance penalty for doing this. Use the Java -X command to see the list of memory options.

  6. Tune garbage collection

    [z/OS] The Java virtual machine (JVM) uses a parallel garbage collector to fully exploit an SMP during most garbage collection cycles. The HotSpot based JVMs have a single-threaded garbage collector.

    Examining Java garbage collection gives insight to how the application is utilizing memory. Garbage collection is a Java strength. By taking the burden of memory management away from the application writer, Java applications are more robust than applications written in languages that do not provide garbage collection. This robustness applies as long as the application is not abusing objects. Garbage collection normally consumes from 5% to 20% of total execution time of a properly functioning application. If not managed, garbage collection is one of the biggest bottlenecks for an application.

    Monitoring garbage collection during the execution of a fixed workload enables you to gain insight into whether the application is over-utilizing objects. Garbage collection can even detect the presence of memory leaks.

    You can use JVM settings to configure the type and behavior of garbage collection. When the JVM cannot allocate an object from the current heap because of lack of contiguous space, the garbage collector is invoked to reclaim memory from Java objects that are no longer being used. Each JVM vendor provides unique garbage collector policies and tuning parameters.

    You can use the Verbose garbage collection setting in the administrative console to enable garbage collection monitoring. The output from this setting includes class garbage collection statistics. The format of the generated report is not standardized between different JVMs or release levels.

    [iSeries] To ensure meaningful statistics, run a fixed workload until the application state is steady. It usually takes several minutes to reach a steady state.

    [iSeries] You can also use object statistics in the Tivoli Performance Viewer to monitor garbage collection statistics.

    For more information about monitoring garbage collection, refer to the following documentation:
    • Performance: Resources for learning for a description of the IBM verbose:gc output
    • [iSeries] The description of the WRKJVMJOB command in the i5/OS Information Center. The WRKJVMJOB (Work JVM Jobs) command allows the user to list and monitor Java Virtual Machines running in active jobs. This command is available in i5/OS Version 6, Release 1 and higher.
    • [iSeries] The description of the GENJVMDMP command in the i5/OS Information Center. The GENJVMDMP (Generate JVM Dump) command generates JVM dumps for a specific job. This command is available in i5/OS Version 6, Release 1 and higher.
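    In addition to the tools above, any Java 5.0 or later JVM exposes garbage collection counters through the standard java.lang.management API. This generic sketch prints the collection count and cumulative collection time for each collector in the running JVM:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Both methods return -1 if the value is undefined for this collector.
            System.out.println(gc.getName()
                    + " collections=" + gc.getCollectionCount()
                    + " timeMs=" + gc.getCollectionTime());
        }
    }
}
```

    Sampling these counters before and after a fixed workload gives the number and average duration of collections without parsing verbose garbage collection output.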

    To adjust your JVM garbage collection settings:

    1. In the administrative console, click Servers > Application Servers > server.
    2. [AIX HP-UX Linux Solaris Windows] [iSeries] Under Server Infrastructure, click Java and Process Management > Process Definition > Java Virtual Machine.
    3. [z/OS] Under Server Infrastructure, click Java and Process Management > Process Definition.
    4. [z/OS] Select either Control or Servant, and then select Java Virtual Machine.
    5. Enter the -X option you want to change in the Generic JVM arguments field.
    6. Click OK.
    7. Save your changes to the master configuration.
    8. Stop and restart the application server.

    For more information about the -X options for the different JVM garbage collectors, refer to the following:

    The IBM virtual machine for Java garbage collector.
    A complete guide to the IBM Java garbage collector is provided in the IBM Developer Kit and Runtime Environment, Java2 Technology Edition, Version 5.0 Diagnostics Guide. This document is available on the developerWorks Web site.

    [AIX HP-UX Linux Solaris Windows] [z/OS] Use the Java -X option to view a list of memory options.

    • [AIX HP-UX Linux Solaris Windows] [z/OS] -Xgcpolicy
      Starting with Java 5.0, the IBM virtual machine for Java provides four policies for garbage collection. Each policy provides unique benefits.
      • optthruput, which is the default, provides high throughput but with longer garbage collection pause times. During a garbage collection, all application threads are stopped for the mark and sweep phases, and for compaction when compaction is needed. optthruput is sufficient for most applications.
      • optavgpause, which reduces garbage collection pause time by performing the mark and sweep phases of garbage collection concurrently with application execution. This concurrent execution causes a small performance impact to overall throughput.
      • gencon, which is new in IBM Java 5.0, is a generational garbage collector for the IBM virtual machine for Java. The generational scheme attempts to achieve high throughput along with reduced garbage collection pause times. To accomplish this goal, the heap is split into new and old segments. Long lived objects are promoted to the old space while short-lived objects are garbage collected quickly in the new space. The gencon policy provides significant benefits for many applications, but is not suited to all applications and is generally more difficult to tune.
      • subpool, which can increase performance on multiprocessor systems that commonly use more than 8 processors. This policy is only available on IBM pSeries and zSeries processors. The subpool policy is similar to the optthruput policy except that the heap is divided into subpools that provide improved scalability for object allocation.
      Default: optthruput
      Recommended: optthruput
      Usage: -Xgcpolicy:optthruput sets the garbage collection policy to optthruput.

      Setting gcpolicy to optthruput disables concurrent mark. You should get the best throughput results when you use the optthruput policy unless you are experiencing erratic application response times, which is an indication that you might have pause time problems.

      Setting gcpolicy to optavgpause enables concurrent mark with its default values. This setting alleviates erratic application response times that normal garbage collection causes. However, this option might decrease overall throughput.

    • -Xnoclassgc

      By default, the JVM unloads a class from memory whenever there are no live instances of that class left. The overhead of loading and unloading the same class multiple times can decrease performance.

      Avoid trouble: You can use the -Xnoclassgc argument to disable class garbage collection. However, the performance impact of class garbage collection is typically minimal, and turning off class garbage collection in a Java Platform, Enterprise Edition (Java EE) based system, with its heavy use of application class loaders, might effectively create a memory leak of class data, and cause the JVM to throw an Out-of-Memory exception.

      When this option is used, if you have to redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.

      Default: Class garbage collection is enabled.
      Recommended:

      [AIX HP-UX Linux Solaris Windows] [z/OS] Do not disable class garbage collection.

      [iSeries] Do not disable class garbage collection.

      Usage: -Xnoclassgc disables class garbage collection.
    The Sun JVM garbage collector. [Solaris]

    On the Solaris platform, an application server runs on the Java HotSpot VM rather than the IBM virtual machine for Java. It is important to use the correct tuning parameters with the Sun JVM in order to utilize its performance optimizing features.

    The Java HotSpot VM relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

    • -XX:SurvivorRatio

      The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into eden, where new objects are allocated, and survivor space, where new objects that are still in use survive their first few garbage collections before being promoted to old objects. The survivor ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Because WebSphere Application Server instances generate more medium and long lived objects than other application servers, this setting should be lowered from the default.

      Default: 32
      Recommended: 16
      Usage: -XX:SurvivorRatio=16
    • -XX:PermSize

      The section of the heap reserved for the permanent generation holds all of the reflective data for the JVM. This size should be increased to optimize the performance of applications that dynamically load and unload a lot of classes. Setting this to a value of 128 megabytes eliminates the overhead of increasing this part of the heap.

      Recommended: 128m
      Usage: -XX:PermSize=128m sets perm size to 128 megabytes.
    • -Xmn

      This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this parameter is typically too low, resulting in a high number of minor garbage collections. Setting this value too high can cause the JVM to perform only major (or full) garbage collections. These usually take several seconds and are extremely detrimental to the overall performance of your server. You must keep this setting below half of the overall heap size to avoid this situation.

      Default: 2228224 bytes
      Recommended: Approximately 1/4 of the total heap size
      Usage: -Xmn256m sets the size to 256 megabytes.
    • -Xnoclassgc

      By default, the JVM unloads a class from memory whenever there are no live instances of that class left. The overhead of loading and unloading the same class multiple times can decrease performance.

      Avoid trouble: You can use the -Xnoclassgc argument to disable class garbage collection. However, the performance impact of class garbage collection is typically minimal, and turning off class garbage collection in a Java Platform, Enterprise Edition (Java EE) based system, with its heavy use of application class loaders, might effectively create a memory leak of class data, and cause the JVM to throw an Out-of-Memory exception.

      When this option is used, if you have to redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.

      Default: Class garbage collection is enabled.
      Recommended:

      [AIX HP-UX Linux Solaris Windows] [z/OS] Do not disable class garbage collection.

      [iSeries] Do not disable class garbage collection.

      Usage: -Xnoclassgc disables class garbage collection.

    For additional information on tuning the Java HotSpot virtual machine, see Performance Documentation for the Java HotSpot VM.
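    The -Xmn guidance above (roughly one quarter of the total heap, and never more than half) is simple arithmetic that you can compute from the configured maximum heap. The following sketch derives a candidate value; the class name YoungGenSizing is illustrative:

```java
public class YoungGenSizing {
    // Suggest a new-generation size of one quarter of the maximum heap,
    // per the guideline above. Callers should keep -Xmn below half the heap.
    public static long suggestedXmnBytes(long maxHeapBytes) {
        return maxHeapBytes / 4;
    }

    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory(); // reflects -Xmx for this JVM
        long xmn = suggestedXmnBytes(maxHeap);
        System.out.println("suggested -Xmn" + (xmn >> 20) + "m (max heap "
                + (maxHeap >> 20) + "m)");
    }
}
```

    For example, with a 1024 MB maximum heap this suggests -Xmn256m, matching the usage example above; treat the result as a starting point for measurement, not a final setting.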

    The HP virtual machine for Java garbage collector. [HP-UX]

    The HP virtual machine for Java relies on generational garbage collection to achieve optimum performance. The following command line parameters are useful for tuning garbage collection.

    • -XX:+AggressiveHeap

      This setting allows the JVM to automatically and aggressively tune the Java heap. If you are passing multiple arguments to the JVM, you should specify this argument first because it can cause some of the other arguments, such as the heap size settings, to be overridden. The aggressive tuning algorithm respects the constraints of any argument that is passed to the JVM after the -XX:+AggressiveHeap argument.

      Default: off
      Recommended: on
      Usage: -XX:+AggressiveHeap enables automatic tuning of the Java heap
    • -Xoptgc

      This setting optimizes the JVM for applications with many short-lived objects. If this parameter is not specified, the JVM usually does a major (full) garbage collection. Full garbage collections can take several seconds and can significantly degrade server performance.

      Default: off
      Recommended: on
      Usage: -Xoptgc enables optimized garbage collection.
    • -XX:SurvivorRatio

      The Java heap is divided into a section for old (long lived) objects and a section for young objects. The section for young objects is further subdivided into eden, where new objects are allocated, and survivor space, where new objects that are still in use survive their first few garbage collections before being promoted to old objects. The survivor ratio is the ratio of eden to survivor space in the young object section of the heap. Increasing this setting optimizes the JVM for applications with high object creation and low object preservation. Because WebSphere Application Server instances generate more medium and long lived objects than other application servers, this setting should be lowered from the default.

      Default: 32
      Recommended: 16
      Usage: -XX:SurvivorRatio=16
    • -XX:MaxTenuringThreshold

      This setting specifies the number of collections that an object can remain in the new generation before it is promoted to the old generation. Raising the value specified for this setting forces objects to stay in the new generation longer. Lowering the value specified for this setting causes objects to get promoted to the old generation sooner.

      Default: 31
      Recommended: 32
      Usage: -XX:MaxTenuringThreshold=32
    • -XX:+ForceMmapReserved

      This command disables the lazy swap functionality and allows the operating system to use larger memory pages, thereby optimizing access to the memory that makes up the Java heap. By default, the Java heap is allocated lazy swap space. Lazy swap functionality saves swap space because pages of memory are allocated as needed. However, the lazy swap functionality forces the use of 4KB pages. In large heap systems, this allocation of memory can spread the heap across hundreds of thousands of pages.

      Default: off
      Recommended: on
      Usage: -XX:+ForceMmapReserved disables the lazy swap functionality.
    • -XX:+UseParallelGC

      This command enables parallel garbage collection for the new generation. Issuing this command on a multi-processor system can decrease the amount of time that it takes the JVM to complete a partial garbage collection cycle.

      Default: off
      Recommended: on
      Usage: -XX:+UseParallelGC enables parallel garbage collection for the new generation.
    • -XX:+UseParallelOldGC

      This command enables parallel garbage collection for the old generation. Issuing this command on a multi-processor system can decrease the amount of time that it takes for the JVM to complete a full garbage collection cycle.

      Default: off
      Recommended: on
      Usage: -XX:+UseParallelOldGC enables parallel garbage collection for the old generation
    • -Xmn

      This setting controls how much space the young generation is allowed to consume on the heap. Properly tuning this parameter can reduce the overhead of garbage collection, improving server response time and throughput. The default setting for this is typically too low, resulting in a high number of minor garbage collections.

      Default: No default
      Recommended: Approximately 1/4 of the total heap size
      Usage: -Xmn256m sets the size to 256 megabytes.
    • Virtual Page Size

      Setting the Java virtual machine instruction and data page sizes to 64MB can improve performance.

      Default: 4MB
      Recommended: 64MB
      Usage: Use the following command. The command output provides the current operating system characteristics of the process executable:
      chatr +pi64M +pd64M /opt/WebSphere/AppServer/java/bin/PA_RISC2.0/native_threads/java
    • -Xnoclassgc

      By default, the JVM unloads a class from memory whenever there are no live instances of that class left. The overhead of loading and unloading the same class multiple times can decrease performance.

      Avoid trouble: You can use the -Xnoclassgc argument to disable class garbage collection. However, the performance impact of class garbage collection is typically minimal, and turning off class garbage collection in a Java Platform, Enterprise Edition (Java EE) based system, with its heavy use of application class loaders, might effectively create a memory leak of class data, and cause the JVM to throw an Out-of-Memory exception.

      When this option is used, if you have to redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.

      Default: Class garbage collection is enabled.
      Recommended:

      [AIX HP-UX Linux Solaris Windows] [z/OS] Do not disable class garbage collection.

      [iSeries] Do not disable class garbage collection.

      Usage: -Xnoclassgc disables class garbage collection.

    For additional information on tuning the HP virtual machine, see Java technology software HP-UX 11i.

  7. [HP-UX] Tune the HP virtual machine for Java for HP-UX. Set the following options to improve application performance:
    -XX:SchedulerPriorityRange=SCHED_NOAGE 
    -XX:-ExtraPollBeforeRead
    -XX:+UseSpinning
    -Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.DevPollSelectorProvider

    [HP-UX] For additional information on tuning the HP virtual machine, see Java technology software HP-UX 11i.

  8. [Solaris] Select either client or server mode for the Java HotSpot VM on Solaris.

    The Java Virtual Machine that WebSphere Application Server uses on the Solaris platform runs in two modes: client or server. Each mode has its advantages.

    Client mode is a good mode to select if your environment:
    • Requires quick recovery after a server reboot or crash. Client mode allows the virtual machine to warm up faster, which lets an application server service a large number of requests very quickly after startup.
    • Has physical RAM limitations. Client mode uses less memory than server mode uses. This memory savings is more significant if your overall JVM size is small because of hardware limitations. For example, your overall JVM size might be small because you are running several JVMs on a single piece of hardware.

    If you want to maximize performance on application servers that are rarely restarted, you should run the HotSpot JVM in server mode. When the JVM is in server mode, it takes several times longer for an application server to get to a state where it can service a large number of requests. However, after it gets to that state, server mode can significantly outperform a comparable JVM running in client mode.

    The HotSpot JVM running in server mode uses a high optimization compiler that optimizes and re-optimizes the Java code during the initial warm up stage. All of this optimization work takes a while, but once the JVM is warmed up, application servers run significantly faster than they do in client mode on the same hardware.

    The Solaris implementation of Java 5.0 examines your hardware and tries to select the correct JVM mode for your environment. If the JVM determines that it is running on a server level machine, the JVM automatically enables server mode. In Java 1.4.2 and earlier, the default mode is client mode, and you must use the -server flag on the JVM command line to enable server mode.

    Because the JVM automatically enables server mode if your machine has at least 2 CPUs and 2 GB of memory, your JVMs probably default to server mode. However, you can use the -client and -server flags in the generic JVM arguments to force the virtual machine into either mode if the mode the JVM selects for you does not fit your environment.
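    To confirm which mode a running JVM selected, you can inspect the java.vm.name system property; on HotSpot JVMs its value conventionally contains "Server VM" or "Client VM". This is a HotSpot naming convention rather than a guarantee of the Java specification, so the sketch below treats any other value as unknown:

```java
public class VmMode {
    // Classifies a HotSpot VM name as "server", "client", or "unknown".
    // The "Server VM"/"Client VM" substrings are a HotSpot convention.
    public static String mode(String vmName) {
        if (vmName == null) {
            return "unknown";
        }
        String lower = vmName.toLowerCase();
        if (lower.contains("server")) {
            return "server";
        }
        if (lower.contains("client")) {
            return "client";
        }
        return "unknown";
    }

    public static void main(String[] args) {
        String vmName = System.getProperty("java.vm.name");
        System.out.println("java.vm.name=" + vmName + " -> " + mode(vmName));
    }
}
```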

  9. Enable localhost name caching. By default in the IBM SDK for Java, the static method java/net/InetAddress.getLocalHost does not cache its result. This method is used throughout WebSphere Application Server, but particularly in administrative agents such as the deployment manager and node agent. If the localhost address of a process will not change while it is running, you can use a built-in cache for the localhost lookup by setting the com.ibm.cacheLocalHost system property to true. Refer to the Java virtual machine custom properties topic in the information center for instructions on setting JVM custom properties on the various types of processes.
    Note: The address for servers configured using DHCP change over time. Do not set this property unless you are using statically assigned IP addresses for your server.
    Default: com.ibm.cacheLocalHost = false
    Recommended: com.ibm.cacheLocalHost = true (see description)
    Usage: Specifying -Dcom.ibm.cacheLocalHost=true enables the getLocalHost cache.
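    On SDKs without the com.ibm.cacheLocalHost property, application code can approximate the same effect by resolving the local host once and reusing the result. The following is a hypothetical sketch (the LocalHostCache class is not part of the product), appropriate only when the address is statically assigned, never with DHCP:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public final class LocalHostCache {
    // Resolved once on first use and reused for the life of the process.
    private static volatile InetAddress cached;

    private LocalHostCache() {
    }

    public static InetAddress getLocalHost() throws UnknownHostException {
        InetAddress addr = cached;
        if (addr == null) {
            addr = InetAddress.getLocalHost(); // one real lookup
            cached = addr;
        }
        return addr;
    }
}
```

    Subsequent calls return the same InetAddress instance instead of repeating the lookup, which mirrors the trade-off described above: faster lookups at the cost of never observing an address change.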
  10. [AIX HP-UX Linux Solaris Windows] [z/OS] Enable class sharing in a cache.

    The share classes option of the IBM Java 2 Runtime Environment (J2RE) Version 1.5.0 lets you share classes in a cache. Sharing classes in a cache can improve startup time and reduce memory footprint. Processes, such as application servers, node agents, and deployment managers, can use the share classes option.

    [Solaris] [HP-UX] Important: The IBM J2RE 1.5.0 is currently not used on:
    • [Solaris] Solaris
    • [HP-UX] HP-UX

    If you use this option, you should clear the cache when the process is not in use. To clear the cache, either call the app_server_root/bin/clearClassCache.bat/sh utility or stop the process and then restart the process.

    If you need to disable the share classes option for a process, specify the generic JVM argument -Xshareclasses:none for that process:

    1. In the administrative console, click Servers > Application Servers > server.
    2. [AIX HP-UX Linux Solaris Windows] Under Server Infrastructure, click Java and Process Management > Process Definition > Java Virtual Machine.
    3. [z/OS] Under Server Infrastructure, click Java and Process Management > Process Definition.
    4. [z/OS] Select either Control or Servant, and then select Java Virtual Machine.
    5. Enter -Xshareclasses:none in the Generic JVM arguments field.
    6. Click OK.
    7. Save your changes to the master configuration.
    8. Stop and restart the application server.
    Default: The Share classes in a cache option is enabled.
    Recommended: Leave the share classes in a cache option enabled.
    Usage: -Xshareclasses:none disables the share classes in a cache option.
  11. [Solaris] [HP-UX] Minimize memory consumption.
    [Solaris] The default tuning preferences for the Sun Java 5.0 JVM use more memory than previous JVM versions. This additional memory helps to maximize throughput. However, it can cause problems if you are running environments like JVM hoteling, where physical memory is usually at a premium. You can add the following tuning parameters to the generic JVM arguments if you need to tune the Sun Java 5.0 JVM for minimal memory consumption:
    -client -XX:MaxPermSize=256m -XX:-UseLargePages -XX:+UseSerialGC

    [Solaris] Setting these parameters might reduce throughput and might result in slightly slower server startup times. If you are running very large applications, you can specify a higher value for the MaxPermSize setting.

    [HP-UX] The default tuning preferences for the HP Java 5 JVM use more memory than previous JVM versions. This additional memory helps to maximize throughput. However, it can cause problems if you are running environments like JVM hoteling, where physical memory is usually at a premium. You can add the following tuning parameters to the generic JVM arguments if you need to tune the HP Java 5 JVM for minimal memory consumption:
    -XX:-UseParallelGC -XX:-UseAdaptiveSizePolicy
    Setting these parameters might result in slightly slower server startup times.
  12. Tune the configuration update process for a large cell configuration.
    In a large cell configuration, you might need to determine whether configuration update performance or consistency checking is more important. The deployment manager maintains a master configuration repository for the entire cell. By default, when the configuration changes, the product compares the configuration in the workspace with the master repository to maintain workspace consistency. However, the consistency verification process can cause an increase in the amount of time to save a configuration change or to deploy a large number of applications. The following factors influence how much time is required:
    • The more application servers or clusters there are defined in the cell, the longer it takes to save a configuration change.
    • The more applications there are deployed in a cell, the longer it takes to save a configuration change.
    If the amount of time required to save a configuration change is unsatisfactory, you can add the config_consistency_check custom property to your JVM settings and set the value of this property to false. To set this custom property, complete the following steps:
    1. In the administrative console, click System administration > Deployment manager.
    2. Under Server Infrastructure, select Java and Process Management, and then click Process Definition.
    3. Under Additional Properties, click Java Virtual Machine > Custom Properties > New.
    4. Enter config_consistency_check in the Name field and false in the Value field.
    5. Click OK and then save these changes to the master configuration.
    6. Restart the server.
    Supported configurations: The config_consistency_check custom property affects the deployment manager process only. It does not affect other processes, including the node agent and application server processes. The consistency check is not performed on these processes. However, within the SystemOut.log files for these processes, you might see a note that the consistency check is disabled. For these non-deployment manager processes, you can ignore this message.

    If you are using the wsadmin command wsadmin -conntype none in local mode, you must set the config_consistency_check property to false before issuing this command.

What to do next

Each Java vendor provides detailed information on performance and tuning for their JVM. Use the following Web sites to obtain additional tuning information for a specific Java runtime environment:

[HP-UX] If you use DB2, consider disabling SafepointPolling technology in the HP virtual machine for Java for HP-UX. Developed to ensure safepoints for Java threads, SafepointPolling technology generates a signal that can interfere with the signal between WebSphere Application Server and a DB2 database. When this interference occurs, database deadlocks often result. Prevent the interference by starting the JVM with the -XX:-SafepointPolling option, which disables SafepointPolling during runtime.





Last updated: Aug 31, 2013 4:28:44 AM CDT
http://www14.software.ibm.com/webapp/wsbroker/redirect?version=pix&product=was-nd-mp&topic=tprf_tunejvm_v61
File name: tprf_tunejvm_v61.html