This topic applies only to the IBM i operating system.

Tuning the Classic JVM (IBM i)

An application server is a Java™ based server and requires a Java virtual machine (JVM) environment to run and support the enterprise applications that run on it. As part of configuring your application server, you can configure the Classic JVM to tune performance and system resource usage. The term Classic JVM refers to the IBM® i Java Developer Kit 6.0 JVM that is provided with the IBM i product, formerly referred to as i5/OS®.

Before you begin

The Classic JVM is not supported on IBM i Version 7.1 or higher. If you recently upgraded to IBM i Version 7.1 or higher, use the enablejvm command to enable your Web server plug-ins, application clients, and application servers to use the 32-bit (std32) or 64-bit (std64) version of the IBM Technology for Java Virtual Machine.

About this task

Each JVM vendor provides detailed information on performance and tuning for their JVM. Use the information provided in this topic in conjunction with the information that is provided with the JVM that is running on your system.

A Java SE Runtime Environment provides the environment for running enterprise applications and application servers. Therefore the Java configuration plays a significant role in determining performance and system resource consumption for an application server and the applications that run on it.

Version 6.0 of the Classic JVM includes the latest in Java Platform, Enterprise Edition (Java EE) specifications, and provides performance and stability improvements over previous versions.

Even though JVM tuning depends on the JVM provider you use, some general tuning concepts apply to all JVMs.

The following steps describe how to perform these types of tuning for the Classic JVM. You do not have to perform the steps in any specific order.

Procedure

  1. Change the setting for the JIT compiler

    A JIT compiler is a platform-specific compiler that generates machine instructions for each method as needed. For more information about running the JIT compiler on IBM i, refer to the IBM i Information Center.

    1. In the administrative console, click Servers > Server Types > WebSphere application servers > server_name.
    2. In the section Server Infrastructure, click Java and process management > Process definition > Java virtual machine.
    3. Select the Disable JIT option if you want to disable the JIT.

      You do not have to perform this substep if you are running on IBM i 6.1 or higher. Starting with IBM i 6.1, the JIT compiler always runs with jitc. Earlier releases supported both the jitc and jitc_de options; the jitc_de option includes direct processing.

    4. Enter -Djava.compiler=jitc in the Generic JVM arguments field if you want to run with the full JIT compiler.
    5. Click Apply.
    6. Click Save to save changes to the master configuration.
    7. Stop and restart the application server.
    Default: JIT is enabled.
    Recommended: Do not disable the JIT compiler, and enable the full JIT compiler. The os400.jit.mmi.threshold property can have a significant effect on performance. For more information about the JIT compiler and the os400.jit.mmi.threshold property, refer to the IBM i Information Center.
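
    To confirm which compiler setting a running JVM picked up, you can print the java.compiler system property. The following is a minimal Java sketch; the class name is illustrative only and is not part of any product API.

      // Minimal sketch: report the JIT compiler setting that this JVM is using.
      public class JitSettingCheck {
          public static void main(String[] args) {
              // Reflects a -Djava.compiler=jitc generic JVM argument, if one was specified.
              String compiler = System.getProperty("java.compiler");
              System.out.println("java.compiler = " + (compiler == null ? "(not set)" : compiler));
          }
      }
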
  2. Optimize the startup and runtime performance.

    In some environments, such as a development environment, it is more important to optimize the startup performance of your application server rather than the runtime performance. In other environments, it is more important to optimize the runtime performance.

    The Java Just-in-Time (JIT) compiler affects whether startup or runtime performance is optimized. The initial optimization level that the compiler uses influences the length of time that is required to compile a class method, and the length of time that is required to start the server. For faster startups, reduce the initial optimization level that the compiler uses. However, if you reduce the initial optimization level, the runtime performance of your applications might decrease because the class methods are now compiled at a lower optimization level.

  3. Configure the heap size.

    The heap size settings control garbage collection in the Java SE Development Kit 6 (Classic) that is provided with IBM i. The initial heap size is a threshold that triggers new garbage collection cycles. For example, if the initial heap size is 10 MB, a new collection cycle is triggered as soon as the JVM detects that 10 MB have been allocated since the last collection cycle.

    Smaller heap sizes result in more frequent garbage collections than larger heap sizes. If the maximum heap size is reached, the garbage collector stops operating asynchronously, and user threads are forced to wait for collection cycles to complete. This situation has a significantly negative impact on performance. A maximum heap size of 0 (*NOMAX) ensures that garbage collection operates asynchronously.

    The maximum heap size can affect application performance. The maximum heap size specifies the maximum amount of object space that the garbage collected heap can consume. If the maximum heap size is too small, performance might decrease significantly, or the application might receive out of memory errors when the maximum heap size is reached.

    Because of the complexity of determining a correct value for the maximum heap size, a value of 0, which indicates that there is no size limit, is recommended unless an absolute limit on the object space for the garbage collected heap size is required.

    If, because of memory limitations, you need to set a maximum heap size other than *NOMAX, run multiple tests to determine the proper value. Running multiple tests helps you determine the appropriate value for your configuration and workload combinations. To prevent a runaway JVM, set the maximum heap size to a value that is larger than the size to which you expect the heap to grow, but not so large that it affects the performance of the rest of the machine.

    For one of the tests, complete the following actions (a worked sketch of the arithmetic follows the list):
    1. Run your application server under a heavy workload with a maximum heap value of 0.
    2. Use the DMPJVM command or iDoctor to determine the maximum size of the garbage collected heap for the JVM.
    3. Multiply the size of the garbage collected heap by 1.25. The result is a reasonable estimate for the maximum heap size because the smallest acceptable value for the maximum heap size is 125 percent of the garbage collected heap size.
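
    As an illustration of this arithmetic only, the following Java sketch uses a hypothetical DMPJVM result of 512 MB:

      // Minimal sketch: estimate a maximum heap size from an observed garbage collected heap size.
      public class MaxHeapEstimate {
          public static void main(String[] args) {
              int observedHeapMb = 512;   // hypothetical peak heap size reported by DMPJVM or iDoctor
              int suggestedMaxMb = (int) Math.ceil(observedHeapMb * 1.25);
              System.out.println("Suggested maximum heap size: " + suggestedMaxMb + " MB");   // prints 640 MB
          }
      }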

    Because you can specify a larger value for the maximum heap size without affecting performance, it is recommended that you set the largest possible value based on the resource restrictions of the JVM or the limitations of your system configuration.

    After you determine an appropriate value for the maximum heap size, you might need to set up or adjust the pool in which the JVM runs. By default, application server jobs run in the base system pool, which is storage pool 2 as shown by the WRKSYSSTS command. However, you can specify a different pool. Do not set the maximum heap size to a value that is larger than 125 percent of the size of the pool in which the JVM is running. It is recommended that you run the JVM in its own memory pool with the memory permanently assigned to that pool, if possible.

    If the performance adjuster is set to adjust the memory pools, that is, the system value QPFRADJ is set to a value other than 0, then it is recommended that you use the WRKSHRPOOL command to specify a minimum size for the pool. The minimum size should be approximately equal to your garbage collected heap working set size. Setting a correct maximum heap size and properly configuring the memory pool can prevent a JVM with a memory leak from consuming system resources, while still yielding high performance.

    When a JVM must run in a shared pool, it is more difficult to determine an appropriate value for the maximum heap size. Other jobs running in the pool can cause the garbage collected heap pages to be aged out of the pool. If the garbage collected heap pages are removed from the pool because of their age, the garbage collector must fault the pages back into the pool on the next garbage collection cycle because the garbage collector requires access to all of the pages in the garbage collected heap. Because the Classic JVM does not stop all of the JVM threads to clean the heap, you might expect that excessive page faulting causes the garbage collector to slow down and the garbage collected heap to grow. However, the operating system automatically increases the size of the heap, and the threads continue to run.

    This heap growth is an artificial inflation of the garbage collected heap working set size, and must be considered if you want to specify a maximum heap value. When a small amount of artificial inflation occurs, the garbage collector reduces the size of the heap over time if the space remains unused and the activity in the pool returns to a steady state. However, in a shared pool, you might experience the following problems if the maximum heap size is not set correctly:
    • If the maximum heap size is too small, artificial inflation can result in severe performance degradation or system failure if the JVM experiences an out-of-memory error.
    • If the maximum heap size is set too large, the garbage collector might reach a point where it is unable to recover the artificial inflation of the garbage collected heap. In this case, performance is also negatively affected. A value that is too large might also keep the garbage collector from preventing a JVM failure. However, even if the value is too large, the garbage collector can still prevent the JVM from consuming excessive amounts of system resources.

    If you must set the maximum heap size to guarantee that the heap size does not exceed a given level, specify an initial heap size that is 80 - 90 percent smaller than the maximum heap size. For example, with a maximum heap size of 1000 MB, an initial heap size of 100 - 200 MB keeps the initial threshold 80 - 90 percent below the maximum. However, specify an initial value that is large enough not to negatively affect performance.

    The JVM uses defined thresholds to manage the storage that it is allocated. When the thresholds are reached, the garbage collector is invoked to free up unused storage. Therefore, garbage collection can cause significant degradation of Java performance. Before changing the initial and maximum heap sizes, you should consider the following information:
    • In the majority of cases you should set the maximum JVM heap size to a value that is higher than the initial JVM heap size. This setting allows for the JVM to operate efficiently during normal, steady state periods within the confines of the initial heap. This setting also allows the JVM to operate effectively during periods of high transaction volume because the JVM can expand the heap up to the value specified for the maximum JVM heap size. In some rare cases, where absolute optimal performance is required, you might want to specify the same value for both the initial and maximum heap size. This setting eliminates some overhead that occurs when the JVM expands or contracts the size of the JVM heap. Before changing any of the JVM heap sizes, verify that the JVM storage allocation is large enough to accommodate the new heap size.
    • Do not make the initial heap size so large that, although it initially improves performance by delaying garbage collection, response time is affected when garbage collection does occur because the collection process must run longer.

    To use the administrative console to configure the heap size:

    1. In the administrative console, click Servers > Server Types > WebSphere application servers > server_name.
    2. In the Server Infrastructure section, click Java and process management > Process definition > Java virtual machine.
    3. Specify a new value in either the Initial heap size or the Maximum heap size field.

      You can also specify values for both fields if you need to adjust both settings.

      For performance analysis, the initial and maximum heap sizes should be equal.

      The Initial heap size setting specifies, in megabytes, the amount of storage that is allocated for the JVM heap when the JVM starts. The Maximum heap size setting specifies, in megabytes, the maximum amount of storage that can be allocated to the JVM heap. Both of these settings have a significant effect on performance.

      Avoid trouble: Unlike with other JVM implementations, a large amount of free heap space is generally not a concern for the Java SE Development Kit 6 (Classic) that is provided with IBM i.

      The default maximum heap size is 0, which indicates that there is no maximum value. It is recommended that you do not change the maximum heap size. When the maximum heap size triggers a garbage collection cycle, the garbage collection stops operating asynchronously. When garbage collection stops operating asynchronously, the application server cannot process user threads until the garbage collection cycle ends, which significantly lowers performance. See the IBM i Information Center for more information on initial and maximum heap sizes.

    4. Click Apply.
    5. Click Save to save your changes to the master configuration.
    6. Stop and restart the application server.

    You can also use the following command-line parameters to adjust these settings. These parameters apply to all supported JVMs and are used to adjust the minimum and maximum heap size for each application server or application server instance.

    • -Xms

      This parameter controls the initial size of the Java heap. Tuning this parameter reduces the overhead of garbage collection, which improves server response time and throughput. For some applications, the default setting for this option might be too low, which causes a high number of minor garbage collections.

      Default: 96 MB
      Recommended: Workload specific, but higher than the default.
      Usage: Specifying -Xms256m sets the initial heap size to 256 MB.
    • -Xmx

      This parameter controls the maximum size of the Java heap. Increasing this parameter increases the memory available to the application server, and reduces the frequency of garbage collection. Increasing this setting can improve server response time and throughput. However, increasing this setting also increases the duration of a garbage collection when it does occur. This setting should never be increased above the system memory available for the application server instance. Increasing the setting above the available system memory can cause system paging and a significant decrease in performance.

      Default: 0, which indicates that there is no maximum value.
      Recommended: Keep the default value of 0 unless you must place an absolute limit on the size of the garbage collected heap.
      Usage: Specifying -Xmx512m sets the maximum heap size to 512 MB.
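
    After you restart the application server with new -Xms or -Xmx values, you can confirm what the JVM actually allocated. The following is a generic Java sketch that uses only the standard Runtime API, not a WebSphere interface; run it, for example, from a simple test servlet or a standalone class.

      // Minimal sketch: report the heap limits that the running JVM is using.
      public class HeapSettingsReport {
          public static void main(String[] args) {
              Runtime rt = Runtime.getRuntime();
              long mb = 1024L * 1024L;
              System.out.println("Current heap size: " + (rt.totalMemory() / mb) + " MB");
              System.out.println("Free heap space  : " + (rt.freeMemory() / mb) + " MB");
              // With a maximum heap size of 0 (*NOMAX), maxMemory() reports a very large value.
              System.out.println("Maximum heap size: " + (rt.maxMemory() / mb) + " MB");
          }
      }
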
  4. Tune Java memory.
    Enterprise applications written in the Java language involve complex object relationships and use large numbers of objects. Although the Java language automatically manages memory associated with object life cycles, understanding the application usage patterns for objects is important. In particular, verify that the following conditions exist:
    • The application is not over-utilizing objects
    • The application is not leaking objects
    • The Java heap parameters are set properly to handle a given object usage pattern
    1. Check for over-utilization of objects.
      You can use the following tools to monitor JVM object creation:
      • The IBM i DMPJVM command. The DMPJVM command dumps JVM information for a specific job.
      • The IBM i ANZJVM command. The ANZJVM (Analyze Java Virtual Machine) command collects information about the Java Virtual Machine (JVM) for a specified job. This command is available in IBM i 5.2 and higher.
      • The Performance Trace Data Visualizer (PTDV)
      • The Heap Analysis Tools for Java. This set of tools, which is sometimes referred to as Java Watcher or Heap Analyzer, is a component of the iDoctor for iSeries® suite of performance monitoring tools. The Heap Analysis Tools perform Java application heap analysis and object creation profiling, including size and identification, over time.

      Optimally, the average time between garbage collections should be at least five to six times the average duration of a single garbage collection. If it is not, the application is spending more than 15 percent of its time in garbage collection.

      If the information indicates a garbage collection bottleneck, there are two ways to clear the bottleneck. The most cost-effective way is to optimize the application by implementing object caches and pools. Use a Java profiler to determine which objects to target. If you cannot optimize the application, try adding memory, processors, and clones. Additional memory allows each clone to maintain a reasonable heap size. Additional processors allow the clones to run in parallel.
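
      The following is a minimal, hedged sketch of the object pool idea. It assumes a hypothetical expensive-to-create class, here called ReportBuffer, that a profiler identified as heavily allocated; it is not part of any WebSphere API.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Minimal sketch: reuse expensive objects instead of creating a new one for every request.
        public class ReportBufferPool {

            // Hypothetical expensive-to-create object identified by profiling.
            public static class ReportBuffer {
                private final StringBuilder data = new StringBuilder(64 * 1024);
                void reset() { data.setLength(0); }
            }

            private final Deque<ReportBuffer> pool = new ArrayDeque<ReportBuffer>();

            public synchronized ReportBuffer acquire() {
                // Reuse a pooled instance when one is available; otherwise create a new one.
                return pool.isEmpty() ? new ReportBuffer() : pool.pop();
            }

            public synchronized void release(ReportBuffer buffer) {
                buffer.reset();    // clear per-request state before the object is reused
                pool.push(buffer);
            }
        }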

    2. Test for memory leaks.

      Memory leaks in the Java language are a dangerous contributor to garbage collection bottlenecks. Memory leaks are more damaging than memory overuse because a memory leak ultimately leads to system instability. Over time, garbage collection occurs more frequently until the heap is exhausted and the Java code fails with a fatal out-of-memory exception. Memory leaks occur when an unused object has references that are never freed. Memory leaks most commonly occur in collection classes, such as Hashtable, because the table always has a reference to the object, even after the real references are deleted.
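
      As a hedged illustration of this pattern, the following sketch keeps entries in a long-lived Hashtable and never removes them; the class and field names are hypothetical.

        import java.util.Hashtable;

        // Minimal sketch of a collection-class leak: objects placed in the table are never removed,
        // so they can never be garbage collected.
        public class SessionCache {
            private static final Hashtable<String, Object> CACHE = new Hashtable<String, Object>();

            public static void remember(String sessionId, Object state) {
                CACHE.put(sessionId, state);   // reference is kept for the life of the JVM
            }

            // The corresponding remove(sessionId) call is missing, which is the leak.
        }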

      A high workload often causes an application that has a memory leak to crash soon after it is deployed in the production environment, because the high workload accelerates the magnification of the leak until memory allocation failures occur.

      The goal of memory leak testing is to magnify numbers. Memory leaks are measured in terms of the amount of bytes or kilobytes that cannot be garbage collected. The delicate task is to differentiate between the expected size of useful memory and the memory that cannot be reclaimed. This task is achieved more easily if the numbers are magnified, resulting in larger gaps and easier identification of inconsistencies. The following list provides insight on how to interpret the results of your memory leak testing:
      • Long-running test

        Memory leak problems can manifest only after a period of time; therefore, memory leaks are found more easily during long-running tests. Short-running tests might provide invalid indications of where the memory leaks are occurring. It is sometimes difficult to know when a memory leak is occurring in the Java language, especially when memory usage has seemingly increased either abruptly or monotonically in a given period of time. The reason it is hard to detect a memory leak is that these kinds of increases can be valid or might be the intention of the developer. You can learn how to differentiate the delayed use of objects from completely unused objects by running applications for a longer period of time. Long-running application testing gives you higher confidence about whether the delayed use of objects is actually occurring.

      • Repetitive test

        In many cases, memory leak problems occur through successive repetitions of the same test case. The goal of memory leak testing is to establish a big gap between unusable memory and used memory in terms of their relative sizes. By repeating the same scenario over and over again, the gap is multiplied progressively. This testing helps if the number of leaks caused by a single execution of a test case is so minimal that it is hardly noticeable in one run.

        You can use repetitive tests at the system level or the module level. The advantage of modular testing is better control. When a module is designed to keep its memory usage private, without creating external side effects, testing for memory leaks is easier. First, the memory usage before running the module is recorded. Then, a fixed set of test cases is run repeatedly. At the end of the test run, the current memory usage is recorded and checked for significant changes. Remember to suggest garbage collection when recording the actual memory usage, either by inserting System.gc() in the module where you want garbage collection to occur, or by using a profiling tool to force the event to occur. A minimal sketch of this approach follows this list.

      • Concurrency test

        Some memory leak problems can occur only when there are several threads running in the application. Unfortunately, synchronization points are very susceptible to memory leaks because of the added complication in the program logic. Careless programming can leave references kept or never released. The incidence of memory leaks is often facilitated or accelerated by increased concurrency in the system. The most common way to increase concurrency is to increase the number of clients in the test driver.

        Consider the following points when choosing which test cases to use for memory leak testing:
        • A good test case exercises areas of the application where objects are created. Most of the time, knowledge of the application is required. A description of the scenario can suggest creation of data spaces, such as adding a new record, creating an HTTP session, performing a transaction and searching a record.
        • Look at areas where collections of objects are used. Typically, memory leaks are composed of objects within the same class. Also, collection classes such as Vector and Hashtable are common places where references to objects are implicitly stored by calling corresponding insertion methods. For example, the get method of a Hashtable object does not remove its reference to the retrieved object.
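
      The following is a minimal Java sketch of the repetitive, module-level test described in this list. The runScenario() method is a hypothetical stand-in for your own fixed set of test cases, and System.gc() is only a suggestion to the JVM, not a guarantee that a collection runs.

        // Minimal sketch: repeat a fixed scenario and compare the used heap before and after.
        public class RepetitiveLeakTest {
            public static void main(String[] args) throws Exception {
                long before = usedHeap();
                for (int i = 0; i < 1000; i++) {
                    runScenario();                // hypothetical fixed set of test cases
                }
                long after = usedHeap();
                System.out.println("Heap growth after 1000 repetitions: " + ((after - before) / 1024) + " KB");
            }

            private static long usedHeap() throws InterruptedException {
                System.gc();                      // suggest a collection so that the reading is meaningful
                Thread.sleep(500);                // give the asynchronous collector a moment to work
                Runtime rt = Runtime.getRuntime();
                return rt.totalMemory() - rt.freeMemory();
            }

            private static void runScenario() {
                // Replace with the module or test cases that you want to exercise.
                new StringBuilder("simulated work").reverse();
            }
        }
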
      You can use the following tools to detect memory leaks:
      • Tivoli® Performance Viewer. The IBM i Information Center provides more information about using the Tivoli Performance Viewer to detect memory leaks.
      • The IBM i DMPJVM command. The DMPJVM command dumps JVM information for a specific job.
      • The ANZJVM command. The ANZJVM command collects information about the JVM for a specified job. This command is available in IBM i 5.2 and higher.
      • The Heap Analysis Tools for Java. This set of tools, which is sometimes referred to as Java Watcher or Heap Analyzer, is a component of the iDoctor for iSeries suite of performance monitoring tools. The Heap Analysis Tools perform Java application heap analysis and object creation profiling, including size and identification, over time.
      For optimal results, repeat experiments with increasing duration, such as 1,000, 2,000, and 4,000 page requests. The Tivoli Performance Viewer graph of used memory should have a jagged shape. Each drop on the graph corresponds to a garbage collection. There is a memory leak if one of the following conditions appears in the graph:
      • The amount of memory used immediately after each garbage collection increases significantly. When this condition occurs, the jagged pattern looks more like a staircase.
      • The jagged pattern has an irregular shape.
      • The gap between the number of objects allocated and the number of objects freed increases over time.

      Heap consumption that indicates a possible leak during periods when the application server is consistently near 100 percent CPU utilization, but disappears when the workload becomes lighter or near-idle, is an indication of heap fragmentation. Heap fragmentation can occur when the JVM can free sufficient objects to satisfy memory allocation requests during garbage collection cycles, but the JVM does not have the time to compact small free memory areas in the heap to larger contiguous spaces.

      Another form of heap fragmentation occurs when objects that are less than 512 bytes are freed. The objects are freed, but the storage is not recovered, resulting in memory fragmentation until a heap compaction occurs.

  5. Tune garbage collection

    The Classic JVM uses concurrent (asynchronous) garbage collection. This type of garbage collection results in shorter pause times and allows application threads to continue processing requests during the garbage collection cycle.

    Examining Java garbage collection gives insight into how the application is utilizing memory. Garbage collection is a Java strength. Because garbage collection takes the burden of memory management away from the application writer, Java applications are more robust than applications written in languages that do not provide garbage collection. This robustness applies as long as the application is not abusing objects. Garbage collection typically consumes from 5 to 20 percent of the total run time of a properly functioning application. If not managed, garbage collection can be one of the biggest bottlenecks for an application.

    Monitoring garbage collection while a fixed workload is running provides insight into whether the application is overusing objects. Monitoring garbage collection can even help you detect the presence of memory leaks.

    You can use JVM settings to configure the type and behavior of garbage collection. When the JVM cannot allocate an object from the current heap because of lack of contiguous space, the garbage collector is invoked to reclaim memory from Java objects that are no longer being used. Each JVM vendor provides unique garbage collector policies and tuning parameters.

    You can use the Verbose garbage collection setting in the administrative console to enable garbage collection monitoring. The output from this setting includes class garbage collection statistics. The format of the generated report is not standardized between different JVMs or release levels.

    To ensure meaningful statistics, run a fixed workload until the application state is steady. It typically takes several minutes to reach a steady state.

    You can also use object statistics in the Tivoli Performance Viewer to monitor garbage collection statistics.

    For more information about monitoring garbage collection, refer to the following documentation:
    • The description of the DMPJVM command in the IBM i Information Center. This command dumps JVM information for a specific job.
    • The iDoctor for iSeries documentation for a description of the Heap Analysis Tools for Java. This set of tools, which is sometimes referred to as Java Watcher or Heap Analyzer, is a component of the iDoctor for iSeries suite of performance monitoring tools. The Heap Analysis Tools component performs Java application heap analysis and object creation profiling (size and identification) over time.
    • The description of Performance Explorer (PEX) contained in the IBM i Information Center. You can use a Performance Explorer (PEX) trace to determine how much CPU is being used by the garbage collector.
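
    In addition to these tools, the standard java.lang.management API can report garbage collection counts and accumulated times from inside the JVM. The following is a generic Java sketch; the collector names that the Classic JVM reports, and whether collection times are available, can vary.

      import java.lang.management.GarbageCollectorMXBean;
      import java.lang.management.ManagementFactory;

      // Minimal sketch: print cumulative garbage collection counts and times for this JVM.
      public class GcStats {
          public static void main(String[] args) {
              long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
              long totalGcMs = 0;
              for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                  System.out.println(gc.getName() + ": count=" + gc.getCollectionCount()
                          + ", time=" + gc.getCollectionTime() + " ms");
                  totalGcMs += Math.max(0, gc.getCollectionTime());   // getCollectionTime() can return -1
              }
              // Rough check against the 5 to 20 percent guideline described earlier.
              System.out.printf("Approximate time in garbage collection: %.1f%%%n", 100.0 * totalGcMs / uptimeMs);
          }
      }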

    To adjust your JVM garbage collection settings:

    1. In the administrative console, click Servers > Server Types > WebSphere application servers > server_name.
    2. In the Server Infrastructure section, click Java and process management > Process definition > Java virtual machine.
    3. Enter the -X option you want to change in the Generic JVM arguments field.
    4. Click Apply.
    5. Click Save to save your changes to the master configuration.
    6. Stop and restart the application server.

    By default, the JVM unloads a class from memory whenever there are no live instances of that class left. The overhead of loading and unloading the same class multiple times can decrease performance.

    Avoid trouble: You can use the -Xnoclassgc argument to disable class garbage collection. However, the performance impact of class garbage collection is typically minimal, and turning off class garbage collection in a Java Platform, Enterprise Edition (Java EE) based system, with its heavy use of application class loaders, might effectively create a memory leak of class data, and cause the JVM to throw an Out-of-Memory Exception.

    If you use the -Xnoclassgc argument, whenever you redeploy an application, you should always restart the application server to clear the classes and static data from the previous version of the application.

    Default: Class garbage collection is enabled.
    Recommended: Do not disable class garbage collection.
    Usage: Specify -Xnoclassgc to disable class garbage collection.
  6. Tune the configuration update process for a large cell configuration.
    In a large cell configuration, you might have to determine whether configuration update performance or consistency checking is more important. When configuration consistency checking is turned on, a significant amount of time might be required to save a configuration change or to deploy several applications. The following factors influence how much time is required:
    • The more application servers or clusters that are defined in a cell, the longer it takes to save a configuration change.
    • The more applications that are deployed in a cell, the longer it takes to save a configuration change.
    If the amount of time required to save a configuration change is unsatisfactory, you can add the config_consistency_check custom property to your JVM settings and set the value of this property to false.
    1. In the administrative console, click System Administration > Deployment manager.
    2. In the Server Infrastructure section, click Java and process management > Process definition > Java virtual machine > Custom properties > New.
    3. Enter config_consistency_check in the Name field and false in the Value field.
    4. Click OK and then Save to apply these changes to the master configuration.
    5. Restart the server.

    If you are using the wsadmin command wsadmin -conntype none in local mode, you must set the config_consistency_check property to false before issuing this command.

What to do next

Continue to gather and analyze data as you make tuning changes until you are satisfied with how the JVM is performing. Refer to the IBM i Information Center for more general information about tuning the Classic JVM.

If your application experiences slow response times at startup, or at first touch, you might want to use the Java user classloader cache.



