Fix (APAR): PK31107
Status: Fix
Release: 6.0.2.9, 6.0.2.8, 6.0.2.7, 6.0.2.6, 6.0.2.5, 6.0.2.4, 6.0.2.3, 6.0.2.15, 6.0.2.13, 6.0.2.11
Operating System: AIX, HP-UX, i5/OS, Linux, Linux pSeries, Linux Red Hat - pSeries, Linux zSeries, OS/390, OS/400, Solaris, Windows
Supersedes Fixes:
CMVC Defect: xxxxxx
Byte size of APAR: 18399
Date: 2006-11-03
Abstract: The thread pool for the scheduler will reject work even if the pool is not full.

Description/symptom of problem:

PK31107 resolves the following problem:

ERROR DESCRIPTION:
The problem is that the WorkManager thread pool does not seem to be working as it should. The exceptions are logged even though the actual number of WorkManager threads is significantly below the configured maximum thread pool size.

A WorkManager manages the AsynchBean instances. An AsynchBean uses a WorkManager to allocate the threads that it runs on. The problem is that the configured number of threads is never realized, which limits the number of instances of the AsynchBeans that use the WorkManager. This can lead to performance problems and to unexpected exceptions being returned to the AsynchBean.

The exceptions that are returned are WorkRejectedExceptions. The "com.ibm.websphere.asynchbeans.WorkRejectedException: errorCode: 3" was caused by:

com.ibm.ws.util.ThreadPool$ThreadPoolQueueIsFullException
        at com.ibm.ws.util.ThreadPool.execute(ThreadPool.java:1129)
        at com.ibm.ws.util.ThreadPool.execute(ThreadPool.java:1018)
        at com.ibm.ws.asynchbeans.WorkItemImpl$PoolExecuteProxy.run(WorkItemImpl.java:197)
        at com.ibm.ws.asynchbeans.WorkItemImpl.executeOnPool(WorkItemImpl.java:211)

Here is another typical Java stack trace from this problem:

com.ibm.websphere.asynchbeans.WorkRejectedException: errorCode: 3
com.ibm.ws.util.ThreadPool$ThreadPoolQueueIsFullException
        at com.ibm.ws.asynchbeans.WorkItemImpl.executeOnPool(WorkItemImpl.java:218)
        at com.ibm.ws.asynchbeans.WorkManagerImpl.queueWorkItemForDispatch(WorkManagerImpl.java:379)
        at com.ibm.ws.asynchbeans.WorkManagerImpl.startWork(WorkManagerImpl.java:353)
        at com.ibm.ws.asynchbeans.WorkManagerImpl.startWork(WorkManagerImpl.java:471)
        at com.ibm.ws.asynchbeans.WorkManagerImpl.startWork(WorkManagerImpl.java:483)
        at cpsutil.ThreadPoolImplAsynchBean.(ThreadPoolImplAsynchBean.java:166)
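For illustration, the following minimal sketch shows the call path on which these exceptions surface: the application looks up a WorkManager and submits Work through startWork(), and the submission is rejected when the work request queue is full. The JNDI name wm/default, the class names, and the MyWork implementation are illustrative assumptions, not values taken from this APAR.

import javax.naming.InitialContext;

import com.ibm.websphere.asynchbeans.Work;
import com.ibm.websphere.asynchbeans.WorkException;
import com.ibm.websphere.asynchbeans.WorkItem;
import com.ibm.websphere.asynchbeans.WorkManager;
import com.ibm.websphere.asynchbeans.WorkRejectedException;

public class SubmitWorkSketch {

    // Work extends Runnable and adds release(), which the server calls
    // when it wants the work to stop.
    static class MyWork implements Work {
        public void run() {
            // Application logic runs here on a WorkManager thread.
        }
        public void release() {
        }
    }

    public static void submit() throws Exception {
        InitialContext ctx = new InitialContext();
        // wm/default is an illustrative JNDI name for a configured WorkManager.
        WorkManager wm = (WorkManager) ctx.lookup("wm/default");
        try {
            // startWork() places the request on the work request queue to wait for
            // a pool thread; with this defect the queue can fill up and the request
            // is rejected even though the pool is well below its maximum size.
            WorkItem item = wm.startWork(new MyWork());
        } catch (WorkRejectedException rejected) {
            // Corresponds to the "errorCode: 3" exceptions shown above.
        } catch (WorkException we) {
            // Other submission failures.
        }
    }
}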
The configuration of the WorkManager is in the resources-pme.xml file. This is an example configuration of a WorkManager -

There are other parameters on the configuration panel for an AsynchBean that may be reflected in the configuration. These are the Work request queue size, the Work timeout, and the Work request queue full action. They appear in the factories tag, for example workReqQFullAction or workReqQSize.

There are also custom properties which can be used to set the same parameters. These are deprecated in WebSphere Application Server 6.0, but if they are set, they may prevent this APAR fix from taking effect. The custom properties are stored in the configuration using the resourceProperties tag, such as the following -

The problem is in how WebSphere Application Server implements the statement in the Information Center that says: "If you do not specify a value or the value is 0, the queue size is managed automatically."

The work request queue size was being set to a size which was too small in many cases, which led to the work request queue being full. The work request queue is used to feed the WorkManager threads. It is where a request is stored temporarily while it waits for a thread to be created or allocated from the thread pool. The WorkManager queue full condition is caused by this request queue filling up while requests wait for a thread.

The WorkManager configuration shown is for a Portal Server AsynchBean that uses wpsWorkManager. The MaxThreads setting for the WorkManager is 300, yet a javacore taken when the problem occurred shows only 8 wpsWorkManager threads. The problem is in the work request queue that feeds the threads.
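The following self-contained sketch is a simplified analogy of this behavior using standard Java classes; it is not WebSphere's actual ThreadPool implementation. It shows how a small bounded request queue rejects submissions even though only a handful of the permitted threads exist. The queue capacity and the burst size are arbitrary; the 300 and 8 thread counts echo the figures above.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestQueueAnalogy {

    public static void main(String[] args) {
        final int maxThreads = 300;      // configured maximum, as in the WorkManager above
        final int startedThreads = 8;    // threads actually running, as seen in the javacore
        final int requestQueueSize = 16; // deliberately small bounded request queue

        final BlockingQueue<Runnable> requestQueue =
                new ArrayBlockingQueue<Runnable>(requestQueueSize);

        // Start only a few consumer threads, far below maxThreads.
        for (int i = 0; i < startedThreads; i++) {
            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            requestQueue.take().run();
                        }
                    } catch (InterruptedException e) {
                        // stop consuming
                    }
                }
            });
            consumer.setDaemon(true);
            consumer.start();
        }

        // Submit a burst of slow work items. offer() fails as soon as the bounded
        // queue is full, the analogue of ThreadPoolQueueIsFullException, even
        // though only 8 of the 300 permitted threads exist.
        int rejected = 0;
        for (int i = 0; i < 200; i++) {
            Runnable slowWork = new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        // ignore for the demo
                    }
                }
            };
            if (!requestQueue.offer(slowWork)) {
                rejected++;
            }
        }
        System.out.println("Rejected " + rejected + " of 200 submissions with only "
                + startedThreads + " of " + maxThreads + " threads running.");
    }
}

Because the loop submits much faster than the few consumer threads can drain the queue, most of the submissions are refused, mirroring the rejections seen when the real work request queue is undersized.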
One of the cases where the problem occurs is in WebSphere Portal Server parallel portlet rendering. For more information on WebSphere Portal Server parallel portlet rendering, see the Portal Server Information Center. The problem seen by Portal Server customers is a performance problem: portlet rendering does not take place in parallel as configured but sequentially, which makes page rendering slow.

Finally, the fix for this problem will not take effect if the custom property WORK_REQUEST_QUEUE_SIZE is set. If this property is set, it will be the queue size that is used. You will also see ASYN0065W messages indicating that this parameter is deprecated.

LOCAL FIX:
Apply this APAR, or set the work request queue to a default size. The suggested default size is 1/2 the maximum number of threads; for example, with the maximum of 300 threads shown above, the suggested work request queue size would be 150.

PROBLEM SUMMARY
USERS AFFECTED: Users of WebSphere Application Server Version 6.0.2

PROBLEM DESCRIPTION: The thread pool for the scheduler will reject work even if the pool is not full.

RECOMMENDATION: None

In Version 6.0.2 the thread pool does not dynamically grow as needed when the default value is used for the work request queue size. This in turn affects the number of threads that can be submitted to the thread pool and causes a work rejected exception stating that the thread pool is full even though it is not.

PROBLEM CONCLUSION:
Added code to ensure that the work request queue size is set correctly when it is left at the default value. This problem only occurs when the queue size is left at the default, not when a queue size is explicitly set.

The fix for this APAR is currently targeted for inclusion in fix pack 6.0.2.17. Please refer to the recommended updates page for delivery information:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg27004980

Directions to apply fix:

NOTE: Choose the:
1) Release the fix applies to
2) The Editions that apply
3) Delete the Editions & Methods that do not apply and this Note

Fix applies to Editions:
Release 6.0
__ Application Server (Express or BASE)
__ Network Deployment (ND)
__ WebSphere Business Integration Server Foundation (WBISF)
__ Edge Components
__ Developer
__ Extended Deployment (XD)

Install Fix to:
Method:
__ Application Server Nodes
__ Deployment Manager Nodes
__ Both

NOTE: The user must:
* Have Administrative rights in Windows, or be the actual root user in a UNIX environment.
* Be logged in with the same authority level when unpacking a fix, fix pack, or refresh pack.
* Be at V6.0.2.2 or newer of the Update Installer. This can be checked by reviewing the level of the Update Installer in the file /updateinstaller/version.txt. The Update Installer can be downloaded from the following link:
http://www.ibm.com/support/docview.wss?rs=180&uid=swg21205991

For detailed instructions to extract the Update Installer, see the following technote:
http://www-1.ibm.com/support/docview.wss?rs=180&uid=swg21205400

1) Copy the PKxxxxx.pak file directly to the maintenance directory.
2) Shut down WebSphere. Manually execute setupCmdLine.bat in Windows, or . ./setupCmdLine.sh in UNIX, from the WebSphere instance that maintenance is being applied to.
3) Launch the Update Installer.
4) Enter the installation location of the WebSphere product you want to update.
5) Select the "Install maintenance package" operation.
6) Enter the file name of the maintenance package to install (the PKxxxxx.pak file which was copied to the maintenance directory).
7) Install the maintenance package.
8) Restart WebSphere.

Directions to remove fix:

NOTE:
* The user must have Administrative rights in Windows, or be the actual root user in a UNIX environment.
* FIXES MUST BE REMOVED IN THE ORDER THEY WERE APPLIED.
* DO NOT REMOVE A FIX UNLESS ALL FIXES APPLIED AFTER IT HAVE FIRST BEEN REMOVED.
* YOU MAY REAPPLY ANY REMOVED FIX.

Example: If your system has fix1, fix2, and fix3 applied in that order and fix2 is to be removed, fix3 must be removed first, fix2 removed, and fix3 re-applied.

1) Shut down WebSphere. Manually execute setupCmdLine.bat in Windows, or . ./setupCmdLine.sh in UNIX, from the WebSphere instance that the uninstall is being run against.
2) Start the Update Installer.
3) Enter the installation location of the WebSphere product you want to remove the fix from.
4) Select the "Uninstall maintenance package" operation.
5) Enter the file name of the maintenance package to uninstall (PKxxxxx.pak).
6) Uninstall the maintenance package.
7) Restart WebSphere.

Directions to re-apply fix:
1) Shut down WebSphere.
2) Follow the fix instructions to apply the fix.
3) Restart WebSphere.

Additional Information:
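For illustration only: one way an application can guard against WorkRejectedException while the work request queue remains undersized is to retry a rejected submission a few times before giving up. This is a sketch, not part of the documented fix or local fix; the retry count and delay are arbitrary assumptions, and the WorkManager is obtained as in the earlier sketch.

import com.ibm.websphere.asynchbeans.Work;
import com.ibm.websphere.asynchbeans.WorkException;
import com.ibm.websphere.asynchbeans.WorkItem;
import com.ibm.websphere.asynchbeans.WorkManager;
import com.ibm.websphere.asynchbeans.WorkRejectedException;

public class RetryingSubmitter {

    // Illustrative values only; they are not taken from this APAR.
    private static final int MAX_ATTEMPTS = 3;
    private static final long RETRY_DELAY_MS = 50;

    public static WorkItem startWorkWithRetry(WorkManager wm, Work work)
            throws WorkException, WorkRejectedException, InterruptedException {
        WorkRejectedException lastRejection = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return wm.startWork(work);
            } catch (WorkRejectedException rejected) {
                // The work request queue is full; wait briefly and try again.
                lastRejection = rejected;
                Thread.sleep(RETRY_DELAY_MS);
            }
        }
        // Still rejected after every attempt; surface the last rejection.
        throw lastRejection;
    }
}

A caller would use startWorkWithRetry(wm, work) in place of a direct wm.startWork(work) call.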