Using connection pools helps to both alleviate connection
management overhead and decrease development tasks for data access.
Each time an application attempts to access a backend store (such
as a database), it requires resources to create, maintain, and release
a connection to that datastore. To mitigate the strain this process
can place on overall application resources, the application server
enables administrators to establish a pool of backend connections
that applications can share on an application server. Connection pooling
spreads the connection overhead across several user requests, thereby
conserving application resources for future requests.
- Prevent a connection deadlock. Deadlock
can occur if the application requires more than one concurrent connection
per thread, and the database connection pool is not large enough for
the number of threads. Suppose each of the application threads requires
two concurrent database connections and the number of threads is equal
to the maximum connection pool size. Deadlock can occur when both
of the following are true:
- Each thread has its first database connection, and all are in
use.
- Each thread is waiting for a second database connection, and none
would become available since all threads are blocked.
To prevent the deadlock in this case, increase the maximum
connections value for the database connection pool by at least one.
This ensures that at least one of the waiting threads obtains a second
database connection and avoids a deadlock scenario.
For general
prevention of connection deadlock, code your applications to use only
one connection per thread. If you code the application to require
C concurrent
database connections per thread, the connection pool must support
at least the following number of connections, where
T is
the maximum number of threads:
T * (C - 1) + 1
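As a minimal illustration of this sizing rule, the following sketch (a hypothetical helper, not a WebSphere API) computes the smallest pool size that avoids deadlock for given values of T and C:
public class PoolSizing {
    // Minimum pool size that avoids deadlock when maxThreads threads each
    // require connectionsPerThread concurrent connections: T * (C - 1) + 1.
    static int minimumPoolSize(int maxThreads, int connectionsPerThread) {
        return maxThreads * (connectionsPerThread - 1) + 1;
    }

    public static void main(String[] args) {
        // Example: 50 threads that each hold 2 concurrent connections
        // need a pool of at least 50 * (2 - 1) + 1 = 51 connections.
        System.out.println(minimumPoolSize(50, 2)); // prints 51
    }
}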
The
connection pool settings are directly related to the number of connections
that the database server is configured to support. If you increase
the maximum number of connections in the pool and the corresponding
settings in the database are not increased accordingly, the application
might fail. The resulting SQL exceptions are displayed in the
following location:
- The SYSOUT of the servant
One of the most common causes of connection
deadlock is the use of the same connection pool by both servlets and
Enterprise JavaBeans (EJB) components, where the servlet directly or
indirectly invokes the bean. For example, consider a servlet that obtains
a JMS connection from the connection pool, sends a message to a
message-driven bean (MDB), and waits for a reply. Because the MDB is
configured to use the same connection pool as the servlet, a second
connection from that pool is required for the MDB to send its reply. To
avoid this situation, do not let servlets and enterprise beans share the
same connection pool. This is a classic case of C concurrent connections
per thread, where C=2 and T is the maximum combined size of the servlet
and EJB thread pools.
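The following sketch shows the risky request-reply pattern in servlet code. The connection factory and queue names are hypothetical; the point is that the servlet holds one pooled connection while it blocks, and the MDB needs a second connection from the same pool to reply:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;

public class RequestReplyExample {
    Message sendAndWaitForReply(ConnectionFactory cf, Queue requestQueue, Queue replyQueue)
            throws JMSException {
        Connection con = cf.createConnection();            // takes connection #1 from the pool
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(requestQueue).send(session.createTextMessage("request"));
            con.start();
            // Blocking receive: the MDB needs a second connection from the same pool
            // to send its reply. If the pool is exhausted, both sides wait indefinitely.
            return session.createConsumer(replyQueue).receive(30000);
        } finally {
            con.close();                                    // returns the connection to the pool
        }
    }
}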
- Disable connection pooling.
- For relational resource adapters (RRAs), add the disableWASConnectionPooling custom
property for your data sources.
- Click JDBC > Data sources.
- Click on the name of the data source that you want to configure.
- Click Custom properties under the Additional
Properties heading.
- Click New.
- Complete the required fields with the following information:
- Name: disableWASConnectionPooling
- Value: true
- For other resource adapters, consult the binding specifications
for that resource adapter to learn how to configure your applications
to disable connection pooling.
- Programmatically disable connection pooling through the resource
adapter.
- The application server uses the following code to detect
the javax.resource.NotSupportedException exception and disable connection
pooling:
try {
    _managedFactory.matchManagedConnections(s, subject, cri);
} catch (javax.resource.NotSupportedException e) {
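A resource adapter signals this condition from its ManagedConnectionFactory. The following sketch (a hypothetical adapter class, shown only in outline) throws javax.resource.NotSupportedException from matchManagedConnections so that the application server disables pooling for the adapter:
import java.util.Set;
import javax.resource.NotSupportedException;
import javax.resource.ResourceException;
import javax.resource.spi.ConnectionRequestInfo;
import javax.resource.spi.ManagedConnection;
import javax.resource.spi.ManagedConnectionFactory;
import javax.security.auth.Subject;

public abstract class NonPoolingManagedConnectionFactory implements ManagedConnectionFactory {
    // Refusing to match pooled connections tells the application server that
    // this adapter does not support connection pooling.
    @Override
    public ManagedConnection matchManagedConnections(Set connectionSet, Subject subject,
            ConnectionRequestInfo cxRequestInfo) throws ResourceException {
        throw new NotSupportedException("Connection pooling is not supported by this resource adapter");
    }
}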
- Enable deferred enlistment. In the application
server environment, deferred enlistment refers to the technique in
which the application server waits until the connection is used before
the connection is enlisted in the application's unit of work (UOW)
scope.
Consider the following illustration of deferred enlistment:
- An application component that uses deferred enlistment calls the getConnection method
from within a global transaction.
- The application component does not immediately use the connection.
- When the application issues the call for initial use of the connection,
the transaction manager intercepts the call.
- The transaction manager enlists the XA resource for the connection
and calls the XAResource.start method.
- The connection manager associated with the XA resource sends the
call to the database.
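For illustration, the following sketch (hypothetical application code using standard JDBC and JTA interfaces) obtains a connection inside a global transaction but does not use it right away; with deferred enlistment, enlistment happens only at the first real use:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class DeferredEnlistmentExample {
    void doWork(UserTransaction tx, DataSource dataSource) throws Exception {
        tx.begin();
        try (Connection con = dataSource.getConnection()) {      // not yet enlisted (deferred enlistment)
            // ... other work that might not touch the database at all ...
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {  // first use: the transaction manager
                // intercepts the call, enlists the XA resource, and calls XAResource.start
            }
        }
        tx.commit();
    }
}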
In the same scenario, if the application component does not use
deferred enlistment, the component container enlists the connection
in the transaction immediately. The application server then needlessly
incurs all of the overhead associated with that transaction. For XA
connections, this overhead includes the two-phase commit (2PC) protocol
to the resource manager.
Deferred
enlistment offers better performance when a connection is obtained, but
not used, within the UOW scope. The technique defers the cost of
transaction participation until the UOW in which participation must
actually occur.
Check with your resource adapter provider if you
need to know whether the resource adapter provides this functionality.
The application server relational resource adapter supports deferred
enlistment automatically.
Incorporating deferred enlistment in
your code:
The Java Platform,
Enterprise Edition (Java EE)
Connector Architecture (JCA) Version 1.5 and later specification calls
the deferred enlistment technique lazy transaction enlistment optimization.
This support comes through a marker interface (LazyEnlistableManagedConnection)
and a method, lazyEnlist, on the connection manager interface (LazyEnlistableConnectionManager):
package javax.resource.spi;

import javax.resource.ResourceException;
import javax.transaction.xa.Xid;

interface LazyEnlistableConnectionManager { // application server
    void lazyEnlist(ManagedConnection mc) throws ResourceException;
}

interface LazyEnlistableManagedConnection { // resource adapter
}
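As an illustration of how a resource adapter might use these interfaces, the following sketch (hypothetical adapter code, shown only in outline) marks its managed connection as lazily enlistable and calls lazyEnlist only when the connection is first used:
import javax.resource.ResourceException;
import javax.resource.spi.LazyEnlistableConnectionManager;
import javax.resource.spi.LazyEnlistableManagedConnection;
import javax.resource.spi.ManagedConnection;

public abstract class LazyManagedConnection
        implements ManagedConnection, LazyEnlistableManagedConnection {

    private final LazyEnlistableConnectionManager connectionManager;

    protected LazyManagedConnection(LazyEnlistableConnectionManager connectionManager) {
        this.connectionManager = connectionManager;
    }

    // Called by the connection handle on the first real use of the connection:
    // only now is the connection enlisted in the caller's unit of work.
    protected void enlistIfNeeded() throws ResourceException {
        connectionManager.lazyEnlist(this);
    }
}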
- Control connection pool sharing.
You can use the defaultConnectionTypeOverride or globalConnectionTypeOverride
connection pool custom property for a particular connection factory
or data source to control connection sharing.
The defaultConnectionTypeOverride
property changes the default sharing value for a connection pool.
This property enables you to control connection sharing for direct
look-ups. If resource references are configured for this data source
or connection factory, the resource reference configurations
take precedence over the defaultConnectionTypeOverride property settings.
For example, if an application performs direct look-ups and unshared
connections are needed, set the defaultConnectionTypeOverride property
to unshared.
The value specified for the globalConnectionTypeOverride
custom property takes precedence over all other connection
sharing settings. For example, if you set this property to unshared,
all connection requests are unshared for both direct look-ups and
resource reference look-ups. This property provides a quick
way to test the consequences of moving all connections for a particular
data source or connection factory to unshared or shared without changing
any resource reference settings.
If you specify values for both
the defaultConnectionTypeOverride and the globalConnectionTypeOverride
properties, only the values specified for the globalConnectionTypeOverride
property are used to determine the connection sharing type.
The
following is an example of how to set these properties for a particular
data source: in the administrative console, create a new custom property
on the connection pool that belongs to the data source. Specify
defaultConnectionTypeOverride or globalConnectionTypeOverride in
the Name field, and shared or unshared in
the Value field.
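To see which look-up style each property affects, consider the following sketch. The JNDI names jdbc/MyDataSource and java:comp/env/jdbc/MyDataSourceRef are hypothetical; the first is a direct look-up and the second goes through a resource reference:
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class LookupStyles {
    void lookUpDataSources() throws NamingException {
        InitialContext ctx = new InitialContext();
        // Direct look-up: no resource reference is involved, so the
        // defaultConnectionTypeOverride setting governs connection sharing.
        DataSource direct = (DataSource) ctx.lookup("jdbc/MyDataSource");
        // Resource reference look-up: the reference's sharing setting normally wins
        // over defaultConnectionTypeOverride, but globalConnectionTypeOverride overrides both.
        DataSource viaReference = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDataSourceRef");
    }
}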
- Automatically perform a mitigation action
if a connection pool cannot establish a connection to its configured
resource manager.
By using the failureNotificationActionCode
and failureThreshold properties, you can configure a connection pool
so that, when a connection factory is unable to establish a connection
to its configured resource manager, the WebSphere® Application
Server for z/OS® runtime takes an autonomous mitigation action to
minimize the impact of the failure on end users. When a connection factory
is able to re-establish a connection to the configured resource manager,
the autonomous mitigation action is reversed. One example of this
type of mitigation action is that the runtime can issue a "pause listeners"
command for the server with the failed resource, preventing new work
from being accepted by the server. When combined with a high-availability
front end, new work can be routed to other servers in a cluster.
Notification
is sent to the z/OS runtime
when a particular connection factory or data source has reached a
specified or default failure threshold value. The failure notification
includes a configured action code that determines how the runtime
responds to the failure notification. See the topic, Connection pool
custom properties, for action code definitions.
To use these
properties, you must define them as new custom properties of the connection
pool. You can do this through the administrative console: specify
failureNotificationActionCode or failureThreshold in
the Name field and the appropriate value in
the Value field.
To learn more about
the settings for these custom properties, see the topic, Connection
pool custom properties.