

Tune connection pools


Overview

Using connection pools helps to both alleviate connection management overhead and decrease development tasks for data access. Each time an application attempts to access a backend store (such as a database), it requires resources to create, maintain, and release a connection to that datastore.

To mitigate the strain this process can place on overall application resources, the application server enables administrators to establish a pool of backend connections that applications can share on an application server. Connection pooling spreads the connection overhead across several user requests, thereby conserving application resources for future requests.
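As a simple illustration, the following sketch (the JNDI name, class name, and table are placeholders, not taken from this documentation) shows an application component borrowing a connection from a pooled data source; closing the connection returns it to the pool for reuse instead of destroying the physical database connection.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledAccessExample
{
    public int countOrders() throws Exception
    {
        // The data source is defined in the application server and looked up by JNDI name.
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/SampleDS");

        // getConnection() borrows a connection from the pool; close() at the end of the
        // try block returns it to the pool rather than tearing the connection down.
        try (Connection conn = ds.getConnection();
             PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM ORDERS");
             ResultSet rs = stmt.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}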

Connection pooling can improve the response time of any application that requires connections, especially web-based applications. When you make a request over the web to a resource, the resource accesses a data source. Because users of Internet applications connect and disconnect frequently, application requests for data access can surge to a considerable volume. Consequently, the total overhead for a datastore can quickly become high for web-based applications, causing performance to deteriorate. When connection pooling capabilities are used, however, web applications can realize performance improvements of up to 20 times the normal results.


Prevent a connection deadlock

Deadlock can occur if the application requires more than one concurrent connection per thread, and the database connection pool is not large enough for the number of threads. Suppose each of the application threads requires two concurrent database connections and the number of threads is equal to the maximum connection pool size. Deadlock can occur when both of the following are true:

Each thread has its first database connection, so all connections in the pool are in use.
Each thread is waiting for a second database connection, which never becomes available because every other thread is likewise holding a connection and waiting.

To prevent the deadlock in this case, increase the maximum connections value for the database connection pool by at least one. This ensures that at least one of the waiting threads obtains a second database connection and avoids a deadlock scenario.

For general prevention of connection deadlock, code the applications to use only one connection per thread. If you code the application to require C concurrent database connections per thread, the connection pool must support at least the following number of connections, where T is the maximum number of threads:

T * (C - 1) + 1
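For instance, with T = 40 threads and C = 2 connections per thread, the pool must support at least 40 * (2 - 1) + 1 = 41 connections. The following sketch (the class name is hypothetical) shows the kind of per-thread code path that makes C = 2: a second connection is requested while the first is still held, so an undersized pool can leave every thread blocked.

import java.sql.Connection;
import javax.sql.DataSource;

public class TwoConnectionWorker implements Runnable
{
    private final DataSource ds;

    public TwoConnectionWorker(DataSource ds)
    {
        this.ds = ds;
    }

    @Override
    public void run()
    {
        // C = 2: the first connection stays checked out while the second is requested.
        // If every thread reaches this point and the pool maximum is only T, all
        // connections are held and no thread can ever obtain its second connection.
        try (Connection first = ds.getConnection();
             Connection second = ds.getConnection()) {
            // work that genuinely needs both connections at once
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}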

The connection pool settings are directly related to the number of connections that the database server is configured to support. If you increase the maximum number of connections in the pool and the corresponding settings in the database are not increased accordingly, the application might fail. The resulting SQL exception errors are displayed in stderr.log.

One of the most common causes of connection deadlock is the use of the same connection pool by both servlets and Enterprise JavaBeans (EJB) components, where the servlet directly or indirectly invokes the bean. For example, consider a servlet that obtains a JMS connection from the connection pool, sends a message to a message-driven bean (MDB), and waits for a reply. Because the MDB is configured to use the same connection pool as the servlet, another connection from the pool is required for the MDB to send the reply. To avoid this situation, ensure that servlets and enterprise beans do not share the same connection pool. This is a classic case of C = 2 concurrent connections per thread, where T is the maximum size of the servlet and EJB thread pools.


Disable connection pooling


Enable deferred enlistment

In the application server environment, deferred enlistment refers to the technique in which the application server waits until the connection is used before the connection is enlisted in the application's unit of work (UOW) scope.

Consider the following illustration of deferred enlistment: an application component running within a global transaction obtains a connection but, on the code path taken, never actually uses it. With deferred enlistment, the application server never enlists the unused connection in the transaction, so no transaction overhead is incurred for it.

In the same scenario, if the application component does not use deferred enlistment, the component container enlists the connection in the transaction immediately. The application server then needlessly incurs all of the overhead associated with that transaction; for XA connections, this overhead includes driving the two-phase commit (2PC) protocol to the resource manager.

Deferred enlistment offers better performance in the case where a connection is obtained, but not used, within the UOW scope. The technique defers the cost of transaction participation until a UOW in which the connection must actually participate.
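As a sketch of that case (the class name, JNDI names, and query are placeholders; error handling and rollback are omitted for brevity), the component below obtains a connection inside a global transaction but only touches it on one branch. With deferred enlistment, the enlistment and its two-phase commit overhead are incurred only when that branch actually runs.

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class DeferredEnlistmentScenario
{
    public void run(boolean needsData) throws Exception
    {
        InitialContext ctx = new InitialContext();
        UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource ds = (DataSource) ctx.lookup("jdbc/SampleXADS");

        tx.begin();
        try (Connection conn = ds.getConnection()) {   // obtained within the UOW scope
            if (needsData) {
                // Only here is the connection actually used, so only here does a
                // lazily enlisting resource adapter enlist it in the transaction.
                try (Statement stmt = conn.createStatement()) {
                    stmt.executeQuery("SELECT 1 FROM SOME_TABLE");
                }
            }
        }
        tx.commit();
    }
}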

The application server relational resource adapter supports deferred enlistment automatically. If you need to know whether another resource adapter provides this functionality, check with the resource adapter provider.

Incorporating deferred enlistment in your code:

The Java EE Connector Architecture (JCA) v1.5 and later specification calls the deferred enlistment technique lazy transaction enlistment optimization. This support comes through a marker interface (LazyEnlistableManagedConnection) and a lazyEnlist() method on the connection manager interface (LazyEnlistableConnectionManager):

package javax.resource.spi;

import javax.resource.ResourceException;

// Implemented by the application server's connection manager.
interface LazyEnlistableConnectionManager
{
    void lazyEnlist(ManagedConnection mc) throws ResourceException;
}

// Marker interface implemented by the resource adapter's ManagedConnection.
interface LazyEnlistableManagedConnection
{
}
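For the resource adapter side, the following sketch (the package and class names are hypothetical, not product code, and the remaining ManagedConnection methods are omitted) shows the general pattern: the adapter's ManagedConnection implements the marker interface, and when its connection handle is first used it asks a lazy-enlistment-capable connection manager to enlist it.

package com.example.adapter; // hypothetical resource adapter package

import javax.resource.ResourceException;
import javax.resource.spi.ConnectionManager;
import javax.resource.spi.LazyEnlistableConnectionManager;
import javax.resource.spi.LazyEnlistableManagedConnection;
import javax.resource.spi.ManagedConnection;

public abstract class ExampleManagedConnection
        implements ManagedConnection, LazyEnlistableManagedConnection
{
    private final ConnectionManager connectionManager; // supplied by the adapter's connection factory
    private boolean enlisted;

    protected ExampleManagedConnection(ConnectionManager connectionManager)
    {
        this.connectionManager = connectionManager;
    }

    // The connection handle calls this just before its first real interaction with
    // the backend; enlistment in the transaction is deferred until that moment.
    protected void enlistIfNeeded() throws ResourceException
    {
        if (!enlisted && connectionManager instanceof LazyEnlistableConnectionManager) {
            ((LazyEnlistableConnectionManager) connectionManager).lazyEnlist(this);
            enlisted = true;
        }
    }
}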


Control connection pool sharing

You can use the defaultConnectionTypeOverride or globalConnectionTypeOverride connection pool custom property on a particular connection factory or data source to control connection sharing.

The defaultConnectionTypeOverride property changes the default sharing value for a connection pool. This property enables you to control connection sharing for direct look-ups. If resource references are configured for this data source or connection factory, the resource reference configuration takes precedence over the defaultConnectionTypeOverride property setting. For example, if an application is doing direct look-ups and unshared connections are needed, set the defaultConnectionTypeOverride property to unshared.
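To illustrate the distinction (the JNDI names and class name are placeholders): a direct look-up goes straight to the data source's JNDI name, so the defaultConnectionTypeOverride value applies; a look-up through java:comp/env resolves a resource reference, whose sharing setting takes precedence.

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class LookupStyles
{
    // Direct look-up: no resource reference is involved, so connection sharing
    // follows the pool default (and defaultConnectionTypeOverride, if set).
    public DataSource directLookup() throws Exception
    {
        return (DataSource) new InitialContext().lookup("jdbc/SampleDS");
    }

    // Resource-reference look-up: jdbc/SampleDSRef must be declared as a
    // resource-ref (deployment descriptor or @Resource annotation), and that
    // reference's sharing scope governs the connections it returns.
    public DataSource resourceReferenceLookup() throws Exception
    {
        return (DataSource) new InitialContext().lookup("java:comp/env/jdbc/SampleDSRef");
    }
}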

The value specified for the globalConnectionTypeOverride custom property takes precedence over all of the other connection sharing settings. For example, if you set this property to unshared, all connection requests are unshared for both direct look-ups and resource reference look-ups. This property provides a quick way to test the consequences of moving all connections for a particular data source or connection factory to unshared or shared without changing any resource reference setting.

If you specify values for both the defaultConnectionTypeOverride and the globalConnectionTypeOverride properties, only the values specified for the globalConnectionTypeOverride property are used to determine connection sharing type.

The following is an example of how to set these properties for a particular data source:

To add either of these custom properties to the connection pool settings of a data source or connection factory, you must create a new connection pool custom property.

To add one of these properties to a data source, use the administrative console. Click...

For other J2C or JMS connection factories, navigate to the connection factory definition in the administrative console. Then select...

In the Name field, specify either...

...and in the value field either...

The properties must be set in the Connection pool custom properties, NOT in the general Custom properties of the data source or connection factory.

Connection pooling
Transaction type and connection behavior
Configure Java EE Connector connection factories in the administrative console
Disable statement pooling
Configure a data source


Related


Connection pool settings
Connection and connection pool statistics
Connection pool custom properties
