2.1.2 Workload management

Workload management is implemented in WAS Network Deployment V6 by using application server clusters and cluster members. Cluster members can all reside on a single node (LPAR) or be distributed across multiple nodes (LPARs).

In a clustered WAS environment, if a clustered application server fails, clients can be redirected, either automatically or manually depending on the nature of the failure, to another healthy server. Workload management is the WebSphere facility that provides load balancing and affinity between application servers in a WebSphere clustered environment. It optimizes the distribution of processing tasks in the WAS environment: incoming work requests are distributed to the application servers that can most effectively process them.

Workload management also improves the performance, scalability, and reliability of an application. It provides failover when servers are not available: WebSphere uses workload management to send requests to alternate members of the cluster. WebSphere also routes subsequent requests from a user to the application server that serviced the first request, because the EJB calls and session state for that user are held in the memory of that application server.
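The session affinity idea can be illustrated with a minimal sketch. The class and method names here are hypothetical, not WebSphere internals: follow-up requests carrying a known session ID are sent back to the server that handled the first request, while new sessions are assigned a cluster member round-robin.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of session affinity ("sticky" routing), not WebSphere code.
// The first request for a session picks a cluster member; later requests with the
// same session ID return to that member, where the session state lives in memory.
public class AffinityRouter {
    private final List<String> servers;                     // cluster member names
    private final Map<String, String> affinity = new HashMap<>();
    private int next = 0;                                   // round-robin cursor

    public AffinityRouter(List<String> servers) {
        this.servers = servers;
    }

    public String route(String sessionId) {
        // Honor existing affinity if the pinned member is still in the cluster.
        String pinned = affinity.get(sessionId);
        if (pinned != null && servers.contains(pinned)) {
            return pinned;
        }
        // Otherwise pick the next member round-robin and pin the session to it.
        String chosen = servers.get(next++ % servers.size());
        affinity.put(sessionId, chosen);
        return chosen;
    }
}
```

For example, with members `s1` and `s2`, every request for session "A" after the first is routed to the same member that served "A" initially, even while new sessions are spread across the cluster.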

Workload management is most effective when the deployment topology consists of application servers on multiple LPARs, because such a topology provides both failover and improved scalability. It can also be used to improve scalability in topologies where a system is composed of multiple servers on a single, high-capacity machine. In either case, it enables the system to make the most effective use of the available computing resources.

Two types of requests can be workload managed in IBM WAS Network Deployment V6, as explained here:

  • HTTP requests can be distributed across multiple Web containers.

    When an HTTP request reaches the Web server, a decision must be made: some requests for static content can be handled by the Web server itself, while requests for dynamic content (and possibly some static content) are passed to a Web container running in an application server. Whether a request is handled locally or passed to WebSphere is decided by the Web server plug-in, which runs in-process with the Web server; we refer to this as the WLM plug-in. For these WebSphere requests, high availability of the Web container becomes an important part of the failover solution.

  • EJB requests can be distributed across multiple EJB containers.

    When an EJB client, whether in a Web container, in a client container, or outside the cell, makes a call, the request is handled by the EJB container in one of the clustered application servers. If that server fails, the client request is redirected to another available server. We refer to this as EJS WLM.

    An application deployed to a cluster runs on all cluster members concurrently. The workload is distributed based on weights that are assigned to each cluster member, so more powerful LPARs receive more requests than smaller systems. When deciding on a topology, it is important to assign an appropriate weight to each LPAR if their capacities differ. Alternatively, it is common to use LPARs of similar capacity with equal weights.

    If an application server in the cluster fails, workload management also takes care of failing over existing client requests to other, still available application servers, and of directing new requests only to available processes. In addition, workload management enables servers to be transparently maintained and upgraded while applications remain available to users. You can add cluster members to a cluster at any point, adding scalability and performance when the existing environment can no longer handle the workload.
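The plug-in's routing decision described above can be sketched in a simplified form. The class name and prefix-matching approach are illustrative assumptions; the real plug-in matches requests against URI groups and routes defined in its configuration file.

```java
import java.util.List;

// Simplified sketch of the Web server plug-in's routing decision, not the real
// plug-in. URIs matching a configured WebSphere prefix (for example, the context
// root of a deployed application) are forwarded to an application server's Web
// container; everything else is left to the Web server to serve itself.
public class PluginRouter {
    private final List<String> wasUriPrefixes;  // illustrative: app context roots

    public PluginRouter(List<String> wasUriPrefixes) {
        this.wasUriPrefixes = wasUriPrefixes;
    }

    public boolean forwardToWebSphere(String uri) {
        return wasUriPrefixes.stream().anyMatch(uri::startsWith);
    }
}
```

With prefixes such as `/shop/`, a request for `/shop/cart` would be forwarded to a Web container, while a request for `/images/logo.gif` would be served as static content by the Web server.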
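The weight-based distribution and failover behavior described above can be sketched as follows. The names and the counter-based algorithm are illustrative assumptions; the actual WLM plug-in and EJS WLM selection logic is more involved.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Illustrative weighted selection with failover, not the actual WLM algorithm.
// Each member has a configured weight; members with higher weights are chosen
// proportionally more often. Members marked unavailable are skipped, so new
// requests go only to healthy servers.
public class WeightedSelector {
    private final Map<String, Integer> weights;               // member -> weight
    private final Map<String, Integer> remaining = new LinkedHashMap<>();

    public WeightedSelector(Map<String, Integer> weights) {
        this.weights = weights;
        remaining.putAll(weights);
    }

    public String select(Set<String> unavailable) {
        // Refill the counters once every healthy member's share is used up.
        boolean exhausted = remaining.entrySet().stream()
                .filter(e -> !unavailable.contains(e.getKey()))
                .allMatch(e -> e.getValue() <= 0);
        if (exhausted) {
            remaining.putAll(weights);
        }
        for (Map.Entry<String, Integer> e : remaining.entrySet()) {
            if (!unavailable.contains(e.getKey()) && e.getValue() > 0) {
                e.setValue(e.getValue() - 1);
                return e.getKey();
            }
        }
        return null; // no healthy member available
    }
}
```

For example, with weights of 2 and 1, the heavier member receives two requests for every one the lighter member receives; if the heavier member fails, all subsequent requests flow to the remaining member.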