Queuing and clustering considerations
Cloning application servers to create a cluster is a valuable technique for building highly scalable production environments, especially when the application is experiencing bottlenecks that prevent full CPU utilization of symmetric multiprocessing (SMP) servers.
When adjusting the WAS system queues in clustered configurations, remember that when a server is added to a cluster, the server downstream receives twice the load, as shown in Figure 2-1.
Figure 2-1 Clustering and queuing
Two servlet engines are located between a Web server and a data source. It is assumed that the Web server, servlet engines, and data source, but not the database, all run on a single SMP server. Given these constraints, make the following queue adjustments:
- Double the Web server queue settings to ensure that ample work is distributed to each Web container.
- Reduce the Web container thread pools to avoid saturating a system resource, such as CPU, or another resource that the servlets are using.
- Reduce the data source connection pool size to avoid saturating the database server.
- Reduce the Java heap parameters for each instance of the application server. For versions of the Java virtual machine (JVM) shipped with WAS, it is crucial that the heaps of all JVMs remain in physical memory. For example, if a cluster of four JVMs is running on a system, enough physical memory must be available for all four heaps.
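The adjustments above can be expressed as simple arithmetic. The following sketch illustrates the scaling rules for a cluster of N members; the function name and all parameter values are hypothetical examples, not WAS defaults:

```python
# Hypothetical sizing sketch: scale single-server queue and pool settings
# when cloning one application server into a cluster of `members` servers.
# All numeric values are illustrative only, not WAS defaults.

def cluster_sizing(members, web_server_queue, web_container_threads,
                   datasource_pool, heap_mb, physical_mem_mb):
    """Adjust per-member settings for a cluster of `members` servers."""
    sized = {
        # The upstream Web server queue grows so that ample work is still
        # distributed to each Web container.
        "web_server_queue": web_server_queue * members,
        # Per-member thread pools shrink so the combined load does not
        # saturate the CPU or other shared resources.
        "web_container_threads_per_member": max(1, web_container_threads // members),
        # Per-member connection pools shrink so the total number of
        # connections does not saturate the database server.
        "datasource_pool_per_member": max(1, datasource_pool // members),
        # All JVM heaps together must fit in physical memory to avoid paging.
        "total_heap_mb": heap_mb * members,
    }
    sized["heaps_fit_in_memory"] = sized["total_heap_mb"] <= physical_mem_mb
    return sized

settings = cluster_sizing(members=2, web_server_queue=75,
                          web_container_threads=50, datasource_pool=30,
                          heap_mb=512, physical_mem_mb=4096)
print(settings)
```

The heap check mirrors the four-JVM example above: the point is that total heap, not per-JVM heap, must fit in physical memory.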
Important: When creating a cluster, it is possible to select an existing application server to use as a template for the cluster without adding that application server into the new cluster (the chosen application server is used only as a template, and is not affected in any way by the cluster creation). All other cluster members are then created based on the configuration of the first cluster member.
Cluster members can be added to a cluster in various ways, both during cluster creation and afterwards. During cluster creation, you can add one existing application server to the cluster, or create one or more new application servers and add them to the cluster. You can also add members to an existing cluster later. Depending on the capacity of your systems, you can define different weights for the individual cluster members.
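As a rough illustration of how member weights shape request distribution, the sketch below expands weights into a repeating routing sequence. The member names, weight values, and the simple expansion scheme are hypothetical; the actual WAS workload manager uses a more sophisticated weighted round-robin algorithm:

```python
# Simplified weighted routing sketch. Member names and weights are
# hypothetical; WAS workload management is more sophisticated than this.
from itertools import cycle

def build_routing_table(weights):
    """Expand member weights into a repeating routing sequence."""
    table = []
    for member, weight in weights.items():
        table.extend([member] * weight)
    return table

# memberB has capacity for 1.5x the load of memberA, so it gets weight 3 vs 2.
weights = {"memberA": 2, "memberB": 3}
routing = cycle(build_routing_table(weights))

# Over any 10 consecutive requests, memberA serves 4 and memberB serves 6.
first_ten = [next(routing) for _ in range(10)]
print(first_ten.count("memberA"), first_ten.count("memberB"))
```

The key property is proportionality: a member's share of requests equals its weight divided by the sum of all weights.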
Cluster members must have identical application components, but they can be sized differently in terms of weight, heap size, and other environmental factors. This allows clusters to span a heterogeneous environment, including multiple LPARs. (Do not, however, change anything that might result in different application behavior on each cluster member.)
Starting or stopping the cluster starts or stops all cluster members automatically, and changes to the application are propagated to all application servers in the cluster.
Figure 2-2 shows an example of a possible configuration that includes server clusters. Server Cluster 1 has two cluster members on node B only. Server Cluster 2, which is completely independent of Server Cluster 1, has two cluster members on node A and three cluster members on node B. Finally, node A also contains a free-standing application server that is not a member of any cluster.
Figure 2-2 Server clusters and cluster members
For high-workload environments, it is important to plan how the cluster will operate under both normal and failure conditions.
Recommendation: The high availability manager (HAManager) monitors many cluster-wide resources, and this monitoring consumes a certain amount of system resources. If the cluster members are paging or otherwise so busy that HAManager functions cannot operate effectively, the HAManager may interpret the slow responses as cluster member anomalies and trigger recovery events.
For this reason, we recommend that you do not run the application servers near full capacity during normal operation, so that they retain headroom to absorb load spikes when challenges arise. In addition, reducing virtual memory paging as much as possible results in a more reliable cluster operating environment.
When performing capacity planning for an LPAR, consider the memory and CPU demands of the application. For example, for CPU-bound applications, provision the LPAR with enough processor units to meet peak demand.
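A simple way to estimate the processor units an LPAR needs is to multiply the peak request rate by the CPU cost per request, then divide by a target utilization that leaves headroom. The function name, the 70% utilization target, and the sample figures below are all hypothetical:

```python
# Hypothetical LPAR CPU sizing sketch. The 0.7 utilization target and the
# sample workload figures are illustrative assumptions, not WAS guidance.

def required_processor_units(peak_requests_per_sec, cpu_sec_per_request,
                             target_utilization=0.7):
    """Processor units needed to serve peak load at the target utilization."""
    raw_cpu_demand = peak_requests_per_sec * cpu_sec_per_request
    return raw_cpu_demand / target_utilization

# 200 requests/sec at 10 ms of CPU each = 2.0 processor units of raw demand;
# at 70% target utilization, provision roughly 2.86 processor units.
units = required_processor_units(peak_requests_per_sec=200,
                                 cpu_sec_per_request=0.01)
print(round(units, 2))
```

Running at a target utilization below 100% is what provides the spike headroom recommended above.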