Introduction: Clusters

Clusters are groups of servers that are managed together and participate in workload management. A cluster can contain nodes or individual application servers. A node is usually a physical computer system with a distinct host IP address that is running one or more application servers. Clusters can be grouped under the configuration of a cell, which logically associates many servers and clusters, each with its own configuration and applications, in whatever way makes sense for the administrator's organizational environment.

Clusters are responsible for balancing workload among servers. Servers that are a part of a cluster are called cluster members. When you install an application on a cluster, the application is automatically installed on each cluster member. You can configure a cluster to provide workload balancing with service integration or with message-driven beans in the application server.
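
For example, you can create a cluster and add members to it with the wsadmin tool. The following Jython sketch uses hypothetical cluster, node, and member names; createCluster and createClusterMember are standard wsadmin commands, although the exact options can vary by product version.

    # Run under: wsadmin -lang jython, connected to the deployment manager.
    # Names such as cluster1, node1, and member1 are placeholders.

    # Create an empty cluster.
    AdminTask.createCluster('[-clusterConfig [-clusterName cluster1]]')

    # Add two members. Each member receives a copy of every application
    # that is installed on the cluster.
    AdminTask.createClusterMember('[-clusterName cluster1 -memberConfig [-memberNode node1 -memberName member1]]')
    AdminTask.createClusterMember('[-clusterName cluster1 -memberConfig [-memberNode node2 -memberName member2]]')

    # Save the configuration changes.
    AdminConfig.save()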

[AIX Solaris HP-UX Linux Windows] Because each cluster member contains the same applications, you can distribute client tasks in distributed platforms according to the capacities of the different machines by assigning weights to each server.

[AIX Solaris HP-UX Linux Windows] In distributed platforms, assigning weights to the servers in a cluster improves performance and failover. Tasks are assigned to servers that have the capacity to perform the task operations. If one server is unavailable to perform the task, it is assigned to another cluster member. This reassignment capability has obvious advantages over running a single application server that can become overloaded if too many requests are made.
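
As an illustration, the following wsadmin (Jython) sketch assigns a larger weight to a cluster member that runs on a more capable machine. The cluster and member names are placeholders; cluster member weights are stored on the ClusterMember configuration object.

    # Run under: wsadmin -lang jython. cluster1 and member1 are placeholders.
    cluster = AdminConfig.getid('/ServerCluster:cluster1/')

    # Show the current weight of each cluster member.
    for member in AdminConfig.list('ClusterMember', cluster).splitlines():
        print AdminConfig.showAttribute(member, 'memberName'), AdminConfig.showAttribute(member, 'weight')

    # Give member1 twice the default routing weight of 2, so it receives
    # proportionally more client tasks than the other members.
    member1 = AdminConfig.getid('/ServerCluster:cluster1/ClusterMember:member1/')
    AdminConfig.modify(member1, [['weight', '4']])
    AdminConfig.save()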

Cluster startup process options

New feature: Normal runtime processing automatically starts all server components during the server startup process. This processing applies to all servers, including servers that are part of a cluster. However, you can configure servers, including servers that are cluster members, so that not all of the server components start during the server startup process. This capability enables the server to consume resources as needed, thereby providing a smaller and more manageable footprint, and normally results in a performance improvement.

When you configure cluster members such that not all of the cluster member components start when the cluster or a specific cluster member is started, the cluster member components are dynamically started as they are needed. For example, if an application module starts that requires a specific server component, that component is dynamically started.
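
Independent of how its components start, a cluster can be started and stopped as a unit, for example through its Cluster MBean from the wsadmin tool. This Jython sketch assumes a cluster named cluster1 and a connection to a running deployment manager.

    # Run under: wsadmin -lang jython. cluster1 is a placeholder name.
    cluster = AdminControl.completeObjectName('type=Cluster,name=cluster1,*')

    # Start every member of the cluster; components that are configured
    # to start as needed are still started dynamically within each member.
    AdminControl.invoke(cluster, 'start')

    # Or restart the members one at a time so the cluster keeps serving requests:
    # AdminControl.invoke(cluster, 'rippleStart')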

Clusters and node groups

Any application you install to a cluster must be able to execute on any application server that is a member of that cluster. Because a node group forms the boundaries for a cluster, all of the members of a cluster must be members of the same node group. Therefore, for the application you deploy to run successfully, all of the members of a cluster must be located on nodes that meet the requirements for that application.

In a cell that has many different server configurations, it might be difficult to determine which nodes have the capabilities to host your application. A node group can be used to define groups of nodes that have enough in common to host members of a given cluster. All cluster members in a cluster must be in the same node group.

All nodes are members of at least one node group. When you create a cluster, the first application server you add to the cluster defines the node group within which all of the other cluster members must reside. All other cluster members you add to the cluster can only be on nodes that are members of this same node group. When you create a new cluster member in the administrative console, you can create the application server only on a node that is a member of the node group for that cluster.

Nodes can be members of multiple node groups. If the first cluster member you add to a cluster has multiple node groups defined, the system automatically chooses the node group that bounds the cluster. You can change the node group by modifying the cluster settings. Use the Server cluster settings page to change the node group.
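
To see which node group bounds a cluster, or to make a node eligible to host cluster members, you can inspect and modify node groups with the wsadmin tool. In this Jython sketch, DefaultNodeGroup is the node group that the product creates; node3 is a placeholder.

    # Run under: wsadmin -lang jython.
    # List all node groups in the cell.
    print AdminTask.listNodeGroups()

    # List the nodes that belong to one node group.
    ng = AdminConfig.getid('/NodeGroup:DefaultNodeGroup/')
    for member in AdminConfig.list('NodeGroupMember', ng).splitlines():
        print AdminConfig.showAttribute(member, 'nodeName')

    # Add a node to the node group so that it can host members of a
    # cluster that this node group bounds.
    AdminTask.addNodeGroupMember('[-nodeGroup DefaultNodeGroup -nodeName node3]')
    AdminConfig.save()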

Clusters and core groups

In a high availability environment, a group of clusters can be defined as a core group. All of the application servers defined as a member of one of the clusters included in a core group are automatically members of that core group. Individual application servers that are not members of a cluster can also be defined as a member of a core group. The use of core groups enables WebSphere® Application Server to provide high availability for applications that must always be available to end users. You can also configure core groups to communicate with each other using the core group bridge. The core groups can communicate within the same cell or across cells.
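
For example, you can move a whole cluster, or an individual application server, from the default core group into another core group with the wsadmin tool. In this Jython sketch, CoreGroup2, cluster1, node1, and server1 are placeholder names; DefaultCoreGroup is created by the product.

    # Run under: wsadmin -lang jython.
    # Move an entire cluster (and therefore all of its members) into a core group.
    AdminTask.moveClusterToCoreGroup('[-source DefaultCoreGroup -target CoreGroup2 -clusterName cluster1]')

    # Move an individual application server that is not a cluster member.
    AdminTask.moveServerToCoreGroup('[-source DefaultCoreGroup -target CoreGroup2 -nodeName node1 -serverName server1]')

    AdminConfig.save()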

Cluster members

New feature: You can improve system performance by configuring each cluster member so that its components are dynamically started as they are needed, instead of all of the components automatically starting when the cluster member starts. Selecting this option can improve cluster startup time and reduce the memory footprint of the cluster members. Starting components as they are needed is most effective if all of the applications that are deployed on the cluster are of the same type. For example, this option works better if all of your applications are Web applications that use servlets and JavaServer Pages (JSP). It works less effectively if your applications use servlets, JSPs, and Enterprise JavaBeans (EJB).
Avoid trouble: If you have clients running in an environment:
  • That includes Java thin clients,
  • Where requests are being routed between multiple cells, or
  • Where requests are being routed within a single cell that includes nodes from earlier versions of the product,
they might suddenly encounter a situation where the port information about the cluster members of the target cluster has become stale.

This situation most commonly occurs when all of the cluster members have dynamic ports and are restarted during a time period when no requests are being sent. The client process in this state will eventually attempt to route to the node agent to receive the new port data for the cluster members, and then use that new port data to route back to the members of the cluster.

If any issues occur that prevent the client from communicating with the node agent, or that prevent the new port data being propagated between the cluster members and the node agent, request failures might occur on the client. In some cases, these failures are temporary. In other cases you need to restart one or more processes to resolve a failure.

To circumvent the client routing problems that might arise in these cases, you can configure static ports on the cluster members. With static ports, the port data does not change as a client process gets information about the cluster members. Even if the cluster members are restarted, or there are communication or data propagation issues between processes, the port data the client holds is still valid. This circumvention does not necessarily solve the underlying communication or data propagation issues, but removes the symptoms of unexpected or uneven client routing decisions.
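
For example, the ports that clients use to reach a cluster member can be pinned to fixed values with the wsadmin modifyServerPort command. In this Jython sketch the node name, member name, and port numbers are placeholders; each cluster member needs its own unique static ports.

    # Run under: wsadmin -lang jython. node1, member1, and the ports are placeholders.
    # Pin the ORB listener port of cluster member member1.
    AdminTask.modifyServerPort('member1', '[-nodeName node1 -endPointName ORB_LISTENER_ADDRESS -port 9811 -modifyShared true]')

    # Pin the bootstrap port as well, so clients resolve a stable address.
    AdminTask.modifyServerPort('member1', '[-nodeName node1 -endPointName BOOTSTRAP_ADDRESS -port 9810 -modifyShared true]')

    AdminConfig.save()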


Clusters and feature packs

All members of a cluster must have the same software installed so that they can host the same applications. If any of the applications that you are targeting for a cluster require the functionality of one of the product feature packs, you must install the same level of the feature pack on all of the cluster members before you deploy that application. You must also install the same level of the feature pack on any subsequent members before you add them to the cluster.

If you accidentally install a higher level of the feature pack on a cluster member, any application that you deploy on that cluster cannot use any of the functions that are not supported by the level of the feature pack that is installed on the first cluster member.




Related concepts
Clusters and workload management
Service integration high availability and workload sharing configurations
Core group communications using the core group bridge service
Introduction: Application servers
Related tasks
Balancing workloads
Setting up a high availability environment
Creating clusters
Viewing, configuring, creating, and deleting node groups
Configuring high availability and workload sharing of service integration