Multiple-server SCA.SYSTEM bus with clustering

In a deployment manager cell, the SCA.SYSTEM bus can consist of multiple servers, some or all of which are members of server clusters.

When you configure a server bus member, that server runs a messaging engine. For many purposes this is sufficient, but such a messaging engine can run only in the server for which it was created. The server is therefore a single point of failure: if the server cannot run, the messaging engine is unavailable. If you configure a cluster bus member instead, the messaging engine can run in one server in the cluster, and if that server fails, the messaging engine can run in an alternative server in the cluster. This failover scenario is illustrated in Figure 1.
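
For example, a cluster can be added as a bus member with the wsadmin addSIBusMember command. The following Jython sketch assumes a cell named myCell (the SCA.SYSTEM bus takes its name from the cell) and a hypothetical cluster named MyCluster:

   wsadmin> AdminTask.addSIBusMember('[-bus SCA.SYSTEM.myCell.Bus -cluster MyCluster]')
   wsadmin> AdminConfig.save()

Adding a cluster as a bus member creates one messaging engine, which the high availability manager can start in any server in the cluster.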

Another advantage of configuring a cluster bus member is the ability to share the workload associated with an SCA module across multiple servers. For an SCA module deployed to a cluster bus member, the queue destinations that the module uses are partitioned across the set of messaging engines running in the cluster servers. The messaging engines in the cluster each handle a share of the messages passing through the SCA module. This workload-sharing scenario is illustrated in Figure 2.
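
Workload sharing requires more than one messaging engine in the cluster. As a minimal sketch, an additional messaging engine can be created for an existing cluster bus member with the createSIBEngine command, assuming the same bus and cluster names as above:

   wsadmin> AdminTask.createSIBEngine('[-bus SCA.SYSTEM.myCell.Bus -cluster MyCluster]')
   wsadmin> AdminConfig.save()

Each invocation adds one more messaging engine; queue destinations assigned to the cluster bus member are then partitioned across all of the engines.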

To summarize, with a cluster bus member you can achieve failover, workload sharing, or both, depending on the policies that you configure.
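
The behavior is governed by the high availability manager core group policy that matches each messaging engine: typically a "One of N" policy allows the engine to fail over between cluster servers, while a static policy pins an engine to particular servers. As a rough sketch, the configured policies can be inspected from wsadmin (Jython):

   wsadmin> print AdminConfig.list('OneOfNPolicy')
   wsadmin> print AdminConfig.list('StaticPolicy')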

Figure 1. Service integration bus with clustered server for failover
A service integration bus with a single member, a cluster bus member. The figure illustrates a messaging engine that can run in one server in the cluster; if that server fails, the messaging engine can run in an alternative server.
Figure 2. Service integration bus with clustered server for workload sharing
A service integration bus with a single member, a cluster bus member. The figure illustrates each server in the cluster running a messaging engine, with a bus destination partitioned across the messaging engines running in the cluster.
