This configuration consists of multiple messaging engines running in a cluster, with each messaging engine restricted to running on one particular server.
You achieve this configuration by adding a cluster as a member of a service integration bus, which automatically creates one messaging engine; you then add to the cluster any further messaging engines that you require. Because failover is not required, you configure policies that restrict each messaging engine to a particular server. A configuration sketch follows.
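For example, the following wsadmin (Jython) sketch outlines one way to set up this configuration. It is an illustration only, assuming WebSphere Application Server Version 7.0 or later with messaging engine policy assistance available; the bus name myBus and cluster name myCluster are placeholders, and the SCALABILITY policy name and exact parameters might differ in your environment.

   # Add the cluster to the bus; this creates the first messaging engine.
   # The SCALABILITY policy (assumed here; requires messaging engine
   # policy assistance) restricts each messaging engine to one server,
   # so no failover occurs.
   AdminTask.addSIBusMember(['-bus', 'myBus', '-cluster', 'myCluster',
                             '-enableAssistance', 'true',
                             '-policyName', 'SCALABILITY'])

   # Add a second messaging engine to the same cluster bus member.
   AdminTask.createSIBEngine(['-bus', 'myBus', '-cluster', 'myCluster'])

   # Save the configuration changes.
   AdminConfig.save()

If you do not use messaging engine policy assistance, you can instead create the equivalent "One of N" core group policies manually, with match criteria that tie each messaging engine to its intended server.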
This type of deployment provides workload sharing by partitioning destinations across multiple messaging engines. There is no failover capability, because each messaging engine can run on only one server. The impact of a failure is lower than in a simple deployment: if one of the servers or messaging engines in the cluster fails, the remaining messaging engines still have operational destination partitions. However, messages being handled by a messaging engine on a failed server are unavailable until that server can be restarted.
The diagram below shows a workload sharing configuration in which two messaging engines, ME-A and ME-B, with data stores DS-A and DS-B, run in a cluster of two servers and share the traffic passing through the destination. When Server-2 fails, ME-B no longer runs, because it is restricted to that server. However, ME-A continues to run and now handles all new traffic through the destination.
For more information about sharing workload between messaging engines, see Workload sharing.