There are several network topologies, clustered and non-clustered, that allow WebSphere® Application Server to interoperate with WebSphere MQ as an external JMS messaging provider. Some topologies are more suitable for providing high availability than others.
The WebSphere Application
Server high availability framework eliminates single points of failure
and provides peer-to-peer failover for applications and processes
running within WebSphere Application Server.
This framework also allows integration of WebSphere Application Server into an environment
that uses other high availability frameworks, such as High Availability
Cluster Multi-Processing (HACMP™),
in order to manage non-WebSphere Application Server resources.
The following examples show the main network topologies for interoperating
with WebSphere MQ using
the WebSphere MQ messaging
provider:
WebSphere Application Server application server
is not clustered and WebSphere MQ
queue manager is not clustered
There are two topology options:
- The WebSphere Application Server application
server and the WebSphere MQ
queue manager run on different hosts
- The WebSphere MQ transport
type for the connection is specified as "client", that is, a TCP/IP
network connection is used to communicate with the WebSphere MQ queue manager. Client mode
is also known as "socket attach".
The following figure shows a WebSphere Application Server application server
and a WebSphere MQ queue
manager running on different hosts.
This topology is vulnerable because interoperation ceases
if any of the following conditions occurs:
- The WebSphere Application Server application
server fails.
- The host on which the WebSphere Application Server application server
is running fails.
- The WebSphere MQ queue
manager fails.
- The host on which the WebSphere MQ
queue manager is running fails.
You can improve availability for this topology by using,
for example, HACMP to restart
the failed component automatically.
- The WebSphere Application Server application
server and the WebSphere MQ
queue manager run on the same host
- The WebSphere MQ transport
type for the connection is specified as "bindings", that is, a cross-memory
connection is established to a queue manager running on the same host.
Bindings mode is also known as "call attach".
The following figure
shows a WebSphere Application Server application
server and a WebSphere MQ
queue manager running on the same host.
The availability constraints for this topology are similar to those for the previous topology. However, in some configurations bindings mode is faster and more processor-efficient than client mode, because a cross-memory connection involves less processing than a TCP/IP connection. A configuration sketch illustrating the two transport types follows.
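In WebSphere Application Server, the transport type is normally set on the WebSphere MQ messaging provider connection factory in the administrative console. The following standalone sketch uses the WebSphere MQ classes for JMS only to illustrate the difference between the two modes; the host name, port, channel, and queue manager name are placeholders rather than values from any particular installation.

    import javax.jms.Connection;
    import javax.jms.JMSException;

    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class TransportTypeSketch {
        public static void main(String[] args) throws JMSException {
            MQConnectionFactory cf = new MQConnectionFactory();

            // Client mode ("socket attach"): communicate with the queue manager
            // over a TCP/IP network connection.
            cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
            cf.setHostName("mqhost.example.com");   // placeholder host
            cf.setPort(1414);                       // placeholder listener port
            cf.setChannel("SYSTEM.DEF.SVRCONN");    // placeholder channel
            cf.setQueueManager("QM1");              // placeholder queue manager

            // Bindings mode ("call attach"): a cross-memory connection to a queue
            // manager on the same host; no host, port, or channel is required.
            // cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);
            // cf.setQueueManager("QM1");

            Connection connection = cf.createConnection();
            connection.close();
        }
    }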
WebSphere Application Server application servers
are clustered but WebSphere MQ
queue manager is not clustered
There are two topology options:
- The WebSphere MQ
queue manager runs on a different host from any of the WebSphere Application Server application servers
The WebSphere MQ
transport type for each connection is specified as "client".
In
the following figure:
- WebSphere Application Server application
servers 1 and 3 are running on Host 1
- WebSphere Application Server application
server 2 is running on Host 2
- WebSphere MQ queue
manager is running on Host 3
- If any clustered WebSphere Application Server application
server fails, or the host on which it is running fails, the remaining
application servers in the cluster can take over its workload.
- If the WebSphere MQ
queue manager fails, or the host on which it is running fails, interoperation
ceases.
You can improve availability for this topology by using, for
example, HACMP to restart the
failed queue manager automatically.
- The WebSphere Application Server application
servers run on several hosts, one of which hosts a WebSphere MQ queue manager
For WebSphere Application Server application
servers that are running on the same host as the WebSphere MQ queue manager, the WebSphere MQ transport type
for the connection is specified as "bindings then client" mode, that
is, if an attempt at a bindings mode connection to the queue manager
fails, a client mode connection is made. For WebSphere Application Server application servers
that are not running on the same host as the WebSphere MQ queue manager, the application
server automatically uses client mode. A configuration sketch of the "bindings then client" setting follows this topology description.
The following figure
shows some WebSphere Application Server application
servers that are running on the same host as the WebSphere MQ queue manager. Other application
servers in the same WebSphere Application Server cluster
run on a different host.
In the following figure:
- WebSphere Application Server application
servers 1 and 3 are running on Host 1.
- WebSphere Application Server application
server 2 is running on Host 2.
- WebSphere MQ queue
manager is running on Host 1.
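The "bindings then client" setting can be sketched with the same WebSphere MQ classes for JMS used in the earlier transport-type example. This is an illustrative fragment rather than a complete program, and the connection details are placeholders.

    // Reuses com.ibm.mq.jms.MQConnectionFactory and
    // com.ibm.msg.client.wmq.WMQConstants from the earlier sketch.
    MQConnectionFactory cf = new MQConnectionFactory();

    // Attempt a cross-memory (bindings) connection first; if that attempt
    // fails, fall back to a TCP/IP (client) connection.
    cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS_THEN_CLIENT);
    cf.setQueueManager("QM1");                // placeholder queue manager
    cf.setHostName("mqhost.example.com");     // client details, used only if bindings fails
    cf.setPort(1414);
    cf.setChannel("SYSTEM.DEF.SVRCONN");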
WebSphere Application Server application servers
are clustered and WebSphere MQ
queue managers are clustered
WebSphere MQ queue managers are usually
clustered in order to distribute the message workload and because,
if one queue manager fails, the others can continue running.
In a WebSphere MQ cluster, a cluster queue can be hosted by more than one queue manager in the cluster, and every instance of a given cluster queue must have the same name. Other queue managers in the cluster distribute messages between the instances of the cluster queue in a way that achieves workload balancing.
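As an illustration of same-named cluster queue instances, the following MQSC sketch defines an instance of cluster queue Q1 on each of two queue managers; the cluster name CLUSTER1 is a placeholder, and the commands are run against each hosting queue manager in turn (for example, with runmqsc on distributed platforms).

    * On queue manager 1, define an instance of the cluster queue.
    DEFINE QLOCAL(Q1) CLUSTER(CLUSTER1)

    * On queue manager 2, define a second instance with the same name.
    DEFINE QLOCAL(Q1) CLUSTER(CLUSTER1)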
If
you use two-phase commit transactions, each WebSphere Application Server application server
must always reconnect to the same WebSphere MQ
queue manager to resolve in-doubt units of work.
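One way to meet this requirement, sketched below with the WebSphere MQ classes for JMS, is to give the XA-capable connection factory the name of one specific queue manager rather than a name that can resolve to different queue managers. In WebSphere Application Server the equivalent settings are normally made on the messaging provider connection factory in the administrative console; the names below are placeholders.

    // MQXAConnectionFactory (from com.ibm.mq.jms) is the XA-capable variant
    // used for two-phase commit work.
    MQXAConnectionFactory xacf = new MQXAConnectionFactory();
    xacf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
    xacf.setQueueManager("QM1");              // one specific queue manager, so recovery
                                              // always returns to the same place
    xacf.setHostName("mqhost1.example.com");  // placeholder connection details
    xacf.setPort(1414);
    xacf.setChannel("SYSTEM.DEF.SVRCONN");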
There are two
topology options:
- The WebSphere MQ queue
managers run on different hosts from the WebSphere Application Server application servers
- In the following figure:
- WebSphere Application Server application
servers 1 and 3 are running on Host 1.
- WebSphere Application Server application
server 2 is running on Host 2.
- WebSphere Application Server application
servers 1 and 2 attach in client mode to WebSphere MQ queue manager 1, which is
running on Host 3.
- WebSphere Application Server application
server 3 attaches in client mode to WebSphere MQ queue manager 2, which is
running on Host 4.
- Queue managers 1, 2 and 3 are part of the same WebSphere MQ cluster. Queue manager 1 is
running on Host 3, queue manager 2 is running on Host 4, and queue
manager 3 is running on Host 5.
Queue manager 3 is responsible for distributing messages between
the cluster queues in a way that achieves workload balancing.
- If WebSphere Application Server application
server 1 or 2 fails:
- The other WebSphere Application Server application server can take over the failed server's workload, because both servers are attached to queue manager 1.
- If WebSphere Application Server application
server 3 fails:
- Restart it as soon as possible for the following reasons:
- Other WebSphere Application Server application
servers in the cluster can take over its external workload, but no
other application server can take over its WebSphere MQ workload, because no other
application server is attached to queue manager 2. The workload that
was generated by application server 3 ceases.
- WebSphere MQ continues
to distribute work between queue manager 1 and queue manager 2, even
though the workload arriving at queue manager 2 cannot be consumed
by application server 1 or 2. You can alleviate this situation by
manually configuring Q1 on queue manager 2 so that the ability to
put messages to it is inhibited. This results in all messages being
sent to queue manager 1 where they are processed by the other application
servers. (An example of the queue manager command for inhibiting puts follows the two topology options.)
- If queue manager 1 fails:
- Messages that are on queue manager 1 when it fails are not processed
until you restart queue manager 1.
- No new messages from WebSphere MQ applications are sent to queue manager 1; instead, new messages are sent to queue manager 2 and consumed by WebSphere Application Server application server 3.
- Because WebSphere Application Server application
servers 1 and 2 are not attached to queue manager 2, they cannot take
on any of its workload.
- Because WebSphere Application Server application
servers 1, 2 and 3 are in the same WebSphere Application Server cluster, their
non-WebSphere MQ workload continues to be distributed between them
all, even though application servers 1 and 2 cannot use WebSphere MQ because queue manager 1 has
failed.
- To contain this situation, restart queue manager 1 as soon as
possible.
Although this networking topology can provide availability
and scalability, the relationship between the workload on different
queue managers and the WebSphere Application Server application
servers to which they are connected is complex. You can contact your IBM® representative to obtain expert
advice.
- The WebSphere MQ queue
managers run on the same hosts as the WebSphere Application Server application servers
- In the following figure:
- WebSphere Application Server application
servers 1 and 3 are running on Host 1, and they attach to WebSphere MQ queue manager
1 in bindings mode.
- WebSphere Application Server application
server 2 is running on Host 2, and attaches to WebSphere MQ queue manager 2 in bindings
mode.
- Queue managers 1, 2 and 3 are part of the same WebSphere MQ cluster. Queue manager 1 is
running on Host 1, queue manager 2 is running on Host 2, and queue
manager 3 is running on Host 3.
Queue manager 3 is responsible for distributing messages between
the cluster queues in a way that achieves workload balancing.
- If WebSphere Application Server application
server 1 or 3 fails:
- The other application server can take over the failed server's workload because both are attached to queue manager 1.
- If WebSphere Application Server application
server 2 fails:
- Restart it as soon as possible for the following reasons:
- Other application servers in the cluster can take over its external
workload, but no other application server can take over its WebSphere MQ workload, because
no other application server is attached to queue manager 2. The workload
that was generated by WebSphere Application Server application
server 2 ceases.
- WebSphere MQ continues
to distribute work between queue manager 1 and queue manager 2, even
though the workload arriving at queue manager 2 cannot be taken on
by application server 2. You can alleviate this situation by manually
configuring Q1 on queue manager 2 so that the ability to put messages
to it is inhibited. This results in all messages being sent to queue
manager 1 where they are processed by the other application servers.
- If queue manager 1 fails:
- Messages that are on queue manager 1 when it fails are not processed
until you restart queue manager 1.
- No new messages from WebSphere MQ applications are sent to queue manager 1; instead, new messages are sent to queue manager 2 and consumed by WebSphere Application Server application server 2.
- Because WebSphere Application Server application
servers 1 and 3 are not attached to queue manager 2, they cannot take
on any of its workload.
- Because WebSphere Application Server application
servers 1, 2 and 3 are in the same WebSphere Application Server cluster, their
non-WebSphere MQ workload continues to be distributed between them
all, even though WebSphere Application Server application
servers 1 and 3 cannot use WebSphere MQ
because queue manager 1 has failed.
- To contain this situation, restart queue manager 1 as soon as
possible.
Although this networking topology can provide
availability and scalability, the relationship between the workload
on different queue managers and the WebSphere Application Server application servers
with which they are connected is complex. You can contact your IBM representative to obtain expert
advice.
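Both topology options above suggest inhibiting puts on Q1 at queue manager 2 while no application server is attached to that queue manager. As a sketch, assuming administrative (MQSC) access to queue manager 2, the change might be made with a command such as the following, and reversed with PUT(ENABLED) when the attached application server is available again.

    ALTER QLOCAL(Q1) PUT(INHIBITED)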
WebSphere Application Server application servers
connect to WebSphere MQ for z/OS with queue-sharing
groups
On z/OS systems,
a WebSphere Application Server application
server can connect to a queue manager that is a member of a WebSphere MQ for z/OS queue-sharing group. You
can configure the connection so that it selects a specific named queue
manager, or you can configure it to accept any queue manager in the
queue-sharing group.
If you configure a connection to select
a specific named queue manager, your options for providing high availability
are like those for connecting to WebSphere MQ on
other platforms. However, if you configure the connection to accept
any queue manager in the queue-sharing group, you can improve availability.
In this situation, when the WebSphere Application Server application server reconnects after a WebSphere MQ queue manager failure, it can connect to a different queue manager that has not failed.
A connection that
you configure to accept any queue manager must only be used to access
shared queues. A shared queue is a single queue that all queue managers
in the queue-sharing group can access. It does not matter which queue
manager an application uses to access a shared queue. Even if the
same application instance uses different queue managers to access
the same shared queue, this always produces consistent results.
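One way to configure a connection that accepts any queue manager, sketched below with the WebSphere MQ classes for JMS, is to specify the queue-sharing group name in place of a specific queue manager name. Treat this as an illustrative assumption: the queue-sharing group name QSG1 and the network details are placeholders, and in WebSphere Application Server the equivalent settings are made on the messaging provider connection factory.

    // Accept any member of the queue-sharing group by naming the group rather
    // than an individual queue manager (QSG1 is a placeholder group name).
    MQConnectionFactory cf = new MQConnectionFactory();
    cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
    cf.setQueueManager("QSG1");
    cf.setHostName("sysplex-vip.example.com"); // placeholder shared network address
    cf.setPort(1414);
    cf.setChannel("SYSTEM.DEF.SVRCONN");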
These
examples show two topology options for connecting to
WebSphere MQ for z/OS to benefit from queue-sharing
groups:
- The WebSphere Application Server application
servers and the WebSphere MQ
queue managers run in the same logical partition (LPAR)
- In the following figure:
- WebSphere Application Server application
server 1 is running in LPAR 1, and it attaches in bindings mode to WebSphere MQ queue manager
1, which is also running in LPAR 1.
- WebSphere Application Server application
server 2 is running in LPAR 2, and it attaches in bindings mode to WebSphere MQ queue manager
2, which is also running in LPAR 2. The two application servers are
part of a WebSphere Application Server cluster.
- Queue managers 1 and 2 are members of a WebSphere MQ queue-sharing group that hosts
a shared queue, Q1. The shared queue is located in a coupling facility.
This networking topology can benefit from "pull" workload
balancing if several application instances, including instances running
in different LPARs, are processing messages from the same shared queue.
You
can improve availability for this topology by using the z/OS Automatic Restart Manager (ARM) to restart
failed application servers or queue managers. ARM can restart an application
server in a different LPAR, where the application server can connect
to a running queue manager, instead of waiting for a restart of the
queue manager that it was using previously. In the example used here,
ARM can restart WebSphere Application Server application
server 1 in LPAR 2, where it can connect to WebSphere MQ queue manager 2, instead of
waiting for queue manager 1 to restart.
- The WebSphere Application Server application
servers and the WebSphere MQ
queue managers run in different logical partitions (LPARs)
- In the following figure:
- WebSphere MQ queue
managers 1 and 2 are members of a WebSphere MQ
queue-sharing group that hosts a shared queue, Q1. The shared queue
is located in a coupling facility. The two queue managers run in different
LPARs.
- Several WebSphere Application Server application
servers have a client mode (TCP/IP) connection to the WebSphere MQ queue managers. All the client
mode connections are managed by the z/OS sysplex
distributor, which selects either queue manager 1 or queue manager
2 for each connection request.
As with the bindings mode connection example, this networking
topology can benefit from "pull" workload balancing if several application
instances running in the same or different WebSphere Application Server application servers
are processing messages from the same shared queue.
The use
of the z/OS sysplex distributor
improves availability for this networking topology. If one of the WebSphere MQ queue managers
fails, the z/OS sysplex distributor
can connect applications running in the WebSphere Application Server application servers
to the other queue manager, without waiting for the failed queue manager
to restart. In the example used here, if WebSphere MQ queue manager 1 fails, the z/OS sysplex distributor can select
queue manager 2 for every connection request, until queue manager
1 restarts.
In this networking topology, WebSphere MQ for z/OS GROUP units of recovery must be enabled
on all the queue managers in the queue-sharing group. TCP/IP (client
mode) connections that accept any queue manager use GROUP units of
recovery. GROUP units of recovery are not supported by versions of WebSphere MQ for z/OS earlier than version 7.0.1. Bindings mode
connections do not require GROUP units of recovery.
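As a sketch, GROUP units of recovery are controlled by a queue manager attribute in WebSphere MQ for z/OS version 7.0.1 and later. Assuming MQSC access to the queue managers, they might be enabled on each member of the queue-sharing group with a command such as:

    ALTER QMGR GROUPUR(ENABLED)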