Connecting WebSphere application servers to IBM MQ for z/OS with queue-sharing groups

On z/OS® systems, an application server can connect to a queue manager that is a member of an IBM MQ for z/OS queue-sharing group. You can configure the connection so that it selects a specific named queue manager, or you can configure it to accept any queue manager in the queue-sharing group.

Note: In this topic, "application server" refers to a WebSphere Application Server application server, and "queue manager" refers to an IBM MQ queue manager.

If you configure a connection to select a specific named queue manager, your options for providing high availability are like those for connecting to IBM MQ on other platforms. However, you can improve availability if you configure the connection to accept any queue manager in the queue-sharing group. In this situation, when the application server reconnects after an IBM MQ queue manager failure, it can accept a connection to a different queue manager that has not failed.

A connection that you configure to accept any queue manager must only be used to access shared queues. A shared queue is a single queue that all queue managers in the queue-sharing group can access. It does not matter which queue manager an application uses to access a shared queue. Even if the same application instance uses different queue managers to access the same shared queue, this always produces consistent results.
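For illustration, the following sketch shows application code that puts a message on the shared queue Q1 through an administratively defined JMS connection factory. The JNDI names jms/CF1 and jms/Q1 are assumptions for this example; the application code is the same whichever queue manager in the queue-sharing group the connection factory resolves to.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class SharedQueueSender {
    public void sendOne() throws JMSException, NamingException {
        InitialContext ctx = new InitialContext();

        // Assumed JNDI names for a connection factory that accepts any queue manager
        // in the queue-sharing group, and for the shared queue Q1.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF1");
        Queue sharedQueue = (Queue) ctx.lookup("jms/Q1");

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(sharedQueue);

            // The message lands on the same shared queue in the coupling facility,
            // regardless of which queue manager this connection is attached to.
            producer.send(session.createTextMessage("example payload"));
        } finally {
            connection.close();
        }
    }
}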

These examples show two topology options for connecting to IBM MQ for z/OS to benefit from queue-sharing groups:
  • The application servers and the queue managers run in the same logical partition (LPAR)
  • The application servers and the queue managers run in different logical partitions (LPARs)

The application servers and the queue managers run in the same logical partition (LPAR)

The following figure shows a bindings mode connection from WebSphere Application Server to IBM MQ for z/OS. The figure shows the following configuration:

  • Application servers 1 and 2 are part of a WebSphere Application Server cluster.
  • Application server 1 is running in LPAR 1.
  • Application server 2 is running in LPAR 2.
  • Queue managers 1 and 2 are members of an IBM MQ queue-sharing group that hosts a shared queue, Q1. The shared queue is located in a coupling facility.
  • Queue manager 1 is running in LPAR 1.
  • Queue manager 2 is running in LPAR 2.
  • A "bindings" connection is used when the application server and the queue manager are running on the same host. This is a cross-memory connection that is established to a queue manager running on the same host. A bindings connection is also known as "call attach".
    • Application server 1 and queue manager 1 are attached to each other in bindings mode.
    • Application server 2 and queue manager 2 are attached to each other in bindings mode.
Figure 1. WebSphere Application Server with bindings mode connection to IBM MQ for z/OS
WebSphere Application Server application server 1 is running in LPAR 1, and WebSphere Application Server application server 2 is running in LPAR 2. The two application servers are part of a WebSphere Application Server cluster. IBM MQ queue manager 1 is running in LPAR 1, and IBM MQ queue manager 2 is running in LPAR 2. The queue managers are members of an IBM MQ queue-sharing group that hosts a shared queue, Q1, located in a coupling facility. WebSphere Application Server application server 1 is connected to IBM MQ queue manager 1 in LPAR 1, and WebSphere Application Server application server 2 is connected to IBM MQ queue manager 2 in LPAR 2.
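In WebSphere Application Server, the connection factory for this topology is normally defined administratively. The following standalone sketch shows the equivalent bindings mode settings using the IBM MQ classes for JMS; the queue-sharing group name QSG1 is an assumption for this example.

import javax.jms.JMSException;

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class BindingsModeFactory {
    public static JmsConnectionFactory create() throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();

        // Bindings mode: a cross-memory ("call attach") connection to a queue
        // manager that runs in the same LPAR as the application server.
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_BINDINGS);

        // Naming the queue-sharing group (assumed name: QSG1) rather than queue
        // manager 1 or queue manager 2 lets each application server attach to
        // whichever member is running in its own LPAR.
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QSG1");
        return cf;
    }
}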

This networking topology can benefit from "pull" workload balancing if several application instances, including instances running in different LPARs, are processing messages from the same shared queue.
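A minimal sketch of such an application instance is shown below, using the same assumed JNDI names as the earlier example. Each running instance, in either LPAR, pulls the next available message from the shared queue, so work is shared among whichever instances are free.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class SharedQueueWorker {
    public void processMessages() throws JMSException, NamingException {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF1"); // assumed JNDI name
        Queue sharedQueue = (Queue) ctx.lookup("jms/Q1");                 // assumed JNDI name for Q1

        Connection connection = cf.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(sharedQueue);

            // Destructive get from the shared queue: each message is delivered to
            // exactly one instance, whichever pulls it first.
            Message message;
            while ((message = consumer.receive(5000)) != null) {
                // process the message ...
            }
        } finally {
            connection.close();
        }
    }
}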

You can improve availability for this topology by using the z/OS Automatic Restart Manager (ARM) to restart failed application servers or queue managers. If a queue manager in an LPAR fails, ARM can restart an application server in a different LPAR, where the application server can connect to a running queue manager, instead of waiting for a restart of the queue manager that it was using previously. In the example used here, ARM can restart WebSphere Application Server application server 1 in LPAR 2, where it can connect to IBM MQ queue manager 2, instead of waiting for queue manager 1 to restart.

The application servers and the queue managers run in different logical partitions (LPARs)

The following figure shows a client mode connection from WebSphere Application Server to IBM MQ for z/OS. The figure shows the following configuration:

  • Queue managers 1 and 2 are members of an IBM MQ queue-sharing group that hosts a shared queue, Q1. The shared queue is located in a coupling facility. The two queue managers run in different LPARs.
  • A "client" connection is used when the application server and queue manager are running on different hosts. This is a TCP/IP network connection that is used to communicate with the queue manager. A client connection is also known as "socket attach".
    • Multiple application servers have a client mode (TCP/IP) connection to the queue managers. All the client mode connections are managed by the z/OS sysplex distributor, which selects either queue manager 1 or queue manager 2 for each connection request.
Figure 2. WebSphere Application Server with client mode connection to IBM MQ for z/OS
IBM MQ queue manager 1 is running in LPAR 1, and IBM MQ queue manager 2 is running in LPAR 2. The queue managers are members of an IBM MQ queue-sharing group that hosts a shared queue, Q1, located in a coupling facility. Several WebSphere Application Server application servers connect to the queue managers using a client mode connection. All the connections go through the z/OS sysplex distributor, which selects either queue manager 1 or queue manager 2 for each connection request.
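The following standalone sketch shows the equivalent client mode settings using the IBM MQ classes for JMS. The host name, port, channel name, and queue-sharing group name are assumptions for this example; in WebSphere Application Server these values are normally set on the administratively defined connection factory.

import javax.jms.JMSException;

import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ClientModeFactory {
    public static JmsConnectionFactory create() throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();

        // Client mode: a TCP/IP ("socket attach") connection from an application
        // server that runs in a different LPAR from the queue managers.
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

        // The host name is assumed to resolve to the distributed DVIPA owned by the
        // z/OS sysplex distributor, which routes each connection request to queue
        // manager 1 or queue manager 2. Host, port, and channel are example values.
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "mvs-qsg.example.com");
        cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "WAS.CLIENTS");

        // Name the queue-sharing group (assumed name: QSG1) so that any member
        // selected by the sysplex distributor is acceptable.
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "QSG1");
        return cf;
    }
}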

As with the bindings mode connection example, this networking topology can benefit from "pull" workload balancing if several application instances running in the same or different application servers are processing messages from the same shared queue.

The use of the z/OS sysplex distributor improves availability for this networking topology. If one of the queue managers fails, the z/OS sysplex distributor can connect applications running in the application servers to the other queue manager, without waiting for the failed queue manager to restart. In the example used here, if queue manager 1 fails, the z/OS sysplex distributor can select queue manager 2 for every connection request, until queue manager 1 restarts.

Note: In this networking topology, IBM MQ for z/OS GROUP units of recovery must be enabled on all the queue managers in the queue-sharing group. TCP/IP (client mode) connections that accept any queue manager use GROUP units of recovery. GROUP units of recovery are not supported by versions of IBM MQ for z/OS earlier than Version 7.0.1. Bindings mode connections do not require GROUP units of recovery.
