Installation tasks

The following installation tasks must be performed to implement MQ intercommunication:

Planning the installation

Before you install and configure the Remote Agent, you should address a number of planning considerations, including the following:

Who will be responsible for establishing the configurations at the spoke sites?

Because the implementer at the hub site typically has primary responsibility for planning the overall process, this appendix describes the necessary installation tasks for both the hub and spoke sites.

What are the security needs of the hub site? The spoke site?

Your security requirements can differ from those of your trading partners, and there might be different requirements among your trading partners. See Security for some of the choices that you can make in setting the configuration properties that define your levels of security.

What configuration properties need to be coordinated between the hub and spoke sites?

Certain configuration properties, including port numbers and some security settings, must be coordinated between the hub and spoke sites.

Configuring the IBM Java ORB for use with Remote Agents

On the hub site, the IBM Java ORB and its Transient Naming Server are installed automatically with the ICS installer. For communication between the ICS and adapters over the Internet, configure a fixed port with the OAport configuration parameter on both the spoke and hub sites.

Note:
The port number for the hub (ICS) port, which identifies the channel for information flowing from an adapter to ICS, must be different from the port number for the spoke port, which identifies the channel for information flowing from ICS to an adapter.

For more information on the OAport parameter, see its description in the CORBA section of the ICS configuration file. You must also set up the WebSphere MQ Trigger Monitor, as described in the section Setting up an Object Activation Daemon.
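
The OAport coordination described above can be sketched as follows. This is an illustrative sketch only: the port values are assumptions, and the exact syntax of the configuration file depends on your ICS version, so treat this as a shape to match against your actual InterchangeSystem.cfg and adapter configuration.

```
; Hub (ICS) side -- CORBA section of the ICS configuration file
[CORBA]
OAport=15555        ; fixed port for adapter-to-ICS traffic (illustrative value)

; Spoke (adapter) side -- CORBA section of the adapter's local configuration
[CORBA]
OAport=15556        ; must differ from the hub OAport, per the note above
```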

Configuring the Remote Agent

The Remote Agent can be configured for use with either Native WebSphere MQ or HTTP/HTTPS protocols for communication over the Internet. The Native WebSphere MQ option is configured using only the software delivered with the product. The HTTP option requires WebSphere MQ Internet Pass-Thru (MQIPT), which is not delivered and must be purchased separately. This section describes both configurations.

Note:
JMS is the only supported transport for both configurations.

Native WebSphere MQ

This configuration option uses the WebSphere MQ protocol, along with Secure Sockets Layer (SSL), to ensure secure communication over the Internet. This configuration provides better performance; however, it requires that a port be opened in the firewall to allow WebSphere MQ traffic through. See Figure 21.

Channels must be configured for bidirectional communication between InterChange Server and the remote agent. Two channels are required: one for each direction.

Note:
The following steps assume that MQ1 and MQ2 are listening on port 1414.

To configure channels for Native WebSphere MQ

  1. Channel 1 (MQ1 is the sender and MQ2 is the receiver):
    1. Create the CHANNEL1 sender channel on MQ1.
    2. Create the CHANNEL1 receiver channel on MQ2.
  2. Channel 2 (MQ2 is the sender and MQ1 is the receiver):
    1. Create the CHANNEL2 sender channel on MQ2.
    2. Create the CHANNEL2 receiver channel on MQ1.
  3. Configure firewall 1 to forward traffic on port 1414 to MQ1 and configure firewall 2 to forward traffic on port 1414 to MQ2.
    Note:
    Assume that MQ1 and MQ2 are listening on port 1414 and that the firewall allows network traffic based on port forwarding. The actual configuration might change, depending on the type of firewall being used.
  4. Set the IP address of sender Channel 1 to the connection name of firewall 2.
  5. Set the IP address of sender Channel 2 to the connection name of firewall 1.
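
The channel steps above can be sketched in MQSC. The connection names (firewall1.example.com, firewall2.example.com) and transmission queue names (MQ1.XMITQ, MQ2.XMITQ) are illustrative assumptions; the shipped RemoteServerSample.mqsc and RemoteAgentSample.mqsc scripts are the authoritative reference.

```mqsc
* Run on MQ1: sender for Channel 1, receiver for Channel 2
* The sender CONNAME points at firewall 2, which forwards port 1414 to MQ2
DEFINE CHANNEL(CHANNEL1) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('firewall2.example.com(1414)') XMITQ(MQ2.XMITQ)
DEFINE CHANNEL(CHANNEL2) CHLTYPE(RCVR) TRPTYPE(TCP)

* Run on MQ2: sender for Channel 2, receiver for Channel 1
* The sender CONNAME points at firewall 1, which forwards port 1414 to MQ1
DEFINE CHANNEL(CHANNEL2) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('firewall1.example.com(1414)') XMITQ(MQ1.XMITQ)
DEFINE CHANNEL(CHANNEL1) CHLTYPE(RCVR) TRPTYPE(TCP)
```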

To configure queues for Native WebSphere MQ

Note:
Refer to Configuring WebSphere MQ for JMS for more information on setting up JMS queues.

  1. MQ1 (Q1 is used for server to agent communication):
    1. Set Q1 as the remote queue and Q2 as the local queue.
    2. Set MQ2 as the remote queue manager for Q1.
  2. MQ2 (Q2 is used for agent to server communication):
    1. Set Q2 as the remote queue and Q1 as the local queue.
    2. Set MQ1 as the remote queue manager for Q2.
  3. Set up a transmission queue on each queue manager.
  4. Set up a dead letter queue on each queue manager.
  5. Confirm that the fault queue is local to each queue manager.
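
The queue definitions above can be sketched in MQSC. The transmission queue and dead letter queue names are illustrative assumptions; the remote/local pairing of Q1 and Q2 follows the steps above.

```mqsc
* Run on MQ1 (server side): Q1 is remote (server-to-agent), Q2 is local
DEFINE QLOCAL(MQ2.XMITQ) USAGE(XMITQ)
DEFINE QREMOTE(Q1) RNAME(Q1) RQMNAME(MQ2) XMITQ(MQ2.XMITQ)
DEFINE QLOCAL(Q2)
DEFINE QLOCAL(DEAD.LETTER.Q)
ALTER QMGR DEADQ(DEAD.LETTER.Q)

* Run on MQ2 (agent side): Q2 is remote (agent-to-server), Q1 is local
DEFINE QLOCAL(MQ1.XMITQ) USAGE(XMITQ)
DEFINE QREMOTE(Q2) RNAME(Q2) RQMNAME(MQ1) XMITQ(MQ1.XMITQ)
DEFINE QLOCAL(Q1)
DEFINE QLOCAL(DEAD.LETTER.Q)
ALTER QMGR DEADQ(DEAD.LETTER.Q)
```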

Refer to the RemoteAgentSample.mqsc and RemoteServerSample.mqsc sample scripts, located in ProductDir/mqseries, for examples of how to configure the queue managers.

By default, InterChange Server creates queue managers with mixed-case names, for example ICS430.queue.manager. However, when you define the queues needed for the Remote Agent, WebSphere MQ automatically converts all queue names to uppercase. Because remote queue definitions are case sensitive, this mismatch causes messages to fail to flow out of the queues. To resolve the problem, use MQ Explorer to edit the Remote Queue Manager field of every remote queue definition, on both queue managers, so that the case matches the actual queue manager name.
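
As an alternative to editing the field in MQ Explorer, the case can be corrected in runmqsc, because single-quoted names in MQSC keep their case rather than being folded to uppercase. The queue manager name ICS430.queue.manager below is illustrative.

```mqsc
* Quoted names preserve case; unquoted names are folded to uppercase
ALTER QREMOTE(Q1) RQMNAME('ICS430.queue.manager')
```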

It is possible to have InterChange Server and the adapter reside on the intranet, with application servers in the demilitarized zone (DMZ). Such a configuration is acceptable provided that the adapter is not configured as a remote agent. If the adapter and the application server are in different subnets, the only way for the adapter to communicate with the application server is to explicitly include both the hostname and the IP address of the application server in the /etc/hosts file on the adapter machine.
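
For example, assuming the application server's hostname is appserver1.example.com at address 192.0.2.10 (both values are illustrative), the adapter machine's /etc/hosts would include a line such as:

```
192.0.2.10    appserver1.example.com    appserver1
```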

Figure 21. Native WebSphere MQ configuration

The figure shows the Native WebSphere MQ configuration. The flow chart is linear and bidirectional from a "Server" on the left to an "Agent" on the right. The Server node is connected to MQ1 (these two nodes reside in an area labelled "Intranet") which then communicates across a vertical line labelled Firewall 1. A demilitarized zone (DMZ) separates the Firewall from the internet where the WebSphere MQ resides. The figure is then symmetric with a second DMZ, Firewall 2, MQ2 and finally the Agent node (the Agent and MQ2 part of the diagram is also labelled "Intranet"). In this configuration, Q1 is used for Server to Agent communication and Q2 for Agent to Server communication.

HTTP/HTTPS

This configuration option uses WebSphere MQ Internet Pass-Thru (MQIPT) to pass information over the Internet using HTTP. See Figure 22.

You must define routes to specify the port, IP address, and SSL details. For bidirectional communication between InterChange Server and the agent, two routes are required at each MQIPT: one for each direction.

Channels must be configured for bidirectional communication between InterChange Server and the agent. Two channels are required: one for each direction.

Note:
The following steps assume that MQ1 and MQ2 are listening on port 1414.

To configure channels for HTTP/HTTPS

  1. Channel 1 (MQ1 is the sender and MQ2 is the receiver):
    1. Create the CHANNEL1 sender channel on MQ1.
    2. Create the CHANNEL1 receiver channel on MQ2.
  2. Channel 2 (MQ2 is the sender and MQ1 is the receiver):
    1. Create the CHANNEL2 sender channel on MQ2.
    2. Create the CHANNEL2 receiver channel on MQ1.
  3. Set the ConnectionName of CHANNEL1 to the IP address and ListenerPort of MQIPT1.
  4. Set the ConnectionName of CHANNEL2 to the IP address and ListenerPort of MQIPT2.
  5. Set firewall 1 to forward all traffic on the ListenerPort to MQIPT1.
  6. Set firewall 2 to forward all traffic on the ListenerPort to MQIPT2.
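
The sender-channel side of the steps above can be sketched in MQSC. The hostnames, listener port 1415, and transmission queue names are illustrative assumptions; the point is that each sender's CONNAME targets its local MQIPT listener rather than the remote queue manager.

```mqsc
* Run on MQ1: CHANNEL1 connects to the local MQIPT1 listener (step 3)
DEFINE CHANNEL(CHANNEL1) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('mqipt1.example.com(1415)') XMITQ(MQ2.XMITQ)

* Run on MQ2: CHANNEL2 connects to the local MQIPT2 listener (step 4)
DEFINE CHANNEL(CHANNEL2) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('mqipt2.example.com(1415)') XMITQ(MQ1.XMITQ)
```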

To configure queues for HTTP/HTTPS

Note:
Refer to Configuring WebSphere MQ for JMS for more information on setting up JMS queues.
  1. MQ1 (Q1 is used for server to agent communication):
    1. Set Q1 as the remote queue and Q2 as the local queue.
    2. Set MQ2 as the remote queue manager for Q1.
  2. MQ2 (Q2 is used for agent to server communication):
    1. Set Q2 as the remote queue and Q1 as the local queue.
    2. Set MQ1 as the remote queue manager for Q2.
  3. Set up a transmission queue on each queue manager.
  4. Set up a dead letter queue on each queue manager.
  5. Confirm that the fault queue is local to each queue manager.

Refer to the RemoteAgentSample.mqsc and RemoteServerSample.mqsc sample scripts, located in ProductDir/mqseries, to configure the queue managers.

To configure routes for MQIPT1
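
This section has no detail in the source, so the following mqipt.conf fragment is an illustrative sketch only. It shows the two routes described earlier, one per direction; all hostnames and ports (1415, 1416) are assumptions, and SSL properties should be added according to your security requirements.

```ini
[route]
# Outbound: accept CHANNEL1 traffic from MQ1 and tunnel it over HTTP to MQIPT2
ListenerPort=1415
Destination=mqipt2.example.com
DestinationPort=1416
HTTP=true
HTTPServer=mqipt2.example.com
HTTPServerPort=1416

[route]
# Inbound: accept traffic arriving from MQIPT2 and forward it to MQ1
ListenerPort=1416
Destination=mq1.example.com
DestinationPort=1414
```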

To configure routes for MQIPT2
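
This section also has no detail in the source; the following mqipt.conf fragment is an illustrative sketch mirroring the MQIPT1 side. All hostnames and ports (1415, 1416) are assumptions, and SSL properties should be added according to your security requirements.

```ini
[route]
# Outbound: accept CHANNEL2 traffic from MQ2 and tunnel it over HTTP to MQIPT1
ListenerPort=1415
Destination=mqipt1.example.com
DestinationPort=1416
HTTP=true
HTTPServer=mqipt1.example.com
HTTPServerPort=1416

[route]
# Inbound: accept traffic arriving from MQIPT1 and forward it to MQ2
ListenerPort=1416
Destination=mq2.example.com
DestinationPort=1414
```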

Figure 22. HTTP/HTTPS configuration

The figure shows the configuration for HTTP/HTTPS. The flow chart is linear, bidirectional, and symmetric from a Server on the far left to an Agent on the far right. There are three major areas, labelled Intranet/Internet/Intranet, separated by Firewalls 1 and 2. The central "Internet" portion (also labelled HTTP/HTTPS) is further divided into a central portion called "Internet" flanked by demilitarized zones (DMZs), which abut the firewalls. The Server communicates through the MQ1 and MQIPT1 nodes, which then communicate through the firewalls to the MQIPT2 and MQ2 nodes before finally reaching the Agent.

Enabling the application to interact with the connector agent

For some applications, setup tasks are required to enable the connector agent to create, update, retrieve, or delete data in the application. Such setup tasks are described in the appropriate IBM documentation for specific connectors.

Starting the Remote Agent components

Remote Agent requires that the following be running:

Copyright IBM Corp. 1997, 2004