WebSphere MQ is messaging software that enables communication between InterChange Server and the connectors.
This section describes how to install and configure WebSphere MQ, used natively or as a Java Messaging Service (JMS) provider for use in an InterChange Server environment. See "Configuring WebSphere MQ for JMS" to configure WebSphere MQ as a JMS provider.
Use JMS when the following conditions apply to your environment:
Under the described conditions, use WebSphere MQ as a JMS provider rather than natively, because native MQ relies on CORBA for its administration and other components; when WebSphere MQ is used as a JMS provider, there is no reliance on CORBA. Additionally, native MQ persists only incoming events to the server.
Install WebSphere MQ on the same network as InterChange Server. This installation involves the following general steps:
Each of these steps is described in more detail in the subsequent sections.
The WebSphere MQ software is installed in the mqm subdirectory of the /WebSphere_MQ_inst_home directory (for the components of WebSphere MQ) and of the /var directory (for the working data). Therefore, these directories (or filesystems) must have sufficient space to hold WebSphere MQ.
It is recommended that you create and mount the following directories as file systems: /var/mqm, /var/mqm/log, /var/mqm/errors. It is also recommended that the logs be placed on a different physical drive from the one used for the queues (/var/mqm). Table 9 lists the space requirements for the WebSphere MQ components.
For WebSphere MQ to run, it needs a special user account called mqm.
Platform-specific details are provided for AIX, Solaris, HP-UX, and Linux (Red Hat and SuSE).
On many UNIX systems, leaving an asterisk (*) as the second field in the entry for mqm in the /etc/passwd file disables the account. Consult your system manual if you have other login verification mechanisms in place.
Ensure that the mqm group is the default group for the WebSphere business integration administrator (admin by default).
The default group for a user account is in the fourth field in the entry for the WebSphere business integration administrator account. This field needs to contain the group number of the mqm group. To obtain the group number, you can run the following command from the shell prompt:
grep mqm /etc/group
The group number is in the third field of the resulting line of output. Insert this group number into the default group field of the WebSphere business integration administrator's entry in /etc/passwd.
While you are root, you can use the groups command to verify that mqm is listed in the output of groups to which root has membership. For information on the WebSphere business integration administrator, see Creating the IBM WebSphere business integration administrator account.
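As a sketch of the lookup described above, the group number can be pulled out of the /etc/group entry with awk. The entry and group number below are illustrative, not values from your system; on a real system you would read /etc/group directly (for example, with grep mqm /etc/group).

```shell
# Illustrative /etc/group entry for mqm (the group number 1001 is an
# assumed example value).
entry="mqm:!:1001:admin"

# The group number is the third colon-separated field.
gid=$(printf '%s\n' "$entry" | awk -F: '{print $3}')
echo "mqm group number: $gid"

# On Linux you could then set the default group without hand-editing
# /etc/passwd (the user name 'admin' is the default mentioned above):
#   usermod -g mqm admin
```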
On Red Hat Linux, it is recommended that you change the environment variable LD_ASSUME_KERNEL by adding the following line to the .bash_profile of the user that will install WebSphere MQ (mqm):
export LD_ASSUME_KERNEL=2.4.19
You should then execute the .bash_profile by issuing the command
. .bash_profile
from a command prompt.
It is recommended that you use the following installation location for the WebSphere MQ software:
If the /WebSphere_MQ_inst_home and /var filesystems do not have enough space, you can create an extract directory for the WebSphere MQ software (such as /home_dir/mqm) and install the software into this directory. You must create symbolic links from the /WebSphere_MQ_inst_home/mqm and /var/mqm directories to this extract directory.
For more information, see Determining space requirements.
IBM delivers the supported version of the WebSphere MQ software on separate CD-ROMs. These CDs contain several directories of software to be installed on your system.
To verify the version of WebSphere MQ in your current environment, run the mqver command from the /WebSphere_MQ_inst_home/mqm/bin directory.
The following steps provide a brief overview of the WebSphere MQ installation process:
To install WebSphere MQ in the /WebSphere_MQ_inst_home and /var directories on Solaris:
pkgadd -d /mq_cd/mq_solaris
where mq_cd is the mount point of the WebSphere MQ CD.
# Default conversions are enabled by creating two lines similar to the
# two following, but removing the # character which indicates a comment.
default 0 500 1 1 0
default 0 850 1 2 0
The business integration system requires that you configure queues with the properties listed below. Specify the name of each of these queues as a standard property in the connector's configuration file.
Proceed to Starting InterChange Server for the first time.
Programs are invoked when a connection is made at a certain port.
Configuring the WebSphere MQ Listener for a single instance of InterChange Server--One instance of InterChange Server on a UNIX machine uses the WebSphere MQ Queue Manager. The WebSphere MQ Listener uses the default port 1414. Therefore, you must edit the system files listed in Setting up ports to configure port 1414 to start the WebSphere MQ Listener.
To configure port 1414 for the WebSphere MQ Listener:
WebSphereMQ 1414/tcp # WebSphere MQ channel listener
Use tabs between the columns of information so that they are aligned with existing /etc/services entries.
WebSphereMQ stream tcp nowait mqm /WebSphere_MQ_inst_home/mqm/bin/amqcrsta amqcrsta -m your-queue-name.queue.manager
where your-queue-name is the name of your WebSphere MQ Queue Manager.
This entire command is a single line in the /etc/inetd.conf file. Use tabs between fields so that they line up with previous entries in the file. Enter this line exactly as shown. The contents of this file are case-sensitive.
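The service name in the first field of the /etc/inetd.conf entry must match the name used in /etc/services. A minimal consistency check is sketched below; it uses captured copies of the two sample lines so that it is self-contained, and the queue manager name is a placeholder.

```shell
# Sample /etc/services and /etc/inetd.conf lines (placeholders for the
# entries you added above; on a real system, read the files themselves).
services_line="WebSphereMQ 1414/tcp # WebSphere MQ channel listener"
inetd_line="WebSphereMQ stream tcp nowait mqm /WebSphere_MQ_inst_home/mqm/bin/amqcrsta amqcrsta -m your-queue-name.queue.manager"

# The service name is the first whitespace-separated field of each line.
s=$(printf '%s\n' "$services_line" | awk '{print $1}')
i=$(printf '%s\n' "$inetd_line" | awk '{print $1}')
[ "$s" = "$i" ] && echo "service names match: $s"
```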
ps -ef | grep inetd
Do not use the process ID of the output line that has "grep inetd" in the last column.
kill -HUP proc_id
For example, suppose the ps command in step 3 generates the following output for the inetd process:
root 144 1 0 17:01:40 ? 0:00 /usr/sbin/inetd -s
Because the second column is the process ID, the kill command is:
kill -HUP 144
Alternatively, you can reboot the system in order that the inetd daemon rereads the /etc/inetd.conf file.
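The process-ID step above can also be scripted. The sketch below reuses the sample ps output shown earlier so it is self-contained; the kill command is only echoed, since on a live system you would run it against the real inetd process.

```shell
# Sample line of 'ps -ef' output for inetd (from the example above).
sample="root   144     1  0 17:01:40 ?     0:00 /usr/sbin/inetd -s"

# The process ID is the second column.
pid=$(printf '%s\n' "$sample" | awk '{print $2}')
echo "kill -HUP $pid"
# On a live system you would actually run:  kill -HUP "$pid"
```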
Configuring WebSphere MQ Listeners for multiple instances of InterChange Server --Multiple instances of InterChange Server can share the same WebSphere MQ Queue Manager. However, if one of these instances needs to stop the Queue Manager, all other instances lose access to the Queue Manager. For example, if the development and quality-control instances of InterChange Server are on the same machine, you might want to configure these instances so that you can stop and start the Queue Manager for one of these instances without affecting the other.
The WebSphere MQ Listener listens for WebSphere MQ Queue Managers on a TCP/IP port. However, you cannot have more than one Queue Manager on a TCP/IP port. Therefore, to have more than one Queue Manager on a computer, you must configure each Queue Manager on a separate port. For each port, you must edit the system files listed in Setting up ports to configure the ports that start the WebSphere MQ Listeners.
To configure multiple WebSphere MQ Listeners:
For example, to configure ports 1414 and 1415 for two WebSphere MQ Listeners, add the following lines to /etc/services:
WebSphereMQ1 1414/tcp # WebSphere MQ listener for q1.queue.manager
WebSphereMQ2 1415/tcp # WebSphere MQ listener for q2.queue.manager
Use tabs between the columns of information so that they are aligned with existing /etc/services entries.
For example, to start up two Queue Managers (q1.queue.manager and q2.queue.manager), add the following lines to /etc/inetd.conf:
WebSphereMQ1 stream tcp nowait mqm /WebSphere_MQ_inst_home/mqm/bin/amqcrsta amqcrsta -m q1.queue.manager
WebSphereMQ2 stream tcp nowait mqm /WebSphere_MQ_inst_home/mqm/bin/amqcrsta amqcrsta -m q2.queue.manager
Use tabs between fields so that they line up with previous entries in the file. Enter this line exactly as shown. The contents of the file are case-sensitive.
The Installer assumes that the Queue Manager name includes the name of the local InterChange Server. If you establish a Queue Manager with a different name, the WebSphere business integration administrator must enter this name as part of the installation process.
InterChange Server assumes that it communicates with a WebSphere MQ Queue Manager on port 1414. If InterChange Server is to communicate with a Queue Manager on a port other than 1414, the WebSphere business integration administrator must, as part of the InterChange Server installation, add the PORT configuration parameter to the MESSAGING section of the InterchangeSystem.cfg file. To set this PORT parameter, the WebSphere business integration administrator must know the port number to assign to it.
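As an illustration only, a MESSAGING section with an added PORT parameter might look like the following. This is a hypothetical fragment: the port number 1415 is an assumed value, and the bracketed-section syntax should be checked against the InterchangeSystem.cfg file shipped with your release.

    [MESSAGING]
    PORT = 1415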
You can configure the WebSphere MQ queues needed for your adapter, using any of the following methods:
WebSphere Business Integration Adapters provides a set of script files that you can run to configure the WebSphere MQ queues needed for the adapters you are deploying.
The following script files are located in ProductDir/mqseries:
The contents of the crossworlds_mq.tst file are shown below. You must edit this file manually. The top portion of the file contains the native MQ information and the bottom portion contains the JMS-specific information. You can use this one file to specify the queues needed by each adapter you are configuring. Edit the file as follows:
DEFINE QLOCAL(IC/SERVER_NAME/DestinationAdapter)
DEFINE QLOCAL(AP/DestinationAdapter/SERVER_NAME)
These apply only to business integration systems that use WebSphere InterChange Server.
*******************************************************************/
*                                                                 */
* Define the local queues for all Server/Adapter pairs.           */
* For MQ queues, they must have the following definition:         */
* Application = DEFINE QLOCAL (AP/AdapterName/ServerName)         */
*                                                                 */
* Example:                                                        */
* DEFINE QLOCAL(AP/ClarifyConnector/CrossWorlds)                  */
*                                                                 */
* DEFINE QLOCAL(AP/SAPConnector/CrossWorlds)                      */
*                                                                 */
* If your server is named something different than 'CrossWorlds' */
* make sure to change the entries to reflect that.                */
********************************************************************/
DEFINE QLOCAL(IC/SERVER_NAME/DestinationAdapter)
DEFINE QLOCAL(AP/DestinationAdapter/SERVER_NAME)
********************************************************************/
* For each JMS queue (delivery Transport is JMS),                  */
* default values follow the convention:                            */
* AdapterName/QueueName                                            */
********************************************************************/
DEFINE QLOCAL(AdapterName/AdminInQueue)
DEFINE QLOCAL(AdapterName/AdminOutQueue)
DEFINE QLOCAL(AdapterName/DeliveryQueue)
DEFINE QLOCAL(AdapterName/RequestQueue)
DEFINE QLOCAL(AdapterName/ResponseQueue)
DEFINE QLOCAL(AdapterName/FaultQueue)
DEFINE QLOCAL(AdapterName/SynchronousRequestQueue)
DEFINE QLOCAL(AdapterName/SynchronousResponseQueue)
********************************************************************/
* Define the default CrossWorlds channel type                      */
********************************************************************/
DEFINE CHANNEL(CHANNEL1) CHLTYPE(SVRCONN) TRPTYPE(TCP)
********************************************************************/
* End of CrossWorlds MQSeries Object Definitions                   */
********************************************************************/
For information about configuring queues using WebSphere MQ commands, see the WebSphere MQ: System Administration Guide and the WebSphere MQ: Script (MQSC) Command Reference.
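The edited .tst file is typically applied to a queue manager with the runmqsc command. The sketch below generates a two-queue fragment in the same SERVER_NAME/DestinationAdapter pattern shown above; the adapter, server, and queue manager names are all illustrative, so the runmqsc call itself appears only in a comment.

```shell
# Generate an MQSC fragment for one adapter/server pair (names are
# illustrative placeholders, not values from a real system).
adapter="DestinationAdapter"
server="CrossWorlds"
cat > /tmp/cw_queues.tst <<EOF
DEFINE QLOCAL(IC/$server/$adapter)
DEFINE QLOCAL(AP/$adapter/$server)
EOF

# On a live system the file would be applied with:
#   runmqsc your.queue.manager < /tmp/cw_queues.tst
echo "queue definitions generated: $(grep -c QLOCAL /tmp/cw_queues.tst)"
```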
Proceed to Starting InterChange Server for the first time.
WebSphere MQ makes use of semaphores and shared memory. Most likely, the default Solaris or HP-UX kernel configuration is not adequate to support these features. Therefore, you must edit the kernel configuration file, /etc/system, so that WebSphere MQ can run correctly.
Table 17 lists the kernel configuration parameters for Solaris and Table 18 lists the kernel configuration parameters for HP-UX. These parameters are added to the lower section of the /etc/system file.
Table 17. Solaris kernel configuration settings for WebSphere MQ
set msgsys:msginfo_msgmap=1026
set msgsys:msginfo_msgmax=4096
set msgsys:msginfo_msgmnb=4096
set msgsys:msginfo_msgmni=50
set semsys:seminfo_semaem=16384
set semsys:seminfo_semmap=1026
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmns=16384
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semmsl=100
set semsys:seminfo_semopm=100
set semsys:seminfo_semume=256
set shmsys:shminfo_shmmax=209715200
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmseg=1024
Table 18. HP-UX kernel configuration settings for WebSphere MQ
set Shmmax=0x3908b100
set Shmseg=1024
set Shmmni=1024
set Shmem=1
set Sema=1
set Semaem=16384
set Semvmx=32767
set Semmns=16384
set Semmni=2048
set Semmap=2050
set Semmnu=2048
set Semume=256
set Msgmni=1025
set Msgtql=2048
set Msgmap=2050
set Msgmax=65535
set Msgmnb=65535
set Msgssz=16
set Msgseg=32767
set Maxusers=400
set Max_thread_proc=4096
set maxfiles=2048
set nfile=10000
If you incorrectly enter a kernel configuration parameter in the /etc/system file, you see an error message when the system reboots. In this case, fix the error in /etc/system and reboot the system again.
For every connector configured for use with WebSphere MQ for JMS transport, use the Connector Configurator tool to edit the local connector's configuration file.
Specify a queue manager and configure the property values as listed in Table 19. In this example, JmsConnector is the connector being configured.
Table 19. Property Values for JMS Transport
Property | Value
---|---
AdminInQueue | JMSCONNECTOR/ADMININQUEUE
AdminOutQueue | JMSCONNECTOR/ADMINOUTQUEUE
DeliveryQueue | JMSCONNECTOR/DELIVERYQUEUE
FaultQueue | JMSCONNECTOR/FAULTQUEUE
RequestQueue | JMSCONNECTOR/REQUESTQUEUE
ResponseQueue | JMSCONNECTOR/RESPONSEQUEUE
SynchronousRequestQueue | JMSCONNECTOR/SYNCHRONOUSREQUESTQUEUE
SynchronousResponseQueue | JMSCONNECTOR/SYNCHRONOUSRESPONSEQUEUE
You can leave the UserName and Password blank unless you are accessing the queue manager using the client mode.
Reload the repository and restart InterChange Server and the connector after you make these changes.
You might need to revise the default configuration of your WebSphere MQ message queues in order to handle large numbers of messages or objects of a large size.
To revise the maximum allowable depth of the message queue and the maximum allowable length of messages, set values for the MAXDEPTH and MAXMSGL properties in the appropriate .tst file, as described in the following procedure.
WebSphere MQ message queues are set up by default to hold up to 5000 messages. During times of high traffic volumes or an initial conversion to InterChange Server, this default might be exceeded, causing errors and preventing connectors from posting messages to ICS. To help avoid this, you can increase the maximum number of messages allowed in a queue and the maximum number of uncommitted messages allowed across all queues. The preferred values might vary according to your specific circumstances. For example, if you are performing an initial conversion to InterChange Server, it is recommended that you set the maximum queue depth to at least 20,000 messages.
To change the MAXDEPTH setting, after each queue definition, add the following options:
ALTER QLOCAL (QUEUENAME) MAXDEPTH (DEPTH DESIRED)
For example:
DEFINE QLOCAL(AP/EMailConnector/Server_Name)
ALTER QLOCAL(AP/EMailConnector/Server_Name) MAXDEPTH(20000)
You can also alter the queue manager to allow more than the default number of uncommitted messages across all queues. The number of allowed uncommitted messages should equal the sum of the maximum queue depths (MAXDEPTH) of all queues. The memory used by InterChange Server does not increase unless the number of uncommitted messages increases.
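To illustrate the sizing rule above, the sketch below sums assumed per-queue MAXDEPTH values to produce a MAXUMSGS setting; the three depths are illustrative, not values from a real configuration.

```shell
# Illustrative MAXDEPTH values for three queues (assumed example data).
depths="20000 20000 5000"

# MAXUMSGS should be the sum of the per-queue maximum depths.
total=0
for d in $depths; do
  total=$((total + d))
done
echo "ALTER QMGR MAXUMSGS($total)"
```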
To change the MAXUMSGS setting, add the following line:
ALTER QMGR MAXUMSGS (NUMBER)
For example:
ALTER QMGR MAXUMSGS (400000)
Modify this value only if you know you have business objects larger than the default MAXMSGL value of 4 MB. To change the MAXMSGL value, add the following command after each queue definition:
ALTER QLOCAL (QUEUENAME) MAXMSGL (Maximum number of bytes to allow in a message)