WebSphere Message Broker, Version 8.0.0.7 Operating Systems: AIX, HP-Itanium, Linux, Solaris, Windows, z/OS

Resolving implementation problems when developing message flows

Use the advice given here to help you to resolve some common problems that can arise when running message flows.

Messages are directed to the Failure terminal of an MQInput node

  • Scenario: Messages that are received at a message flow are directed immediately to the Failure terminal on the MQInput node (if it is connected), or are rolled back.
  • Explanation: When a message is received by WebSphere® MQ, an error is signalled if the following conditions are all true:
    • The MQInput node requests that the message content be converted (the Convert property is set to yes on the node).
    • The message consists only of an MQMD followed by the body of the message.
    • The message format, as specified in the MQMD, is set to MQFMT_NONE.

    This error causes the message to be directed to the Failure terminal.

  • Solution: In general, you do not need to request WebSphere MQ to convert the message content, because the broker processes messages in all code pages and encodings that are supported by WebSphere MQ. Set the Convert property to no to ensure that messages flow from the MQInput node to successive nodes in the message flow.

Error message BIP2211 is issued on z/OS by the MQInput node

  • Scenario: The following error message is issued by the MQInput node, indicating an invalid attribute:
    BIP2211: (Invalid configuration message containing attribute value [attribute value] 
    which is not valid for target attribute [target attribute name], 
    object [object name]; valid values are [valid values])
  • Explanation: On z/OS®, WebSphere MQ supports serialized access to shared resources, such as shared queues, through the use of a connection tag (serialization token) when an application connects to the queue manager that participates in a queue sharing group. In this case, an invalid attribute has been specified for the z/OS serialization token.
  • Solution: Check that the value that is provided for the z/OS serialization token conforms to the rules as described in the Application Programming Reference section of the WebSphere MQ Version 7 Information Center online.

Messages enter the message flow but do not exit

  • Scenario: You have sent messages into your message flow, and they have been removed from the input queue, but nothing appears at the other end of the message flow.
  • Explanation: Several situations might cause this error to occur. Consider the following scenarios to try to identify the situation that is causing your failure:
    1. Check your message flow in the WebSphere Message Broker Toolkit.

      You might have connected the MQInput node Failure terminal to a successive node instead of the Out terminal. The Out terminal is the middle terminal of the three. Messages directed to an unconnected Out terminal are discarded.

    2. If the Out terminal of the MQInput node is connected correctly to a successive node, check the broker's local error log for an indication that message processing has been ended because of problems. Additional messages give more detailed information.

      If the Failure terminal of the MQInput node has been connected (for example, to an MQOutput node), these messages do not appear.

      Connecting a node to a Failure terminal of another node indicates that you have designed the message flow to deal with all error processing. If you connect a Failure terminal to an MQOutput node, your message flow ignores all errors that occur.

    3. If the Out terminal of the MQInput node is connected correctly to a successive node, and the local error log does not contain error messages, turn user tracing on for the message flow:
      1. Open the WebSphere Message Broker Explorer.
      2. In the Navigator view, expand the Brokers folder.
      3. Right-click the message flow, and click User Trace > Normal.

      This action produces a user trace entry from only the nodes that the message visits.

      On distributed systems, you can retrieve the trace entries by using the mqsireadlog command, format them by using the mqsiformatlog command, and view the formatted records to check the path of the message through the message flow. (A command sketch follows this list.)

      For z/OS, edit and submit the BIPRELG job in COMPONENTPDS to run the mqsireadlog and mqsiformatlog commands to process traces.

    4. If the user trace shows that the message is not taking the expected path through the message flow, increase the user trace level to Debug by selecting the message flow, right-clicking it, and clicking User Trace > Debug.

      Send your message into the message flow again. Debug-level trace produces much more detail about why the message is taking a particular route, and you can then determine the reasons for the actions taken by the message flow.

      Do not forget to turn tracing off when you have solved the problem, because performance might be adversely affected.

    5. If the MQPUT command to the output queue that is defined on the MQOutput node is not successful (for example, the queue is full or put is disabled), the final destination of a message depends on:
      • Whether the Failure terminal of the MQOutput node is connected.
      • Whether the message is being processed transactionally (which in turn depends on the transaction mode setting of the MQInput node, the MQOutput node, and the input and output queues).
      • Whether the message is persistent or nonpersistent. When transaction mode is set to the default value of Automatic, message transactionality is derived from the way that it was specified at the input node. All messages are treated as persistent if transaction mode=yes, and as nonpersistent if transaction mode=no.
      In general, if a path is not defined for a failure (that is, neither the Catch terminal nor the Failure terminal of the MQInput node is connected):
      • Non-transactional messages are discarded.
      • Transactional messages are rolled back to the input queue to be tried again:
        • If the backout count of the message is less than the backout threshold (BOTHRESH) of the input queue, the message is tried again and sent to the Out terminal.
        • When the backout count equals or exceeds the backout threshold, one of the following might happen:
          • The message is placed on the backout queue, if one is specified (by using the BOQNAME attribute of the input queue).
          • The message is placed on the dead-letter queue, if there is no backout queue defined or if the MQPUT to the backout queue fails.
          • If the MQPUT to the dead-letter queue fails, or if there is no dead-letter queue defined, then the message flow loops continuously trying to put the message to the dead-letter queue.
      • If a path is defined for the failure, then that path defines the destination of the message. If both the Catch terminal and the Failure terminal are connected, the message is propagated through the Catch terminal.
    6. If your message flow uses transaction mode=yes on the MQInput node properties, and the messages are not appearing on an output queue, check the path of the message flow.
      • If the message flow has paths that are not failures (but that do not end in an output queue), either:
        • The message flow has not failed and the message is not backed out.
        • The message is put to an alternative destination (for example, the Catch terminal, the dead-letter queue, or the queue's backout queue).
      • Check that all possible paths reach a final output node and do not reach a dead end. For example, check that you have:
        • Connected the Unknown terminal of a Filter node to another node in the message flow.
        • Connected both the True and False terminals of a Filter node to another node in the message flow.
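
    The following commands are a minimal sketch of the tracing described in steps 3 and 4, for a distributed system. The broker name WBRK_BROKER, execution group default, and message flow MyFlow are placeholder names; substitute your own.

      mqsichangetrace WBRK_BROKER -u -e default -f MyFlow -l normal   # turn user trace on (use -l debug for step 4)
      mqsireadlog WBRK_BROKER -u -e default -o trace.xml              # retrieve the trace entries
      mqsiformatlog -i trace.xml -o trace.txt                         # format them for viewing
      mqsichangetrace WBRK_BROKER -u -e default -f MyFlow -l none     # turn tracing off when you have finished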

Your execution group is not reading messages from the input queues

  • Scenario: Your execution group has started, but is not reading messages from the specified input queues.
  • Explanation: A started execution group might not read messages from the input queues of the message flows because previous errors might have left the queue manager in an inconsistent state.
  • Solution: Complete the following steps; a command sketch follows the list:
    1. Stop the broker.
    2. Stop the WebSphere MQ listener.
    3. Stop the WebSphere MQ channel initiator.
    4. Stop the WebSphere MQ queue manager.
    5. Restart the WebSphere MQ queue manager.
    6. Restart the WebSphere MQ channel initiator.
    7. Restart the WebSphere MQ listener.
    8. Restart the broker.
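
    As a sketch, on a distributed system this sequence might look like the following, assuming a broker WBRK_BROKER on queue manager WBRK_QM; the exact listener and channel initiator commands depend on how those processes were started on your system.

      mqsistop WBRK_BROKER                  # 1. stop the broker
      endmqlsr -m WBRK_QM                   # 2. stop the listener
      endmqm WBRK_QM                        # 3./4. stop the queue manager (the channel initiator ends with it)
      strmqm WBRK_QM                        # 5. restart the queue manager
      runmqchi -m WBRK_QM &                 # 6. restart the channel initiator
      runmqlsr -m WBRK_QM -t tcp -p 1414 &  # 7. restart the listener (use your own port)
      mqsistart WBRK_BROKER                 # 8. restart the broker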

The execution group ends while processing messages

  • Scenario: While processing a series of messages, the execution group (DataFlowEngine) process size grows steadily without levelling off. This situation might cause the DataFlowEngine process to end, and then restart, if it cannot allocate more memory. The error message BIP2106 might be logged to indicate the out-of-memory condition.
    In addition, if you are using DB2® on distributed systems, you might get the message:
    SQL0954C  Not enough storage is available in the application heap to process the statement.

    On z/OS, an SQLSTATE of HY014 might be returned with an SQL code of -99999, indicating that the DataFlowEngine process has reached the DB2 z/OS process limit of 254 prepared SQL statement handles.

  • Explanation: When a database call is made from within a message flow node, the flow constructs the appropriate SQL, which is sent using ODBC to the database manager. As part of this process, the SQL statement is prepared using the SQLPrepare function, and a statement handle is acquired so that the SQL statement can be executed.

    For performance reasons, after the statement is prepared, the statement and handle are saved in a cache to reduce the number of calls to the SQLPrepare function. If the statement is already in the cache, the statement handle is returned so that it can be re-executed with newly bound parameters.

    The statement string is used to perform the cache lookup. If you use hardcoded SQL strings that differ slightly for each message, the statement is never found in the cache, so an SQLPrepare function is always performed (and a new ODBC cursor is opened). When you use PASSTHRU statements, use parameter markers so that the same prepared SQL statement can be used for each message processed, with the parameters being bound at run time; see the ESQL sketch after this section. This approach is more efficient in terms of database resources and, for statements that are executed repeatedly, it is faster.

    However, it is not always possible to use parameter markers, or you might want to dynamically build the SQL statement strings at run time. This situation potentially leads to many unique SQL statements being cached. The cache itself does not grow that large, because these statements themselves are generally not big, but many small memory allocations can lead to memory fragmentation.

  • Solution: If you encounter these types of situations, disable the caching of prepared statements by setting the MQSI_EMPTY_DB_CACHE environment variable to an arbitrary value. When this environment variable has been created, the prepared statements for that message flow are emptied at the end of processing for each message. This action might cause a slight performance degradation because every SQL statement is prepared.
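
    As an illustration of the parameter-marker approach, the following ESQL is a minimal sketch for a Compute node. The data source name MYDB, the table STOCKPRICES, and the field references are assumptions for the example only.

      -- Avoid: embedding a different literal per message defeats the statement cache
      -- PASSTHRU('SELECT PRICE FROM STOCKPRICES WHERE ID = ''' || id || '''')

      -- Prefer: one prepared statement, with the parameter bound at run time
      SET OutputRoot.XMLNSC.Result.Row[] =
        PASSTHRU('SELECT PRICE FROM STOCKPRICES WHERE ID = ?'
                 TO Database.MYDB
                 VALUES (InputRoot.XMLNSC.Request.Id));

    If parameter markers are not possible, set the MQSI_EMPTY_DB_CACHE environment variable (to any value) in the broker's environment before starting it, as described above.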

Your execution group hangs, or ends with a core dump

  • Scenario: While processing a message, an execution group either hangs with high CPU usage, or ends with a core dump. The stack trace from the core dump or abend file is large, showing many calls on the stack. Messages written to the system log might indicate "out of memory" or "bad allocation" conditions. The characteristics of the message flow in this scenario often include a hard-wired loop around some of the nodes.
  • Explanation: When a message flow thread executes, it requires storage to perform the instructions that are defined by the logic of its connected nodes. This storage comes from the execution group's heap and stack storage. The execution of a message flow is constrained by the stack size, the default value of which differs depending on the operating system.
  • Solution: If a message flow requires more stack storage than the default stack size allows, increase the stack size limit and then restart the brokers that are running on the system so that they use the new value; a sketch follows. For information about setting the stack size for your operating system, see System resources for message flow development.
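
    For example, on distributed systems the message flow thread stack size can be set with the MQSI_THREAD_STACK_SIZE environment variable; the 1 MB value below is illustrative only, so confirm the mechanism and an appropriate value for your platform.

      export MQSI_THREAD_STACK_SIZE=1048576   # stack size in bytes, set before the broker starts
      mqsistop WBRK_BROKER
      mqsistart WBRK_BROKER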

Your XSLTransform node is not working after deployment and errors are issued indicating that the style sheet could not be processed

  • Scenario: Your XSLTransform node is not working after deploying resources, and errors are displayed indicating that the style sheet could not be processed.
  • Solution:
    • If the broker cannot find the style sheet or XML files that are required, migrate the style sheets or XML files with relative path references.
    • If the contents of a style sheet or XML file are damaged and therefore no longer usable (for example, if a file system failure occurs during a deployment), redeploy the damaged style sheet or XML file.

Output messages are not sent to expected destinations

  • Scenario: You have developed a message flow that creates a destination list in the LocalEnvironment tree. The list might contain queues for the MQOutput node, labels for a RouteToLabel node, or URLs for an HTTPRequest node. However, the messages are not reaching these destinations, and there are no error messages.
  • Solution:
    • Check that you have set Compute mode to a value that includes the LocalEnvironment in the output message, for example All. The default setting of Compute mode is Message, and all changes that you make to LocalEnvironment are lost.
    • Check your ESQL statements. The content and structure of LocalEnvironment are not enforced, so the ESQL editor (and content assist) does not provide guidance for field references, and you might have specified one or more of these references incorrectly; compare your references with the sketch after this list.

      Some example procedures to help you set up destination lists are provided in Populating Destination in the local environment tree. You can use these procedures unchanged, or modify them for your own requirements.
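
    As a point of comparison, the following ESQL is a minimal sketch for a Compute node whose Compute mode includes LocalEnvironment (for example, All); the queue and label names are placeholders.

      -- Destination list for an MQOutput node with Destination mode set to Destination List
      SET OutputLocalEnvironment.Destination.MQ.DestinationData[1].queueName = 'OUT.QUEUE.A';
      SET OutputLocalEnvironment.Destination.MQ.DestinationData[2].queueName = 'OUT.QUEUE.B';

      -- Label list for a RouteToLabel node
      SET OutputLocalEnvironment.Destination.RouterList.DestinationData[1].labelName = 'ProcessOrder';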

You experience problems when sending a message to an HTTP node's URL

  • Scenario: Sending a message to an HTTP node's URL causes a timeout, or the message is not sent to the correct message flow.
  • Explanation: The following rules are true when URL matching is performed:
    • There is one-to-one matching of HTTP requests to HTTPInput nodes. For each HTTP request, only one message flow receives the message, even if two message flows are listening on the same URL. Similarly, you cannot predict which of several MQInput nodes listening on the same queue will receive a particular message.
    • Messages are sent to wildcard URLs only if no other URL is matched. Therefore a URL of /* receives all messages that do not match another URL.
    • Changing a URL in an HTTPInput node does not automatically remove the entry from the HTTP listener. For example, if a URL /A is used first, then changed to a URL of /B, the URL of /A is still used to listen on, even though there is no message flow to process the message. This incorrect URL does get removed after the broker has been stopped and restarted twice.
  • Solution: To find out which URL the broker is currently listening on, look at the file wsplugin6.conf in the following location:
    • On Linux and UNIX: /var/mqsi/components/broker_name/config
    • On Windows: %ALLUSERSPROFILE%\IBM\MQSI\components\broker_name\config, where %ALLUSERSPROFILE% is the environment variable that defines the system working directory. The default directory depends on the operating system. Your computer might not use the standard value; use %ALLUSERSPROFILE% to ensure that you access the correct location.

    If problems persist, empty wsplugin6.conf, restart the broker, and redeploy the message flows.

When using secure HTTP connections, you change a DNS host's destination but the broker is using a cached DNS host definition

  • Scenario: You are using a broker with secure HTTP connections that use the Java™ virtual machine (JVM). You have changed a DNS destination, but the broker is using a cached DNS host definition, therefore you have to restart the broker to use the new definition.
  • Explanation: By default, Java caches the host lookup from DNS, which is not appropriate if you want to look up the host name each time or if you want to cache it for a limited amount of time. This situation occurs only when you use SSL connections. (When using a secure HTTPS connection, the HTTPRequest node uses the SSL protocol, which issues Java calls, whereas a non-SSL protocol uses native calls.)

    To avoid this situation, you can empty the cache on the JVM by setting the networkaddress.cache.ttl property to zero. This property dictates the caching policy for successful name lookups from the name service. The value is specified as an integer to indicate the number of seconds for which to cache the successful lookup. The default value of this property is -1, which indicates that the successful DNS lookup value is cached indefinitely in the JVM. If you set this property to 0 (zero), the successful DNS lookup is not cached.

  • Solution: To pick up DNS entry changes without the need to stop and restart the broker and JVM, disable DNS caching. Edit file $JAVA_HOME/jre/lib/security/java.security, and set the value of the networkaddress.cache.ttl property to 0 (zero).
    Then, run the following command:
    mqsichangeproperties <BrokerName> -e <ExecutionGroup> -o ComIbmJVMManager -n jvmSystemProperty -v "-Dsun.net.inetaddr.ttl=0"
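
    For reference, after the edit the relevant line in $JAVA_HOME/jre/lib/security/java.security reads:

      networkaddress.cache.ttl=0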

The TimeoutControl node issues error message BIP4606 or BIP4607 when the timeout request start time that it receives is in the past

  • Scenario: When a TimeoutControl node receives a timeout request message that contains a start time in the past, it issues error message BIP4606 or BIP4607: The Timeout Control Node '&2' received a timeout request that did not contain a valid timeout start date/time value.
  • Explanation: The start time in the message can be calculated by adding an interval to the current time. If a delay occurs between the node that calculates the start time and the TimeoutControl node, the start time in the message will have passed by the time it reaches the TimeoutControl node. If the start time is more than approximately five minutes in the past, a warning is issued and the TimeoutControl node rejects the timeout request. If the start time is less than five minutes in the past, the node processes the request as if it were immediate.
  • Solution: Ensure that the start time in the timeout request message is a time in the future.
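
    As a sketch, the following ESQL builds a timeout request whose start time is 30 seconds in the future. The field names assume the TimeoutControl node's default Request location of TimeoutRequest under XMLNSC; adjust them to match your node's configuration.

      DECLARE startAt TIMESTAMP CURRENT_TIMESTAMP + INTERVAL '30' SECOND;
      SET OutputRoot.XMLNSC.TimeoutRequest.Action     = 'SET';
      SET OutputRoot.XMLNSC.TimeoutRequest.Identifier = 'MyTimeoutRequest';
      SET OutputRoot.XMLNSC.TimeoutRequest.StartDate  = CAST(startAt AS CHARACTER FORMAT 'yyyy-MM-dd');
      SET OutputRoot.XMLNSC.TimeoutRequest.StartTime  = CAST(startAt AS CHARACTER FORMAT 'HH:mm:ss');
      SET OutputRoot.XMLNSC.TimeoutRequest.Count      = 1;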

You are using a TimeoutControl node with a TimeoutNotification node, with multiple clients running concurrently, and messages appear to be dropped

  • Scenario: You are using a TimeoutControl node with a TimeoutNotification node, with multiple clients running concurrently, and messages appear to be dropped. In the timeout request message, allowOverwrite is set to TRUE.
  • Explanation: If multiple clients are running concurrently, and allowOverwrite is set to TRUE in the timeout request message, messages can overwrite each other.
  • Solution: Ensure that different TimeoutNotification nodes that are deployed on the same broker do not share the same unique identifier.

Error message BIP5347 is issued on AIX when you run a message flow that uses a message set

  • Scenario: Error message BIP5347 (MtilmbParser2: RM has thrown an unknown exception) is issued on AIX® in either of these circumstances:
    • When you are deploying a message set
    • When you are running a message flow that uses a message set
  • Explanation: BIP5347 is typically caused by a database exception, and it is issued when an execution group tries to load an MRM dictionary for use by a message flow. This process involves two steps:
    1. The execution group retrieves the dictionary and wire format descriptors from the broker data store.
    2. The execution group stores the dictionary in the memory that a message flow would use to process an MRM message.

    BIP5347 is typically issued during step 1. This problem can appear to be intermittent; if you restart the execution group, the message is sometimes processed correctly.

    BIP5347 might also be caused by the presence of a datetime value constraint in the message set, which causes the error each time the message set is deployed.

  • Solution: To identify the cause of the error, capture a service level debug trace to confirm that the database exception is occurring.
    • If the error is caused by the presence of a datetime value constraint, a message similar to the following message appears in the service level debug trace (the exact message depends on the datetime constraint in the message set):
      Unable to parse datetime internally, 9, 2001-12-17T09:30:47.0Z, 
      yyyy-MM-dd'T'HH:mm:ss.SZZZ  
      This error occurs because the MRM element in question has a datetime value that is not compatible with the datetime format string, so the dictionary is rejected. To solve this problem, ensure that the datetime value is compatible with the datetime format string.

Error message BIP2130 is issued with code page value of -1 or -2

  • Scenario: The following error message is issued:
    BIP2130: Error converting a character string to or from codepage [code page value]
    where [code page value] is either -1 or -2. However, you have not specified a code page of -1 or -2 in your message tree; you have instead used one of the WebSphere MQ constants MQCCSI_EMBEDDED or MQCCSI_INHERIT.
  • Explanation: The WebSphere MQ constants MQCCSI_EMBEDDED and MQCCSI_INHERIT are resolved when the whole of the message tree is serialized to produce the WebSphere MQ bit stream. This happens when the message is put on the WebSphere MQ transport. Until that time, these values exist in the message tree as either -1 (for MQCCSI_EMBEDDED) or -2 (for MQCCSI_INHERIT). If one or more parts of the message tree are serialized independently, such as with a ResetContentDescriptor node or ESQL ASBITSTREAM function, this error occurs.
  • Solution: You do not have to set MQCCSI_EMBEDDED or MQCCSI_INHERIT in the message tree's CodedCharSetId field. You can achieve the same result by explicitly setting the required CodedCharSetId to the previous header's CodedCharSetId value. For example, you would need to replace:
    SET OutputRoot.MQRFH2.(MQRFH2.Field)CodedCharSetId = MQCCSI_INHERIT;
    with
    SET OutputRoot.MQRFH2.(MQRFH2.Field)CodedCharSetId = InputRoot.MQMD.CodedCharSetId;
    where the MQMD folder is the header preceding the MQRFH2 header.

The execution group restarts before an MQGet node has retrieved all messages

  • Scenario: You have created a message flow that contains an MQGet node. Not all of the messages are retrieved from the queue because the execution group restarts before the node has retrieved all the messages. No abend files are generated.
  • Explanation: In WebSphere Message Broker, nested or recursive processing can cause extensive use of the stack. Message flow processing occurs in a loop until the MQGet node has retrieved all the messages from the queue. Each time that processing returns to the MQGet node, the stack size increases.
  • Solution: Use a PROPAGATE statement. The statement propagates each message through the message flow in a loop, but each time that processing returns to the PROPAGATE statement, the stack is cleared.

    Use an ESQL variable (for example, set Environment.Complete to true) in the environment tree to terminate the ESQL loop, stop the propagations, and wait for the next trigger message. If you need to store content from the messages, store it in the environment tree, because other trees are deleted when message flow processing returns to the PROPAGATE statement. For more information about how to use this statement, see PROPAGATE statement; a minimal sketch follows.
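
    The following Compute-node ESQL is a minimal sketch of this pattern. The wiring (this node's Out terminal connected to the MQGet node) and the Environment.Complete flag, set by logic downstream of the MQGet node's No Message terminal, are assumptions for the example.

      CREATE FUNCTION Main() RETURNS BOOLEAN
      BEGIN
        SET Environment.Complete = FALSE;
        WHILE Environment.Complete = FALSE DO
          -- PROPAGATE clears the output trees on return, so rebuild them each time
          SET OutputRoot = InputRoot;
          PROPAGATE;  -- drives the MQGet node once; the stack unwinds on each return
        END WHILE;
        RETURN FALSE;  -- nothing further to propagate automatically
      END;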

Copyright IBM Corporation 1999, 2016.

        
Last updated: 2016-05-23 14:47:37


Task topic | Version 8.0.0.7 | au16534_