This section outlines some common problems that can arise when you develop message flows, and gives advice on how to deal with them:
If you are not the author of the UDNs, you can delete these files. The author, or vendor, should provide the 5.0 version of these UDNs in the form of an Eclipse feature and plugin. The plugin should contain icons, translation, palette definition, infopop, help, and so on.
See Editor preferences and localized settings for more information.
/flow2/schema1/SAMPLE.conxmi cannot be loaded. The following error was reported: schema1/SAMPLE.conxmi
This practice is beneficial because the passed reference supports content assistance and validation for ESQL. The message type content properties open and open defined are not used in validation; the assumption is that this property is set to closed.
There is an ESQL editor preference that lets you choose to ignore message reference mismatches, or to have them reported as a warning or an error. By default, this type of problem is reported as a warning, so you can still deploy the message flow.
This error causes the message to be directed to the failure terminal.
You might have accidentally connected the failure terminal of the MQInput node, instead of its out terminal, to the next node in the flow. The out terminal is the middle terminal of the three. Messages directed to an unconnected out terminal are discarded.
If the failure terminal of the MQInput node has been connected, for example, to an MQOutput node, these messages do not appear.
Connecting a node to a failure terminal of any node indicates that you have designed the message flow to deal with all error processing. If you connect a failure terminal to an MQOutput node, your message flow ignores any errors that occur.
See Using trace for more information.
This action produces a user trace entry from every node that the message visits, and only those nodes.
On the distributed platforms, you can retrieve the trace entries using the mqsireadlog command, format them using the mqsiformatlog command, and view the formatted records to check the path of the message through the message flow. See Commands for more information about these commands.
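For example, on a distributed platform the retrieve-and-format cycle might look like this (the broker name SAMPLE_BROKER and execution group name default are placeholders; substitute your own names):

```shell
# Retrieve the user trace log for the execution group into an XML file.
mqsireadlog SAMPLE_BROKER -u -e default -o trace.xml

# Format the XML log into readable text records.
mqsiformatlog -i trace.xml -o trace.txt
```

You can then open trace.txt and follow the path of the message from node to node.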
For z/OS, edit and submit the job BIPJLOG in the COMPONENTPDS to execute mqsireadlog and mqsiformatlog to process traces. See z/OS utility jobs for more information about the utility commands.
Send your message into the message flow again. Debug level trace produces much more detail about why the message is taking the route that it is taking, and you can then determine the reasons for the actions taken by the message flow.
Don't forget to turn tracing off when you have solved the problem, or performance will be adversely affected.
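As a sketch of the on/off cycle (the broker, execution group, and flow names here are placeholders):

```shell
# Enable debug-level user trace for one message flow.
mqsichangetrace SAMPLE_BROKER -u -e default -f MyFlow -l debug

# ... send the message through the flow, then read and format the log ...

# Turn user trace off again when the problem is solved.
mqsichangetrace SAMPLE_BROKER -u -e default -f MyFlow -l none
```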
In general:
If a path through the message flow does not fail, but does not end in an output queue (or another persistent store), the message flow has not failed; the message is therefore not backed out or put to an alternative destination (for example, the catch terminal, the dead-letter queue, or the queue's backout queue).
SQL0954C Not enough storage is available in the application heap to process the statement.

On z/OS, an SQLSTATE of HY014 might be returned with an SQL code of -99999, indicating that the DataFlowEngine process has reached the DB2 for z/OS limit of 254 prepared SQL statement handles for each process.
For performance reasons, after the statement is prepared, the statement and handle are saved in a cache to reduce the number of calls to the SQLPrepare function. If the statement is already in the cache, the statement handle is returned so that it can be re-executed with newly bound parameters.
The statement string is used to perform the cache lookup. If you use hardcoded SQL strings that differ slightly for each message, the statement is never found in the cache, so an SQLPrepare function call is always performed (and a new ODBC cursor is opened). When you use PASSTHRU statements, use parameter markers so that the same prepared SQL statement can be reused for each message processed, with the parameters being bound at run time. This approach is more efficient in its use of database resources and, for statements that are executed repeatedly, it is faster.
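For example, the two styles of PASSTHRU call compare as follows (the table, column, and message field names are hypothetical):

```esql
-- Avoid: the literal value is concatenated into the statement string,
-- so each message produces a different string, the cache lookup misses,
-- and SQLPrepare is called (and a new ODBC cursor opened) every time.
SET OutputRoot.XML.Result[] =
    PASSTHRU('SELECT * FROM ORDERS WHERE ORDERID = '''
             || InputRoot.XML.Order.Id || '''');

-- Prefer: a parameter marker keeps the statement string constant, so
-- the prepared statement is found in the cache and only the parameter
-- value is bound for each message.
SET OutputRoot.XML.Result[] =
    PASSTHRU('SELECT * FROM ORDERS WHERE ORDERID = ?',
             InputRoot.XML.Order.Id);
```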
However, it is not always possible to use parameter markers, or you might want to build the SQL statement strings dynamically at run time. This can lead to many unique SQL statements being cached. The cache itself does not grow very large, because the statements themselves are generally small, but many small memory allocations can lead to memory fragmentation.
For any given message flow, a typical node requires about 2 KB of thread stack space. By default, there is therefore a limit of approximately 500 nodes within a single message flow on UNIX platforms and 1000 nodes on Windows platforms. This limit might be higher or lower, depending on the type of processing performed within each node.
This environment variable setting applies to brokers, so the MQSI_THREAD_STACK_SIZE is used for every thread that is created within a DataFlowEngine process. If the execution group has many message flows assigned to it, and a large MQSI_THREAD_STACK_SIZE is set, the DataFlowEngine process will need a large amount of storage for the stack.
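For example, to raise the per-thread stack size, you might set the variable in the broker's environment and restart the broker so that new DataFlowEngine threads pick it up (the value and broker name here are placeholders):

```shell
# 1 MB per thread, expressed in bytes; choose a value that suits
# the deepest message flow in the execution group.
export MQSI_THREAD_STACK_SIZE=1048576
mqsistop SAMPLE_BROKER
mqsistart SAMPLE_BROKER
```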
Although a 500-node message flow might seem a limit that cannot be reached, you could execute this many nodes by putting a loop in your message flow, or by wiring an appropriate output terminal to an input terminal of a node that occurs earlier in the flow. You might choose to do this when processing repeating records within a message, with the loop-back controlled by a Filter node that checks for remaining records, and a Compute node that keeps a count of the number of records processed. Visually, the flow might consist of a small number of nodes. However, on each iteration of the loop, the nodes within the loop are executed again, as if they were new nodes in a sequential flow, and each execution adds to the stack. The number of nodes within the loop, multiplied by the number of iterations of the loop, could therefore reach the 500-node limit.
Do not use hard-wired looping, or increase MQSI_THREAD_STACK_SIZE, to work around this limit. If you need a message flow to loop, for example to process repeating records, use the ESQL PROPAGATE statement. This statement allows the looping to be performed within a Compute node, and allows the rest of the message flow to be driven before processing returns to the point in the Compute node where the PROPAGATE call was made. It also means that only a single repeating record needs to be passed to the rest of the flow at a time. When processing returns to the propagating Compute node, the storage for the previous set of output trees is freed. From a stack usage perspective, there is no stack build-up, because the message flow is unwound back to the point where the PROPAGATE call was issued; only one iteration of the loop is ever on the stack at any one time.
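A minimal sketch of this pattern, assuming a hypothetical input message that contains repeating Record elements under a Batch element:

```esql
CREATE COMPUTE MODULE PropagateRecords
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE i INTEGER;
        DECLARE total INTEGER;
        SET i = 1;
        SET total = CARDINALITY(InputRoot.XML.Batch.Record[]);
        WHILE i <= total DO
            -- Rebuild the output message on each iteration; the output
            -- trees are cleared after each PROPAGATE.
            SET OutputRoot.Properties = InputRoot.Properties;
            SET OutputRoot.MQMD = InputRoot.MQMD;
            SET OutputRoot.XML.Record = InputRoot.XML.Batch.Record[i];
            -- Drive the rest of the flow with this single record, then
            -- resume here; the previous output trees are freed.
            PROPAGATE;
            SET i = i + 1;
        END WHILE;
        -- Return FALSE: everything has already been propagated.
        RETURN FALSE;
    END;
END MODULE;
```

Because each iteration propagates one record and then unwinds back to the PROPAGATE call, only one record's worth of downstream processing is ever on the stack at a time.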
Video_Test#FCMComposite_1_1 ComIbmMQInputNode , Video_Test.VIDEO_XML_IN
JITC_COMPILEOPT=SKIP{org/eclipse/ui/views/tasklist/TaskListContentProvider}{resourceChanged}
If you need to parse or modify the data contained within a WebSphere MQ Everyplace message, use an MQeMbMsgObject. This provides a parallel with standard WebSphere MQ messages: you can set fields such as correlation ID, and there is a field that can be parsed using any WebSphere Business Integration Message Broker parser.
Related concepts
WebSphere MQ Everyplace messages
Message flows
Related tasks
Developing message flow applications
Handling errors in message flows
Using trace
Dealing with problems
Accessing the Properties tree
Related reference
WebSphere MQ Mobile Transport
Message flows
MQInput node