WebSphere Message Broker, Version 8.0.0.7 Operating Systems: AIX, HP-Itanium, Linux, Solaris, Windows, z/OS


Problems when developing message flows with file nodes

Use the advice given here to help you to resolve some common problems that can arise when you develop message flows that contain file nodes.

A file node flow stops processing files and error message BIP3331 or BIP3332 is issued

  • Scenario: Files in the specified input directory are not being processed. Error message BIP3331 or BIP3332 is issued.
  • Explanation: The error messages explain that the FileInput node encountered an exception and could not continue file processing. This problem can occur when the FileInput node cannot move files from its input directory to the archive or backout directory, either because of file system permissions or because another file in the target directory prevents the file from being transferred. In this situation, the node cannot process input files without losing data, so processing stops. Two messages are issued; the first is either BIP3331 or BIP3332, which refers to a second message that describes the cause of the problem in more detail.
  • Solution: If the first error message issued is BIP3331, the FileInput node has been unable to complete processing of the file. Resolve the problem as follows:
    1. Stop the flow.
    2. Find the error message that is referenced in the BIP3331 message. This second error message identifies the problem and the files and directories that are causing it.
    3. Ensure that the broker has the required access to these files and directories.
    4. You might need to move, delete, or rename files in the archive or transit directories.
    5. Check whether the input file causing the problem has been successfully processed (except for being moved to the archive or backout directory). If it has been successfully processed, remove it from the input directory.
    6. Restart the flow.

    If the first error message issued is BIP3332, you do not need to stop the flow because the FileInput node has detected the problem before starting file processing. Find the error message referenced in the BIP3332 message. This second error message identifies the problem and the files and directories causing it.

During processing of a large file, error message BIP2106 is issued or the broker stops because of insufficient memory

  • Scenario: Large input files cause the broker to issue messages, or stop, because insufficient memory is available.
  • Explanation: The FileInput node can process very large files. Subsequent processing in the flow attached to its Out terminal might require more memory than is available to the broker.
  • Solution: By default, the FileInput node imposes a limit of 100 MB on the records propagated to the attached flow. If your application needs to access large amounts of data, you might need to increase its available memory and reduce the number of available instances. See Resolving problems with performance for more information. If your application needs to process messages larger than 100 MB, you can override the FileInput node record size limit by taking the following actions:
    • Before starting the broker, set the environment variable MQSI_FILENODES_MAXIMUM_RECORD_LENGTH to the required limit as an integer number of bytes. For example, to set a limit of 256 MB (268435456 bytes) on Windows:
      SET MQSI_FILENODES_MAXIMUM_RECORD_LENGTH=268435456
      On Linux and UNIX systems, use:
      export MQSI_FILENODES_MAXIMUM_RECORD_LENGTH=268435456
    • When the broker first initializes a FileInput node, it will use the environment variable value instead of the default value of 100 MB. Subsequent changes to the environment variable value will not affect the broker limit until the broker is restarted.
    • If the Record detection property is set to Whole File, the limit applies to the file size. If the Record detection property is set to Fixed Length or Delimited, the limit applies to the record size. The FileOutput node is not affected by changes to this limit.
    • Note: Increasing the FileInput node record size limit might require additional broker resources, particularly memory. You must thoroughly test and evaluate your broker's performance when processing these files. The number of factors that are involved in handling large messages makes it impossible to provide specific broker memory requirements.
    You can also reduce the memory required to process the file's contents in the following ways:
    • If you are processing a whole file as a single BLOB, split it into smaller messages by specifying on the Records and Elements tab of the FileInput node's properties:
      • A value of Fixed Length in the Record detection property
      • A large value in the Length property, for example 1000000.
    • If you are writing the file's contents to a single output file, specify Record is Unmodified Data in the FileOutput node's Record definition property; this reassembles the records in an output file of the same size as the input file. Wire the FileInput node's End of Data terminal to the FileOutput node's Finish File terminal. Configure the flow to have no additional instances to ensure that the output records arrive in sequence.
    • If you are processing large records using the techniques shown in the Large Messaging sample, ensure that you do not cause the execution group to access the whole record. Avoid specifying a value of $Body in the Pattern property of a Trace node.
    • If you have specified a value of Parsed Record Sequence in the FileInput node's Record definition property, the broker does not limit the size of the record. If subsequent nodes in the message flow try to access an entire large record, the broker might not have sufficient memory to do so and might stop. Use the techniques in the Large Messaging sample to limit the memory required to handle very large records.

Missing or duplicate messages after recovery from failure in a flow attached to a FileInput node

  • Scenario: After the failure of a message flow containing a FileInput node processing the input file as multiple records, a subsequent restart of the flow results in duplicate messages being processed. If the flow is not restarted, some input records are not processed.
  • Explanation: If a record produces a message that causes the flow to fail, and retry processing does not solve the problem, the node stops processing the file and moves it to the backout directory. The FileInput node is not transactional; it cannot roll back the file input records. Transactional resources in the attached flow can roll back the effects of the failing record, but not of the records that precede it: records before the failing record have already been processed, and records after it are not processed at all. If you restart the flow by moving the input file from the backout directory to the input directory, messages from records preceding the point of failure are duplicated.
  • Solution: If the input messages have unique keys, modify your flow to ignore duplicate records. If the messages do not have unique keys but each input file has a unique name, you can modify your flow to form a unique key based on the file name and record number. Define a database table and add a Database node to your flow to record the key of each record that is processed. Add a DatabaseRoute node to filter input messages so that only records without keys already in the database are processed. See the Simplified Database Routing sample to understand how to use the DatabaseRoute node to filter messages.

    If you cannot generate unique keys for each record, split your flow into two separate flows. In the first flow, wire the FileInput node to an MQOutput node so that each input record is copied as a BLOB to a WebSphere® MQ queue. Ensure there are adequate WebSphere MQ resources, queue size for example, so that the first flow does not fail. In the second flow, wire an MQInput node to the flow previously wired to your FileInput node. Configure the MQInput and other nodes to achieve the desired transactional behavior.
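    The key-based filtering described above can be sketched in ESQL. This is a minimal sketch under stated assumptions, not a complete solution: the FilterDuplicateRecords module name and the PROCESSED_RECORDS table with its RECORD_KEY column are hypothetical, and the flow must have a data source configured for the database access to work.

```esql
-- Hypothetical Compute node, placed before the transactional part of the
-- flow, that discards records whose key has already been processed.
-- PROCESSED_RECORDS is an assumed user-defined table with a single
-- RECORD_KEY column; adapt the names to your own database schema.
CREATE COMPUTE MODULE FilterDuplicateRecords
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    -- The FileInput node places the file name and record number in the
    -- LocalEnvironment.File folder (Name and Record).
    DECLARE recordKey CHARACTER
      InputLocalEnvironment.File.Name || ':' ||
      CAST(InputLocalEnvironment.File.Record AS CHARACTER);
    IF EXISTS(SELECT T.RECORD_KEY
              FROM Database.PROCESSED_RECORDS AS T
              WHERE T.RECORD_KEY = recordKey) THEN
      RETURN FALSE;                -- duplicate: do not propagate
    END IF;
    INSERT INTO Database.PROCESSED_RECORDS (RECORD_KEY) VALUES (recordKey);
    SET OutputRoot = InputRoot;    -- propagate the new record unchanged
    RETURN TRUE;
  END;
END MODULE;
```

    A DatabaseRoute node, as described above, achieves a similar effect declaratively; the Compute node form is shown here only to make the key construction explicit.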

No file is created in the output directory after FileOutput node processing

  • Scenario: A file created by the FileOutput node does not appear in the output directory. The node is configured so that the Record definition property has a value of Record is Unmodified Data, Record is Fixed Length Data, or Record is Delimited Data and the flow runs one or more times.
  • Explanation: The FileOutput node accumulates messages, record by record, in an incomplete version of the output file in the transit subdirectory of the output directory. It moves the file from the transit subdirectory to the output directory only when it receives a message on its Finish File terminal; at this point, the file is complete. If the node's input processing fails before a message is sent to the Finish File terminal, the file remains in the transit directory. The file might be completed by a subsequent flow if it uses the same file name and output directory; if this does not happen, the file is never moved to the output directory.
  • Solution: If you need to ensure that incomplete files are moved to the output directory if the input flow fails, wire the input node's Failure terminal to the FileOutput node's Finish File terminal, in addition to all other flows that are wired to this terminal.

    If you need all output files to be available for a downstream process at a particular time or after a particular event, wire a separate flow to the FileOutput node's Finish File terminal to send a message at that particular time or on that particular event. If duplicate messages which identify the same file are sent to the Finish File terminal, the FileOutput node ignores them.

    If your flows use the Request directory property location, Request file name property location (default Directory and Name in the $LocalEnvironment/Destination/File folder), or $LocalEnvironment/Wildcard/WildcardMatch, ensure that messages sent to the Finish File terminal contain the correct elements and values to identify the output file and directory.
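    For example, the finishing flow can set these elements in a Compute node. This is a minimal sketch that assumes the FileOutput node uses the default Directory and Name locations; the directory and file name shown are illustrative values only:

```esql
-- Hypothetical Compute node wired (directly or through other nodes) to the
-- FileOutput node's Finish File terminal. The Directory and Name values
-- must match those that were used when the records were written.
-- The Compute mode property must include LocalEnvironment.
SET OutputLocalEnvironment.Destination.File.Directory = '/var/mqsi/output';
SET OutputLocalEnvironment.Destination.File.Name = 'daily_totals.csv';
-- Propagate a message to trigger completion of the file
SET OutputRoot.Properties = InputRoot.Properties;
```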

Output file name overrides have not been applied

  • Scenario: The message elements set in the flow to override the output file name or directory values specified in the FileOutput node's Basic properties have not been applied. The output file is created using the name and directory set in the FileOutput node's Basic properties.
  • Explanation: One of the following might be the cause of this problem:
    • The message sent to the FileOutput node does not contain the expected changes.
    • The FileOutput node is configured to use different elements in the message from the ones set to the new values.
    • Not all messages contain the overriding values.
  • Solution: Use the debugger or a Trace node inserted in front of the FileOutput node's In terminal to check that the expected overriding values appear in the correct message elements. If they do not, check that the Compute mode property has been set correctly in Compute nodes that are upstream in the flow; for example, if $LocalEnvironment/Destination/File/Name has not changed following a Compute node, check that the Compute node has its Compute mode property set to LocalEnvironment and Message.

    If the message elements are set correctly, check that the FileOutput node's Request directory property location and Request file name property location properties identify the correct elements in the message.

    If you have specified Record is Unmodified Data, Record is Fixed Length Data, or Record is Delimited Data in the FileOutput node's Record definition property, ensure that messages that go to the Finish File terminal have the same override values as those that go to the In terminal. Otherwise, the Finish File terminal messages and the In terminal messages apply to different files.
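    As an illustration of the override mechanism, the following Compute node ESQL sets the default override elements. This is a sketch rather than a complete flow, and the directory and file name are example values:

```esql
-- Hypothetical Compute node upstream of the FileOutput node.
-- The Compute mode property must be set to LocalEnvironment and Message
-- (or All); otherwise the local environment changes are discarded and the
-- FileOutput node falls back to its Basic properties.
SET OutputLocalEnvironment = InputLocalEnvironment;
SET OutputLocalEnvironment.Destination.File.Directory = '/var/mqsi/output';
SET OutputLocalEnvironment.Destination.File.Name = 'override.dat';
SET OutputRoot = InputRoot;
```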


Copyright IBM Corporation 1999, 2016.

        
Last updated: 2016-05-23 14:47:40


Task topic | Version 8.0.0.7 | au55470_