IBM App Connect Enterprise, Version 11.0.0.2
Operating systems: Windows, Linux


Output formats for message flow accounting and statistics data

When you collect message flow statistics, you can choose the output destination for the data.

You can select one or more output destinations for the data by setting the outputFormat property in the configuration file for your integration node (node.conf.yaml) or integration server (server.conf.yaml). You can set the output format of snapshot statistics data, archive statistics data, or both, to one or more of the destinations that are described in this topic, separated by commas. If no format is specified, accounting and statistics data is sent to the user trace log by default. For more information about configuring the collection and publishing of message flow accounting and statistics data, see Configuring the collection of message flow accounting and statistics data.
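For example, the following excerpt shows how the outputFormat property might be set in a server.conf.yaml file. This is a minimal sketch: the surrounding property names (publicationOn and archivalOn) reflect a typical Version 11.0 configuration file, so check the commented template in your own file before you edit it.

  Statistics:
    Snapshot:
      publicationOn: 'active'   # enable snapshot statistics collection
      outputFormat: 'json'      # comma-separated list, for example 'json,usertrace'
    Archive:
      archivalOn: 'active'      # enable archive statistics collection
      outputFormat: 'xml'       # publish archive data in XML format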

Before message flow accounting and statistics can be collected, you must ensure that the publication of events has been enabled and a pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.

If you start the collection of message flow statistics data by using the web user interface, the statistics are emitted in JSON format in addition to any other formats that are already being emitted. If the output format was previously not specified and therefore defaulted to the user trace, the newly specified format replaces the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

Statistics data is written to the specified output location in the following circumstances:

User trace entries

You can specify that the data that is collected is written to the user trace log. The data is written even when trace is switched off.

If no output destination is specified for accounting and statistics, the default is the user trace log. If one or more output formats are subsequently specified, the specified formats replace the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

The data is written to one of the following locations:

Windows
If you set the work path by using the -w parameter of the mqsicreatebroker command, the location is workpath\Common\log.
If you have not specified the integration node work path, the location is C:\ProgramData\IBM\MQSI\Common\log.
Linux and UNIX
/var/mqsi/common/log
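
You can read the log files directly, or retrieve and format them with the mqsireadlog and mqsiformatlog commands. The following commands are an illustrative sketch that assumes an integration node named INODE with a node-managed integration server named default; the file names are placeholders:

  mqsireadlog INODE -u -e default -f -o stats.xml
  mqsiformatlog -i stats.xml -o stats.txt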

For information about the user trace entries, see User trace entries for message flow accounting and statistics data.

JSON publication

You can specify that the data that is collected is published in JSON format, which is available for viewing in the web user interface. If statistics collection is started through the web user interface, statistics data is emitted in JSON format in addition to any other formats that are already being emitted.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected
integrationServerName
The name of the integration server for which statistics are collected
application_name
The name of the application for which statistics are collected
library_name
The name of the library for which statistics are collected
message_flow_name
The name of the message flow for which statistics are collected
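
If you are using the built-in MQTT pub/sub broker, you can watch these publications with any MQTT client. The following command is a sketch that uses the open source mosquitto_sub client (not part of IBM App Connect Enterprise) and assumes an integration node named INODE with the broker listening on port 11883; the trailing # wildcard subscribes to snapshot data for all applications, libraries, and message flows:

  mosquitto_sub -h localhost -p 11883 -t 'IBM/IntegrationBus/INODE/Statistics/JSON/SnapShot/#' -v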

For information about the JSON publication, see JSON publication for message flow accounting and statistics data.

XML publication

You can specify that the data that is collected is published in XML format, which makes it available to any subscriber that is registered in the integration node network on the correct topic.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected.
record_type
Set to SnapShot or Archive, depending on the type of data to which you are subscribing. Alternatively, use + to register for both snapshot and archive data, if both are being produced. These values are case sensitive; for example, snapshot data must be specified as SnapShot.
integrationServerName
The name of the integration server for which statistics are collected.
message_flow_label
The label on the message flow for which statistics are collected.

Subscribers can include filter expressions to limit the publications that they receive. For example, they can choose to see only snapshot data, or to see data that is collected for a single integration node. Subscribers can specify wild cards (+ and #) to receive publications that refer to multiple resources. Use + to receive resources on one topic level, and # to receive resources across multiple topic levels.

The following examples show the topic with which a subscriber registers to receive different sorts of data:
  • Register the following topic for the subscriber to receive data for all message flows running on an integration node named INODE:
    $SYS/Broker/INODE/StatisticsAccounting/#
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/#
  • Register the following topic for the subscriber to receive only archive statistics that relate to a message flow Flow1 running on integration server default on integration node INODE:
    $SYS/Broker/INODE/StatisticsAccounting/Archive/default/Flow1
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/Archive/default/Flow1
  • Register the following topic for the subscriber to receive both snapshot and archive data for message flow Flow1 running on integration server default on integration node INODE:
    $SYS/Broker/INODE/StatisticsAccounting/+/default/Flow1
    or
    IBM/IntegrationBus/INODE/StatisticsAccounting/+/default/Flow1

For help with registering your subscriber, see Message display, test and performance utilities SupportPac (IH03).

For information about the XML publication, see XML publication for message flow accounting and statistics data.

CSV records

You can specify that the data that is collected is written in comma-separated value (.csv) format. Snapshot and archive data records are written to output files, which include a header line with the field names. The fields for averages are optional, and are written only if the averages property of the statistics file writer is set to true.

One line is written for each message flow that is producing data for the time period that you choose. For example, if MessageFlowA and MessageFlowB are both producing archive data over a period of 60 minutes, both message flows produce a line of statistics data every 60 minutes.
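
Because each record is a single line under a named header, the files are straightforward to process with standard tooling. The following Python sketch is illustrative only; the file name stats.csv is a placeholder for wherever your statistics output file is written:

  import csv

  # Each data line is one statistics record for one message flow and
  # collection interval; the header line of the file names the fields.
  with open('stats.csv', newline='') as f:
      for record in csv.DictReader(f):
          print(record)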

For more information about the CSV records, see CSV file format for message flow accounting and statistics data.

IBM Cloud Log Analysis events

You can specify that the data that is collected is published in bluemix format, which can be sent to IBM® Cloud Log Analysis and viewed in a Kibana dashboard.

For more information, see Reporting logging and statistics data to IBM Cloud Log Analysis and displaying it in a Kibana dashboard.

