When you collect message flow statistics, you can choose the output destination for the data.
Before message flow accounting and statistics can be collected, you must ensure that the publication of events has been enabled and a pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.
If you start the collection of message flow statistics data by using the web user interface, the statistics are emitted in JSON format in addition to any other formats that are already being emitted. If no output format was previously specified, and the default of user trace was therefore in effect, the newly specified JSON format replaces the default and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.
If you use the mqsichangeflowstats command to explicitly specify the required output formats, the formats specified by the command replace the formats that are currently being emitted for the message flow (they are not added to them).
If you stop statistics collection from the web user interface, all output formats are turned off. If statistics collection is subsequently restarted by using the mqsichangeflowstats command, the output format is reset to the default value of user trace, unless other formats are specified on the command. However, if statistics collection is restarted by using the web user interface, data is collected in JSON format.
Statistics data can be written to one or more of the following output destinations:
You can specify that the data that is collected is written to the user trace log. The data is written even when trace is switched off.
If no output destination is specified for accounting and statistics, the default is the user trace log. If one or more output formats are subsequently specified, the specified formats replace the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.
The location of the user trace log depends on your platform.
For information about the user trace entries, see User trace entries for message flow accounting and statistics data.
You can specify that the data that is collected is published in JSON format, which is available for viewing in the web user interface. If statistics collection is started through the web user interface, statistics data is emitted in JSON format in addition to any other formats that are already being emitted.
The data is published on the following topic:
$SYS/Broker/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
or
IBM/IntegrationBus/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
For information about the JSON publication, see JSON publication for message flow accounting and statistics data.
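For example, the following Python sketch subscribes to the JSON snapshot publications for all message flows in one integration server and prints each publication as it arrives. The broker host and port (localhost, 11883), the integration node name IBNODE, the integration server name default, and the use of the paho-mqtt client library are assumptions made for illustration; substitute the values that apply to your pub/sub broker configuration.

import json
import paho.mqtt.client as mqtt

# Assumptions: integration node IBNODE, integration server default, and an
# MQTT pub/sub broker on localhost:11883 - substitute your own values.
BROKER_HOST = "localhost"
BROKER_PORT = 11883
TOPIC = "$SYS/Broker/IBNODE/Statistics/JSON/SnapShot/default/#"

def on_connect(client, userdata, flags, rc):
    # Subscribe after the connection to the pub/sub broker is established
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Each publication is a JSON document that describes one message flow
    stats = json.loads(msg.payload)
    print(msg.topic)
    print(json.dumps(stats, indent=2))

# paho-mqtt 1.x callback style; with paho-mqtt 2.x, pass
# mqtt.CallbackAPIVersion.VERSION1 to Client() to keep these signatures
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_forever()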
You can specify that the data that is collected is published in XML format, and is available to subscribers that are registered in the integration node network and subscribe to the correct topic. The data is published on the following topic:
$SYS/Broker/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
or
IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
Subscribers can include filter expressions to limit the publications that they receive. For example, they can choose to see only snapshot data, or to see data that is collected for a single integration node. Subscribers can specify wild cards (+ and #) to receive publications that refer to multiple resources. Use + to receive resources on one topic level, and # to receive resources across multiple topic levels.
For example, to receive all statistics and accounting data that is published by an integration node that is named IBNODE, subscribe to the following topic:
$SYS/Broker/IBNODE/StatisticsAccounting/#
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/#
To receive only archive data for the message flow Flow1 that is deployed to the integration server default, subscribe to the following topic:
$SYS/Broker/IBNODE/StatisticsAccounting/Archive/default/Flow1
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/Archive/default/Flow1
To receive both snapshot and archive data for the message flow Flow1 that is deployed to the integration server default, subscribe to the following topic:
$SYS/Broker/IBNODE/StatisticsAccounting/+/default/Flow1
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/+/default/Flow1
For help with registering your subscriber, see Message display, test and performance utilities SupportPac (IH03).
For information about the XML publication, see XML publication for message flow accounting and statistics data.
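As an illustration of the wildcard subscriptions shown above, the following Python sketch subscribes to all StatisticsAccounting publications for an integration node and uses ElementTree to list the top-level elements and attributes of each XML record, without assuming any particular element names. The broker host and port (localhost, 11883), the node name IBNODE, and the use of the paho-mqtt client library are assumptions; substitute the values for your own configuration.

import xml.etree.ElementTree as ET
import paho.mqtt.client as mqtt

# Assumption: MQTT pub/sub broker on localhost:11883, integration node IBNODE
TOPIC = "$SYS/Broker/IBNODE/StatisticsAccounting/#"

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Parse the XML publication and show its structure
    root = ET.fromstring(msg.payload)
    print(msg.topic)
    print(root.tag, root.attrib)
    for child in root:
        print(" ", child.tag, child.attrib)

# paho-mqtt 1.x callback style; with paho-mqtt 2.x, pass
# mqtt.CallbackAPIVersion.VERSION1 to Client() to keep these signatures
client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 11883)
client.loop_forever()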
You can specify that the data that is collected is written to a file in comma-separated value (.csv) format. Snapshot and archive data records are written to output files, which include a header row that contains the field names. The fields for averages are optional, and are written only if the averages property of the statistics file writer is set to true.
One line is written for each message flow that is producing data for the time period that you choose. For example, if MessageFlowA and MessageFlowB are both producing archive data over a period of 60 minutes, both message flows produce a line of statistics data every 60 minutes.
For more information about the CSV records, see CSV file format for message flow accounting and statistics data.
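Because each output file starts with a header row, the records can be read with any CSV library. The following Python sketch prints each record as a dictionary that is keyed by field name; the file name stats_archive.csv is a hypothetical example, so substitute the path of the file that your statistics file writer produces.

import csv

STATS_FILE = "stats_archive.csv"   # hypothetical file name - use your own output file

with open(STATS_FILE, newline="") as f:
    reader = csv.DictReader(f)     # the header row supplies the field names
    for row in reader:
        # Each row holds the data for one message flow for one collection interval
        print(row)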
You can specify that the data that is collected is published in bluemix format, which can be sent to IBM® Cloud Log Analysis and viewed in a Kibana dashboard.
For more information, see Reporting logging and statistics data to IBM Cloud Log Analysis and displaying it in a Kibana dashboard.
On z/OS, you can specify that the data collected is written to SMF. Accounting and statistics data uses SMF type 117 records. SMF supports the collection of data from multiple subsystems, and you might therefore be able to synchronize the information that is recorded from different sources.
To interpret the information that is recorded, use any utility program that processes SMF records.
For information about the SMF records, see z/OS SMF records for message flow accounting and statistics data.