
Reporting

Reporting is a feature of Platform LSF. It allows you to look at the overall statistics of your entire cluster. You can analyze the history of hosts, resources, and workload in your cluster to get an overall picture of your cluster's performance.

Introduction to Reporting

An efficient cluster maximizes the usage of resources while minimizing the average wait time of workload. To ensure that your cluster is running efficiently at all times, you need to analyze the activity within your cluster to see if there are any areas for improvement.

The reporting feature uses the data loader controller service, the job data transformer service, and the data purger service to collect data from the cluster and maintain this data in a relational database system. It then retrieves the cluster data from the relational database and displays it in reports, either graphically or in tables. You can use these reports to analyze and improve the performance of your cluster, and to troubleshoot configuration problems.

You can access the reporting feature from the HPC Portal.

Standard and custom reports

Platform has provided a set of standard reports to allow you to immediately analyze your cluster without having to create any new reports. These standard reports provide the most common and useful data to analyze your cluster.

You may also create custom reports to perform advanced queries and reports beyond the data produced in the standard reports.

The database

The reporting feature optionally includes the Apache Derby database, a Java-based relational database system accessed through JDBC. The Derby database is a small-footprint, open source database, and is only appropriate for demo clusters. If you want to use the reporting feature to produce regular reports for a production cluster, you must use a supported commercial database.

The reporting feature supports Oracle 9i, Oracle 10g, and MySQL 5.x databases.

important:  
The Apache Derby database is not supported for any production clusters.

Getting Started with Standard Reports

For your convenience, Platform has provided several standard reports for you to use. These reports allow you to keep track of some useful statistics in your cluster.

Standard reports overview

Standard reports are based on raw data stored in the relational database, and do not perform any data aggregation or calculations.

The following is a list of the standard reports that are included with the reporting feature. For further details on a report, open its full description as described in View the full description of a report.

Table 4: Standard reports

Name | Description | Category
Cluster Availability - EGO | EGO host availability in a cluster. | EGO
Host Resource Usage | Resource usage trends for selected hosts. | EGO
Resource Allocation vs Resource Plan | Actual resource allocation compared to resource plan and unsatisfied resource demand for the selected consumer. | EGO
Active Job States Statistics by Queue | Number of active jobs in each active job state in a selected queue. | LSF
Cluster Availability - LSF | LSF host availability in an LSF cluster. | LSF
Cluster Job Hourly Throughput | Number of submitted, exited, and done jobs in a cluster. | LSF
Cluster Job Slot Utilization | Job slot utilization levels in your cluster. | LSF
Job Slot Usage by Application Tag | Job slots used by applications as indicated by the application tag. | LSF
Performance Metrics | Internal performance metrics trend for a cluster. You can only produce this report if you enabled performance metric collection in your cluster (badmin perfmon start). | LSF
Service Level Agreement (SLA) | Job statistics by job state over time, compared with SLA goals. | LSF
Hourly Desktop Job Throughput | Number of downloaded and completed jobs for each MED host or the entire cluster. You can only produce this report if you use LSF Desktop. | LSF Desktop
Desktop Utilization | Desktop utilization at each MED host or the entire cluster. You can only produce this report if you use LSF Desktop. | LSF Desktop
License Usage | The license usage under License Scheduler. You can only produce this report if you use LSF License Scheduler. | LSF License Scheduler
Jobs Forwarded to Other Clusters | The number of jobs forwarded from your cluster to other clusters. You can only produce this report if you use LSF MultiCluster. | LSF MultiCluster
Jobs Received from Other Clusters | The number of jobs forwarded to your cluster from other clusters. You can only produce this report if you use LSF MultiCluster. | LSF MultiCluster

View the full description of a report
  1. In the Console, navigate to Reports, then Standard Reports.
  2. Click the name of your report to open it.
  3. Click Report properties.

What can I do with standard reports?

Producing reports

The reports stored in the system do not include actual data. Instead, the reports define what data to extract from the system, and how to display it graphically.

Reports need to be produced before you can see the data. When you produce a report, you query the database and extract specific data. The amount of system overhead depends on how much data is in the report.

Standard reports have configurable parameters so you can modify the report and get exactly the data that you want.

Exporting reports

Data expires from the database periodically, so producing a report at a later date may return different data, or return no output at all. After you produce a report, you can keep your results by exporting the report data as comma-separated values in a CSV file. In this way you can preserve your data outside the system and integrate it with external programs, such as a spreadsheet. You can also keep your graphical results by using your browser to save the report results as an image.

Produce a standard report

  1. In the Console, navigate to Reports, then Standard Reports.
  2. Click the name of your report to open it.
  3. Set the report parameters as desired. Default settings are shown, but you can modify them to suit your needs.
  4. Click Produce Report.
  5. After a short time, the resulting data is displayed graphically.

When you close the report window, you lose the contents of the report unless you export it first.

Export report data

Once you produce a report, exporting is the best way to save the data for future use. You cannot produce the same report at a later date if the data has expired from the database.

  1. In the Console, produce and view your report.
  2. Click Export Report Data.
  3. In the browser dialog, specify the output path and name the exported file.
  4. In the Save as type field, specify "CSV".

Custom Reports

You can create and use custom reports if the standard reports are insufficient for your needs.

What are custom reports?

While standard reports are provided for your use by Platform, custom reports are reports you create as needed to satisfy specific reporting needs at your site.

Custom reports let you define combinations of data that are not available in the standard reports. Custom report output is always displayed in tabular format.

What can I do with custom reports?

Creating reports

The easiest way to create a custom report is to copy an existing report, then customize the SQL query string as desired. To customize the SQL query string, you may need to refer to the data schema, which describes the organization of information in the relational database. The data schema for each standard report is available in the Console by opening the report and clicking Help.

Even if you cannot edit SQL, saving a report as a custom report lets you re-use the report data without having to re-input the parameters in the standard report.

- If the time period is fixed, you get the same data every time you produce the report, but the report will be empty when the data expires from the database.

- If the time period is relative, you can get data for a different time period each time you produce the report.

You can also define custom reports from a blank template and input the SQL query string directly.

When you create custom reports, you can enter a category and use it to group the reports any way you want.
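
For example, a custom report query reads rows from tables described in the data schema. The following is a minimal sketch only: it assumes the RESOURCE_METRICS table and its TIME_STAMP column (which appear elsewhere in this document), while the other column names are hypothetical placeholders, and the date arithmetic differs between Oracle, MySQL, and Derby.

    -- Hypothetical query: resource metrics collected in the last 7 days.
    -- Replace the placeholder columns with real columns from the data schema.
    SELECT TIME_STAMP, METRIC_NAME, METRIC_VALUE
    FROM RESOURCE_METRICS
    WHERE TIME_STAMP >= CURRENT_TIMESTAMP - INTERVAL '7' DAY
    ORDER BY TIME_STAMP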

Deleting reports

Unlike standard reports, custom reports can be deleted. You might prefer to rename old reports (by modifying them) instead of deleting them.

Using reports

You produce custom reports and export the data in the same way as standard reports.

Data expires from the database periodically, so producing a report at a later date may return different data, or return no output at all. After you produce a report, you can keep your results by exporting the report data as comma-separated values in a CSV file. In this way you can preserve your data outside the system and integrate it with external programs, such as a spreadsheet. You can also keep your graphical results by using your browser to save the report results as an image.

If you ever want to modify parameters of a custom report, you must edit the SQL query string directly.

Create a custom report from an existing report

This method is convenient because you can extend an existing report. Examine your current standard and custom reports and select one with similar data sources or output to the new report that you want to create.

  1. In the Console, select the report that you want to copy, with all the parameters configured as you wish to copy them.
  2. Click Copy to New Custom Report.
  3. Edit the report properties and query string as desired.
    1. In the Report properties section, give the new report a unique name. You can also modify the report summary, description, and category.
    2. In the Report query section, modify the SQL query directly. To edit the SQL query, you need to know the data schema of the database. For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
    3. To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report. This actually produces the report, so you might want to limit your testing to a small set of data.

      You can continue to edit your SQL query string and test the results of your report until you are ready to save it.

  4. To finish, click Create.

To access your new custom report, navigate to Reports then Custom Reports.

Create a new custom report

Prerequisites: You must be able to construct valid query strings with Structured Query Language (SQL).

  1. In the Console, navigate to Reports then Custom Reports.
  2. Select Global Actions > Create Custom Report.
  3. Define the report properties and query string as desired.
    1. In the Report properties section, specify the report name, summary, description, and category.
    2. In the Report query section, input your SQL query string. For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
    3. To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report. This actually produces the report, so you might want to limit your testing to a small set of data.

      You can continue to edit your SQL query string and test the results of your report until you are ready to save it.

  4. To finish, click Create.

To access your new custom report, navigate to Reports then Custom Reports.

Modify a custom report

  1. In the Console, navigate to Reports then Custom Reports.
  2. Click the name of your report.
  3. Modify the report properties and query string as desired.
    1. Edit the report properties and SQL query string. For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
    2. To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report. This actually produces the report, so you might want to limit your testing to a small set of data.

      You can continue to edit your SQL query string and test the results of your report until you are ready to save it.

  4. To confirm your changes, click Save.

Produce a custom report

  1. In the Console, navigate to Reports then Custom Reports.
  2. Click the name of your report to open it.
  3. Click Produce Report.
  4. After a short time, the resulting data is displayed in tabular format.

When you close the report window, you will lose the contents of the report unless you export it first.

Export report data

Once you produce a report, exporting is the best way to save the data for future use. You cannot produce the same report at a later date if the data has expired from the database.

  1. In the Console, produce and view your report.
  2. Click Export Report Data.
  3. In the browser dialog, specify the output path and name the exported file.
  4. In the Save as type field, specify "CSV".

Delete a custom report

  1. In the Console, navigate to Reports then Custom Reports.
  2. Locate your report in the list.
  3. Select Actions > Delete Report.

System Description

The reporting feature is built on top of the Platform Enterprise Reporting Framework (PERF) architecture. This architecture defines the communication between your EGO cluster, relational database, and data sources via the PERF Loader Controller (PLC). The loader controller is the module that controls multiple loaders for data collection.

PERF architecture

The following diagram illustrates the PERF architecture as it relates to your cluster, reporting services, relational database, and data loaders.

Data loaders

The reporting feature collects cluster operation data using data loaders to load data into tables in a relational database. The data loaders connect to the database using a JDBC driver. The data loaders handle daylight savings automatically by using GMT time when collecting data.

Default data loaders

The following tables list the data loaders and their default behavior:

Table 5: LSF data loaders

Data loader name | Data type | Data gathering interval | Data loads to | Loader type
License Scheduler (bldloader) | license usage | 5 minutes | BLD_LICUSAGE | polling
Desktop job (desktopjobdataloader) - Linux hosts only | job completion log | 1 day | ACTIVE_DESKTOP_JOBDATA | polling
Desktop client (desktopclientdataloader) - Linux hosts only | client status (data from the WSClientStatus file) | 10 minutes | ACTIVE_DESKTOP_SED_CLIENT | polling
Desktop active event (desktopeventloader) - Linux hosts only | downloaded and reported jobs (data from the event.log file) | each time an event of type 2 (REPORT_JOB) or type 4 (COMPLETE_JOB) is logged in event.log | ACTIVE_DESKTOP_ACEVENT | polling
Host metrics (hostmetricsloader) | host-related metrics | 5 minutes | RESOURCE_METRICS, RESOURCES_RESOURCE_METRICS | polling
Host properties (hostpropertiesloader) | resource properties | 1 hour | LSF_RESOURCE_PROPERTIES | polling
Bhosts (lsfbhostsloader) | host utilization and state-related data | 5 minutes | LSF_BHOSTS | polling
LSF events (lsfeventsloader) | events with a job ID, performance events, resource events | 5 minutes | LSB_EVENTS, LSB_EVENTS_EXECHOSTLIST, LSF_PERFORMANCE_METRIC | file
Resource properties (lsfresproploader) | shared resource properties | 1 hour | LSF_RESOURCE_PROPERTIES | polling
SLA (lsfslaloader) | SLA performance | 5 minutes | LSF_SLA | polling
Shared resource usage (sharedresusageloader) | shared resource usage | 5 minutes | SHARED_RESOURCE_USAGE, SHARED_RESOURCE_USAGE_HOSTLIST | polling

Table 6: EGO data loaders

Data loader name | Data type | Data gathering interval | Data loads to | Loader type
Consumer resource (egoconsumerresloader) | resource allocation | 5 minutes | CONSUMER_DEMAND, CONSUMER_RESOURCE_ALLOCATION, CONSUMER_RESOURCELIST | polling
Dynamic metric (egodynamicresloader) | host-related dynamic metric | 5 minutes | RESOURCE_METRICS, RESOURCES_RESOURCE_METRICS | polling
EGO allocation events (egoeventsloader) | resource allocation | 5 minutes | ALLOCATION_EVENT | file
Static attribute (egostaticresloader) | host-related static attribute | 1 hour | ATTRIBUTES_RESOURCE_METRICS, RESOURCE_ATTRIBUTES | polling

System services

The reporting feature has system services, including the Derby service if you are running a demo database. If your cluster has PERF controlled by EGO, these services are run as EGO services. Each service uses one slot on a management host.

Loader controller

The loader controller service (plc) controls the data loaders that collect data from the system and write it into the database.

Data purger

The data purger service (purger) maintains the size of the database by purging old records from the database. By default, the data purger purges all data that is older than 14 days, and purges data every day at 12:30am.

Job data transformer

The job data transformer service (jobdt) converts raw job data in the relational database into a format usable by the reporting feature. By default, the job data transformer converts job data every hour at thirty minutes past the hour (that is, at 12:30am, 1:30am, and so on throughout the day).

Derby database

If you are running a demo database, the Derby database (derbydb) stores the cluster data. When using a supported commercial database, the Derby database service no longer runs as an EGO service.

Reports Administration

What do I need to know?

Reports directories

The reporting feature resides in various perf subdirectories within the LSF directory structure. This document uses LSF_TOP to refer to the top-level LSF installation directory. The reporting feature directories include the following:

Table 7: LSF reporting directory environment variables in UNIX

Directory name | Directory description | Default file path
$PERF_TOP | Reports framework directory | LSF_TOP/perf
$PERF_CONFDIR | Configuration files | LSF_TOP/conf/perf/cluster_name/conf
$PERF_LOGDIR | Log files | LSF_TOP/log/perf
$PERF_WORKDIR | Working directory | LSF_TOP/work/perf
$PERF_DATADIR | Data directory | LSF_TOP/work/cluster_name/perf/data

Table 8: LSF reporting directory environment variables in Windows

Directory name | Directory description | Default file path
%PERF_TOP% | Reports framework directory | LSF_TOP\perf
%PERF_CONFDIR% | Configuration files | LSF_TOP\conf\perf\cluster_name\conf
%PERF_LOGDIR% | Log files | LSF_TOP\log\perf
%PERF_WORKDIR% | Working directory | LSF_TOP\work\perf
%PERF_DATADIR% | Data directory | LSF_TOP\work\cluster_name\perf\data

Reporting services

The reporting feature uses the following services.

The Derby demo database uses the derbydb service.

If your cluster has PERF controlled by EGO, these services are run as EGO services.

You need to stop and restart a service after editing its configuration files. If you are disabling the reporting feature, you need to disable automatic startup of these services, as described in Disable automatic startup of the reporting services.

Log files for these services are available in the PERF_LOGDIR directory. Logging levels determine the detail of messages recorded in the log files. In decreasing level of detail, these are ALL (all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (no messages). By default, all service log files log messages of INFO level or higher (that is, all INFO, WARN, ERROR, and FATAL messages). You can change the logging level of the plc service using the loader controller client tool as described in Dynamically change the log level of your loader controller log file, or the logging level of the other services as described in Change the log level of your log files.

Job data transformer

The job data is logged in the relational database in a raw format. At regular intervals, the job data transformer converts this data to a format usable by the reporting feature. By default, the data transformer converts the job data every hour at thirty minutes past the hour (that is, at 12:30am, 1:30am, and so on throughout the day).

To reschedule the transformation of data from the relational database to the reporting feature, you can change the data transformer schedule as described in Change the data transformer schedule.

If your cluster has PERF controlled by EGO, you can edit the jobdt.xml configuration file, but you need to restart the jobdt service and EGO on the master host after editing the file. The jobdt.xml file is located in the EGO service directory:

Loader controller

The loader controller manages the data loaders. By default, the loader controller manages the following data loaders:

You can view the status of the loader controller service using the loader controller client tool as described in View the status of the loader controller.

Log files for the loader controller and data loaders are available in the PERF_LOGDIR directory. There are logging levels that determine the detail of messages recorded in the log files. In decreasing level of detail, these are ALL (all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (no messages). By default, all service log files log messages of INFO level or higher (that is, all INFO, WARN, ERROR, and FATAL messages). You can change the logging level of the plc service using the loader controller client tool as described in Dynamically change the log level of your loader controller log file, or the logging level of the data loaders using the client tool as described in Dynamically change the log level of your data loader log files.

To balance data accuracy with computing power, you can change how often the data loaders collect data by changing the frequency of data collection per loader, as described in Change the frequency of data collection. To reduce the amount of unwanted data logged in the database, you can also disable individual data loaders from collecting data, as described in Disable data collection for individual data loaders.

If you edit any plc configuration files, you need to restart the plc service.

If your cluster has PERF controlled by EGO, you can edit the plc_service.xml service configuration file, but you must restart the plc service and EGO on the master host after editing the file. The plc_service.xml file is located in the EGO service directory:

Data purger

The relational database needs to be kept to a reasonable size to maintain optimal efficiency. The data purger manages the database size by purging old data at regular intervals. By default, the data purger purges records older than 14 days at 12:30am every day.

To reschedule the purging of old data, you can change the purger schedule, as described in Change the data purger schedule. To reduce or increase the number of records in the database, you can change the duration of time that records are stored in the database, as described in Change the default record expiry time. If specific tables contain too much or too little data, you can also change the duration of time that records are stored in each individual table within the database, as described in Change the record expiry time per table.

If you edit any purger configuration files, you need to restart the purger service.

If your cluster has PERF controlled by EGO, you can edit the purger_service.xml service configuration file, but you must restart the purger service and EGO on the master host after editing the file. The purger_service.xml file is located in the EGO service directory:

Derby database

The Derby database uses the derbydb service. You need to restart this service manually whenever you change the Derby database settings. The Derby database is only appropriate for demo clusters. To use the reporting feature to produce regular reports for a production cluster, you must move to a production database using a supported commercial database and disable automatic startup of the derbydb service.

Event data files

The events logger stores event data in event data files. The EGO allocation event data file (for EGO-enabled clusters only) is named ego.stream by default and has a default maximum size of 10MB. The LSF event data file is named lsb.stream by default and has a default maximum size of 100MB. When a data file exceeds this size, the events logger archives the file and creates a new data file.

The events logger only maintains one archive file and overwrites the old archive with the new archive. The default archive file name is ego.stream.0 for EGO and lsb.stream.0 for LSF. The two LSF files are located in LSF_TOP/work/cluster_name/logdir/stream by default, and the two EGO files are located in LSF_TOP/work/cluster_name/ego/data by default. The event data loaders read both the data files and the archive files.

If your system logs a large number of events, you should increase the maximum file size to see more archived event data. If your disk space is insufficient for storing the four files, you should decrease the maximum file size, or change the file path to a location with sufficient storage space. Change the disk usage of your LSF event data files as described in Change the disk usage of LSF event data files or the file path as described in Change the location of the LSF event data files. Change the disk usage or file path of your EGO allocation event data files as described in Change the disk usage of EGO allocation event data files.

You can manage your event data files by editing the system configuration files. Edit ego.conf for the EGO allocation event data file configuration and lsb.params for the LSF event data file configuration.
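
For example, the relevant parameters might look like the following sketch; the file locations are the defaults named above, and the values are only illustrative (2000 MB is the recommended LSF event file size from the procedure later in this section, and 10 MB is the EGO default):

    # lsb.params (LSF event data file)
    EVENT_STREAM_FILE = LSF_TOP/work/cluster_name/logdir/stream/lsb.stream
    MAX_EVENT_STREAM_SIZE = 2000

    # ego.conf (EGO allocation event data file)
    EGO_DATA_FILE = LSF_TOP/work/cluster_name/ego/data/ego.stream
    EGO_DATA_MAXSIZE = 10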

Administering reports

Determine if your cluster is EGO-enabled and has PERF controlled by EGO

Determine whether your cluster is EGO-enabled and has PERF controlled by EGO so that you know which command to use to manage the reporting services.

  1. In the command console, run egosh service list to see the list of EGO services.
Stop or restart reporting services (PERF controlled by EGO)

Prerequisites: Your cluster must have PERF controlled by EGO.

Stop or restart the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. If your cluster has PERF controlled by EGO, the reporting services are run as EGO services, and you use the egosh service command to stop or restart these services.

  1. In the command console, stop the service:

    egosh service stop service_name

  2. If you want to restart the service:

    egosh service start service_name
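
    For example, to stop and then restart the loader controller service (which runs under the service name plc):

    egosh service stop plc
    egosh service start plc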

Stop or restart reporting services (PERF not controlled by EGO)

Prerequisites: Your cluster must have PERF not controlled by EGO.

Stop or restart the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. If your cluster does not have PERF controlled by EGO, you use the perfadmin command to stop or restart these services.

  1. In the command console, stop the service:

    perfadmin stop service_name

  2. If you want to restart the service:

    perfadmin start service_name
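
    For example, to stop and then restart the data purger service:

    perfadmin stop purger
    perfadmin start purger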

Disable automatic startup of the reporting services

Prerequisites: Your cluster must be EGO-enabled.

When disabling the reporting feature, disable automatic startup of the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. When moving from the Derby demo database to a production database, disable automatic startup of the derbydb service.

Disable automatic startup of these services by editing their service configuration files (jobdt.xml, plc_service.xml, purger_service.xml, and derby_service.xml for the jobdt, plc, purger, and derbydb services, respectively).

  1. In the command console, open the EGO service directory.
  2. Edit the service configuration file and change the service type from automatic to manual: in the <sc:StartType> tag, change the text from AUTOMATIC to MANUAL.
  3. Stop the service.
  4. In the command console, restart EGO on the master host to activate these changes:

    egosh ego restart master_host_name
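
    For example, in plc_service.xml the edited tag would look similar to the following sketch (surrounding elements omitted):

    <sc:StartType>MANUAL</sc:StartType>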

View the status of the loader controller

Use the loader controller client tool to view the status of the loader controller.

  1. Launch the loader controller client tool with the -s option.
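
    For example, assuming the client tool is the plcclient script shipped with PERF (check your installation for its exact name and location):

    plcclient.sh -s
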
Dynamically change the log level of your loader controller log file

Use the loader controller client tool to dynamically change the log level of your plc log file if it does not cover enough detail, or covers too much, to suit your needs.

If you restart the plc service, the log level of your plc log file will be set back to the default level. To retain your new log level, change the level of your plc log file as described in Change the log level of your log files.

  1. Launch the loader controller client tool with the -l option.
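
    For example, to set the plc log level to ERROR dynamically (assuming the same plcclient script; the valid level names are listed in Change the log level of your log files):

    plcclient.sh -l ERROR
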
Dynamically change the log level of your data loader log files

Use the loader controller client tool to dynamically change the log level of your individual data loader log files if they do not cover enough detail, or cover too much, to suit your needs.

If you restart the plc service, the log level of your data loader log files will be set back to the default level. To retain your new log level, change the level of your data loader log files as described in Change the log level of your log files.

  1. If you are using the default configuration file, launch the loader controller client tool with the -n and -l options.
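
    For example, to set the log level of a single data loader, such as lsfbhostsloader, to DEBUG (assuming the same plcclient script):

    plcclient.sh -n lsfbhostsloader -l DEBUG
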
Change the log level of your log files

Change the log level of your log files if they do not cover enough detail, or cover too much, to suit your needs.

  1. Edit the log4j.properties file, located in the reports configuration directory (PERF_CONFDIR).
  2. Navigate to the section representing the service you want to change, or to the default loader configuration if you want to change the log level of the data loaders, and look for the log4j.logger.com.platform.perf. variable. For example, to change the log level of the data purger log files, navigate to the following section, which is set to the default INFO level:

    # Data purger ("purger") configuration
    log4j.logger.com.platform.perf.purger=INFO, 
    com.platform.perf.purger 

  3. Change the log4j.logger.com.platform.perf. variable to the new logging level. In decreasing level of detail, the valid values are ALL (for all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (for no messages). The services or data loaders only log messages of the same or lower level of detail as specified by the log4j.logger.com.platform.perf. variable. Therefore, if you change the log level to ERROR, the service or data loaders will only log ERROR and FATAL messages.

    For example, to change the data purger log files to the ERROR log level:

    # Data purger ("purger") configuration
    log4j.logger.com.platform.perf.purger=ERROR, 
    com.platform.perf.purger 

  4. Restart the service that you changed (or the plc service if you changed the data loader log level).
Change the disk usage of LSF event data files

If your system logs a large number of events and you have sufficient disk space, increase the disk space allocated to the LSF event data files.

  1. Edit lsb.params and specify or change the MAX_EVENT_STREAM_SIZE parameter:

    MAX_EVENT_STREAM_SIZE = integer

    If unspecified, this is 1024 by default. Change this to the new desired file size in MB. The recommended size is 2000 MB.

  2. In the command console, reconfigure the master host to activate this change:

    badmin reconfig
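
    For example, to use the recommended size, you would set the following in lsb.params and then run badmin reconfig:

    MAX_EVENT_STREAM_SIZE = 2000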

Change the location of the LSF event data files

If your system logs a large number of events and you do not have enough disk space, move the LSF event data files to another location.

  1. Edit lsb.params and specify or change the EVENT_STREAM_FILE parameter:

    EVENT_STREAM_FILE = file_path

    If unspecified, this is LSF_TOP/work/cluster_name/logdir/stream/lsb.stream by default.

  2. In the command console, reconfigure the master host to activate this change:

    badmin reconfig

  3. Restart the plc service on the master host to activate this change.
Change the disk usage of EGO allocation event data files

Prerequisites: Your cluster must be EGO-enabled.

If your system logs a large number of events, increase the disk space allocated to the EGO allocation event data files. If your disk space is insufficient, decrease the space allocated to the EGO allocation event data files or move these files to another location.

  1. Edit ego.conf.
    1. To change the size of each EGO allocation event data file, specify or change the EGO_DATA_MAXSIZE parameter:

      EGO_DATA_MAXSIZE = integer

      If unspecified, this is 10 by default. Change this to the new desired file size in MB.

    2. To move the files to another location, specify or change the EGO_DATA_FILE parameter:

      EGO_DATA_FILE = file_path

      If unspecified, this is LSF_TOP/work/cluster_name/ego/data/ego.stream by default.

  2. In the command console, restart EGO on the master host to activate this change:

    egosh ego restart master_host_name

Change the data purger schedule

Prerequisites: Your cluster must be EGO-enabled.

To reschedule the deletion of old data, change the time in which the data purger deletes the old data.

  1. Edit purger_service.xml in the EGO service directory.
  2. Navigate to <ego:Command> with the -t parameter in the data purger script.
  3. Change the -t parameter in the data purger script to the new time (-t new_time).

    You can change the data purger schedule to a specific daily time, or to regular time intervals, in minutes, from when the purger service first starts up. For example, to change the schedule of the data purger:

  4. In the command console, restart EGO on the master host to activate these changes:

    egosh ego restart master_host_name

  5. Restart the purger service.
Change the data transformer schedule

Prerequisites: Your cluster must be EGO-enabled.

To reschedule the transformation of data from the relational database to the reporting feature, change the time at which the data transformer converts job data.

  1. Edit jobdt.xml in the EGO service directory.
  2. Navigate to <ego:Command> with the -t parameter in the data transformer script.
  3. Change the -t parameter in the data transformer script to the new time (-t new_time).

    You can change the data transformer schedule to a specific daily time, a specific hourly time, or to regular time intervals, in minutes or hours, from when the jobdt service first starts up. For example, to change the schedule of the data transformer:

  4. In the command console, restart EGO on the master host to activate these changes:

    egosh ego restart master_host_name

  5. Restart the jobdt service.
Change the default record expiry time

To reduce or increase the number of records stored in the database, change the duration of time that a record is stored in the database before it is purged. This applies to all tables in the database unless you also specify the record expiry time in a particular table.

  1. Edit the purger configuration files for your data loaders.
  2. In the <TableList> tag, edit the Duration attribute to your desired time in days, up to a maximum of 31 days. For example, to have the records purged after 7 days:

    <TableList Duration="7">

    By default, the records are purged after 14 days.

  3. Restart the purger service.
Change the record expiry time per table

To reduce or increase the number of records stored in the database for a particular table, change the duration of time that a record is stored in the database per table before it is purged. The duration only applies to this particular table.

  1. Edit the purger configuration files for your data loaders.
  2. Navigate to the specific <Table> tag with the TableName attribute matching the table that you want to change. For example:

    <Table TableName="RESOURCE_METRICS" 
    TimestampColumn="TIME_STAMP" ... />

  3. Add or edit the Duration attribute with your desired time in days, up to a maximum of 31 days. For example, to have the records in this table purged after 10 days:

    <Table TableName="RESOURCE_METRICS" 
    TimestampColumn="TIME_STAMP" Duration="10" ... />

  4. Restart the purger service.
Change the frequency of data collection

To change how often the data loaders collect data, change the frequency of data collection per loader.

  1. Edit the plc configuration files for your data loaders.
  2. Navigate to the specific <DataLoader> tag with the Name attribute matching the data loader that you want to change. For example:

    <DataLoader Name="egodynamicresloader" Interval="300" ... />

  3. Add or edit the Interval attribute with your desired time in seconds. For example, to have this plug-in collect data every 200 seconds:

    <DataLoader Name="egodynamicresloader" Interval="200" ... />

  4. Restart the plc service.
Disable data collection for individual data loaders

To reduce unwanted data from being logged in the database, disable data collection for individual data loaders.

  1. Edit the plc configuration files for your data loaders.
  2. Navigate to the specific <DataLoader> tag with the Name attribute matching the data loader that you want to disable. For example:

    <DataLoader Name="egodynamicresloader" ... Enable="true" .../>

  3. Edit the Enable attribute to "false". For example, to disable data collection for this plug-in:

    <DataLoader Name="egodynamicresloader" ... Enable="false" ... />

  4. Restart the plc service.

Test the Reporting Feature

Verify that components of the reporting feature are functioning properly.

  1. Check that the reporting services are running.
  2. Check that there are no error messages in the reporting logs.
    1. View the loader controller log file.
      • UNIX: $PERF_LOGDIR/plc.log.host_name
      • Windows: %PERF_LOGDIR%\plc.log.host_name.txt
    2. Verify that there are no ERROR messages and that, in the DataLoader Statistics section, there are data loader statistics messages for the data loaders in the last hour. You need to find statistics messages for the following data loaders:
      • bldloader
      • desktopjobdataloader
      • desktopclientdataloader
      • desktopeventloader
      • lsfbhostsloader
      • lsfeventsloader
      • lsfslaloader
      • lsfresproploader
      • sharedresusageloader
      • EGO data loaders (for EGO-enabled clusters only): egoconsumerresloader, egodynamicresloader, egoeventsloader, egostaticresloader
    3. View the data purger and data loader log files and verify that there are no ERROR messages in these files. You need to view the following log files (PERF_LOGDIR is LSF_LOGDIR/perf):
      • PERF_LOGDIR/dataloader/bldloader.host_name.log
      • PERF_LOGDIR/dataloader/desktopjobdataloader.host_name.log
      • PERF_LOGDIR/dataloader/desktopclientdataloader.host_name.log
      • PERF_LOGDIR/dataloader/desktopeventloader.host_name.log
      • PERF_LOGDIR/jobdt.host_name.log
      • PERF_LOGDIR/dataloader/lsfbhostsloader.host_name.log
      • PERF_LOGDIR/dataloader/lsfeventsloader.host_name.log
      • PERF_LOGDIR/dataloader/lsfslaloader.host_name.log
      • PERF_LOGDIR/purger.host_name.log
      • PERF_LOGDIR/dataloader/lsfresproploader.host_name.log
      • PERF_LOGDIR/dataloader/sharedresusageloader.host_name.log
      • EGO data loader log files (EGO-enabled clusters only):
        PERF_LOGDIR/dataloader/egoconsumerresloader.host_name.log
        PERF_LOGDIR/dataloader/egodynamicresloader.host_name.log
        PERF_LOGDIR/dataloader/egoeventsloader.host_name.log
        PERF_LOGDIR/dataloader/egostaticresloader.host_name.log
  3. Check the report output.
    1. Produce a standard report.
    2. Verify that the standard report produces a chart or table with data for your cluster.

Postrequisites: If you were not able to verify that these components are functioning properly, identify the cause of these problems and correct them.

Disable the Reporting Feature

Prerequisites: You must have root or lsfadmin access on the master host.

  1. Disable the LSF events data logging.
    1. Define or edit the ENABLE_EVENT_STREAM parameter in the lsb.params file to disable event streaming:

      ENABLE_EVENT_STREAM = N

    2. In the command console, reconfigure the master host to activate these changes:

      badmin reconfig

  2. If your cluster is EGO-enabled, disable the EGO allocation events data logging.
    1. Define or edit the EGO_DATA_ENABLE parameter in the ego.conf file to disable data logging:

      EGO_DATA_ENABLE = N

    2. In the command console, restart EGO on the master host to activate these changes:

      egosh ego restart master_host_name

  3. Stop the reporting services: stop the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.
  4. Disable automatic startup of the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.

Move to a Production Database

Move the reporting feature to a production database.

Prerequisites: The commercial database is properly configured and running:

The Derby demo database is not supported for any production clusters. To produce regular reports for a production cluster, you must use a supported commercial database. The reporting feature supports Oracle 9i, Oracle 10g, and MySQL 5.x databases.

None of the data in the demo database will be available in the production database. Some of your custom reports may not be compatible with the production database if you used non-standard SQL code.

  1. Create a database schema for your commercial database.
  2. Stop the reporting services: stop the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.
  3. If you are using the Derby demo database, disable automatic startup of the derbydb service.
  4. If you are on UNIX, copy the Oracle JDBC driver into the PERF and GUI library directories. You need to copy the Oracle JDBC driver to the following directories:
  5. Configure your database connection.
  6. Restart the reporting services: restart the jobdt, plc, and purger services.
  7. If your cluster is EGO-enabled, restart the HPC Portal.

    note:  
    The HPC Portal will be unavailable during this step.

    1. In the command console, stop the WEBGUI service:

      egosh service stop WEBGUI

    2. Restart the WEBGUI service:

      egosh service start WEBGUI
The report data will now be loaded into the production database and the Console will use the data in this database.

Create an Oracle database schema

Prerequisites: The Oracle database is properly configured and running:

Create a MySQL database schema

Prerequisites: The MySQL database is properly configured and running:

Configure the database connection

Prerequisites: You have a user name, password, and URL to access the database.

Launch the database configuration tool to configure your database connection.

  1. If you connected to the UNIX host via telnet and are running xserver on a local host, set your display environment. Test your display by running xclock or another X-Windows application. If the application displays, your display environment is already set correctly; otherwise, you need to set your display environment.
  2. Launch the database configuration tool.
  3. In the User ID and Password fields, specify the user account name and password with which to connect to the database and to create your database tablespaces. This user account must have been defined in your database application, and must have read and write access to the database tables.
  4. In the JDBC driver field, select the driver for your commercial database.
  5. In the JDBC URL field, enter the URL for your database. This should be similar to the format given in Example URL format.
  6. In the Maximum connections field, specify the maximum allowed number of concurrent connections to the database server. This is the maximum number of users who can produce reports at the same time.
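
The exact URL format depends on your database. Typical JDBC URLs look like the following sketches, where the host name, port, and database or SID names are placeholders that you must replace with your own values:

    jdbc:oracle:thin:@db_host:1521:orcl
    jdbc:mysql://db_host:3306/report_db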

