Reporting
Reporting is a feature of Platform LSF. It allows you to look at the overall statistics of your entire cluster. You can analyze the history of hosts, resources, and workload in your cluster to get an overall picture of your cluster's performance.
Contents
- Introduction to Reporting
- Getting Started with Standard Reports
- Custom Reports
- System Description
- Reports Administration
- Test the Reporting Feature
- Disable the Reporting Feature
- Move to a Production Database
Introduction to Reporting
An efficient cluster maximizes the usage of resources while minimizing the average wait time of workload. To ensure that your cluster is running efficiently at all times, you need to analyze the activity within your cluster to see if there are any areas for improvement.
The reporting feature uses the data loader controller service, the job data transformer service, and the data purger service to collect data from the cluster and maintain this data in a relational database system. The reporting feature then retrieves the cluster data from the relational database and displays it in reports, either graphically or in tables. You can use these reports to analyze and improve the performance of your cluster, and to troubleshoot configuration problems.
You can access the reporting feature from the HPC Portal.
Standard and custom reports
Platform has provided a set of standard reports to allow you to immediately analyze your cluster without having to create any new reports. These standard reports provide the most common and useful data to analyze your cluster.
You may also create custom reports to perform advanced queries and reports beyond the data produced in the standard reports.
The database
The reporting feature optionally includes the Apache Derby database, a JDBC-based relational database system. The Derby database is a small-footprint, open source database, and is only appropriate for demo clusters. If you want to use the reporting feature to produce regular reports for a production cluster, you must use a supported commercial database.
The reporting feature supports Oracle 9i, Oracle 10g, and MySQL 5.x databases.
important:
The Apache Derby database is not supported for any production clusters.
Getting Started with Standard Reports
For your convenience, Platform has provided several standard reports for you to use. These reports allow you to keep track of some useful statistics in your cluster.
Standard reports overview
Standard reports are based on raw data stored in the relational database, and do not perform any data aggregation or calculations.
The following is a list of the standard reports that are included with the reporting feature. For further details on a report, open its full description as described in View the full description of a report.
Table 4: Standard reports
View the full description of a report
- In the Console, navigate to Reports, then Standard Reports.
- Click the name of your report to open it.
- Click Report properties.
What can I do with standard reports?
Producing reports
The reports stored in the system do not include actual data. Instead, the reports define what data to extract from the system, and how to display it graphically.
Reports need to be produced before you can see the data. When you produce a report, you query the database and extract specific data. The amount of system overhead depends on how much data is in the report.
Standard reports have configurable parameters so you can modify the report and get exactly the data that you want.
Exporting reports
Data expires from the database periodically, so producing a report at a later date may return different data, or return no output at all. After you produce a report, you can keep your results by exporting the report data as comma-separated values in a CSV file. In this way you can preserve your data outside the system and integrate it with external programs, such as a spreadsheet. You can also keep your graphical results by using your browser to save the report results as an image.
Produce a standard report
- In the Console, navigate to Reports, then Standard Reports.
- Click the name of your report to open it.
- Set the report parameters as desired. Default settings are shown, but you can modify them to suit your needs.
- Click Produce Report.
After a short time, the resulting data is displayed graphically.
When you close the report window, you lose the contents of the report unless you export it first.
Export report data
Once you produce a report, exporting is the best way to save the data for future use. You cannot produce the same report at a later date if the data has expired from the database.
- In the Console, produce and view your report.
- Click Export Report Data.
- In the browser dialog, specify the output path and name the exported file.
In the Save as type field, specify "CSV".
Custom Reports
You can create and use custom reports if the standard reports are insufficient for your needs.
What are custom reports?
While standard reports are provided for your use by Platform, custom reports are reports you create as needed to satisfy specific reporting needs at your site.
Custom reports let you define combinations of data that are not available in the standard reports. Custom report output is always displayed in tabular format.
What can I do with custom reports?
Creating reports
The easiest way to create a custom report is to copy an existing report, then customize the SQL query string as desired. To customize the SQL query string, you may need to refer to the data schema, which describes the organization of information in the relational database. The data schema for each standard report is available in the Console by opening the report and clicking Help.
Even if you cannot edit SQL, saving a report as a custom report lets you re-use the report data without having to re-input the parameters in the standard report.
- If the time period is fixed, you get the same data every time you produce the report, but the report will be empty when the data expires from the database.
- If the time period is relative, you can get data for a different time period each time you produce the report.
You can also define custom reports from a blank template and input the SQL query string directly.
When you create custom reports, you can enter a category and use it to group the reports any way you want.
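For example, a minimal custom query might simply select recent rows from one of the raw data tables. The sketch below assumes an Oracle database and borrows the RESOURCE_METRICS table and TIME_STAMP column that appear in the purger configuration examples later in this document; adjust the names and syntax to match your own data schema:
SELECT * FROM RESOURCE_METRICS WHERE TIME_STAMP > SYSDATE - 7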
Deleting reports
Unlike standard reports, custom reports can be deleted. You might prefer to rename old reports (by modifying them) instead of deleting them.
Using reports
You produce custom reports and export the data in the same way as standard reports.
Data expires from the database periodically, so producing a report at a later date may return different data, or return no output at all. After you produce a report, you can keep your results by exporting the report data as comma-separated values in a CSV file. In this way you can preserve your data outside the system and integrate it with external programs, such as a spreadsheet. You can also keep your graphical results by using your browser to save the report results as an image.
If you ever want to modify parameters of a custom report, you must edit the SQL query string directly.
Create a custom report from an existing report
This method is convenient because you can extend an existing report. Examine your current standard and custom reports and select one with similar data sources or output to the new report that you want to create.
- In the Console, select the report that you want to copy, with all the parameters configured as you wish to copy them.
- Click Copy to New Custom Report.
- Edit the report properties and query string as desired.
- In the Report properties section, you should give the new report a unique name. You can also modify the report summary, description, and category.
- In the Report query section, you can modify the SQL query directly.
To edit the SQL query, you will need to know about the data schema of the database. For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
- To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report.
This will actually produce the report, so you might want to limit your testing to a small set of data.
You can continue to edit your SQL query string and test the results of your report until you are ready to save it.
- To finish, click Create.
To access your new custom report, navigate to Reports, then Custom Reports.
Create a new custom report
Prerequisites: You must be able to construct valid query strings with Structured Query Language (SQL).
- In the Console, navigate to Reports, then Custom Reports.
- Select Global Actions > Create Custom Report.
- Define the report properties and query string as desired.
- In the Report properties section, specify the report name, summary, description, and category.
- In the Report query section, input your SQL query string.
For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
- To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report.
This will actually produce the report, so you might want to limit your testing to a small set of data.
You can continue to edit your SQL query string and test the results of your report until you are ready to save it.
- To finish, click Create.
To access your new custom report, navigate to Reports, then Custom Reports.
Modify a custom report
- In the Console, navigate to Reports, then Custom Reports.
- Click the name of your report.
- Modify the report properties and query string as desired.
- Edit the report properties and SQL query string.
For further information on the data schema, refer to Platform LSF Reports Data Schema in the Platform LSF Knowledge Center.
- To validate your SQL query string and ensure that your report delivers the appropriate results, click Produce Report.
This will actually produce the report, so you might want to limit your testing to a small set of data.
You can continue to edit your SQL query string and test the results of your report until you are ready to save it.
- To confirm your changes, click Save.
Produce a custom report
- In the Console, navigate to Reports, then Custom Reports.
- Click the name of your report to open it.
- Click Produce Report.
After a short time, the resulting data is displayed in tabular format.
When you close the report window, you will lose the contents of the report unless you export it first.
Export report data
Once you produce a report, exporting is the best way to save the data for future use. You cannot produce the same report at a later date if the data has expired from the database.
- In the Console, produce and view your report.
- Click Export Report Data.
- In the browser dialog, specify the output path and name the exported file.
In the Save as type field, specify "CSV".
Delete a custom report
- In the Console, navigate to Reports, then Custom Reports.
- Locate your report in the list.
- Select Actions > Delete Report.
System Description
The reporting feature is built on top of the Platform Enterprise Reporting Framework (PERF) architecture. This architecture defines the communication between your EGO cluster, relational database, and data sources via the PERF Loader Controller (PLC). The loader controller is the module that controls multiple loaders for data collection.
PERF architecture
The following diagram illustrates the PERF architecture as it relates to your cluster, reporting services, relational database, and data loaders.
Data loaders
The reporting feature collects cluster operation data using data loaders to load data into tables in a relational database. The data loaders connect to the database using a JDBC driver. The data loaders handle daylight savings automatically by using GMT time when collecting data.
Default data loaders
The following are lists of the data loaders and default behavior:
Table 5: LSF data loaders
Table 6: EGO data loaders
System services
The reporting feature has system services, including the Derby service if you are running a demo database. If your cluster has PERF controlled by EGO, these services are run as EGO services. Each service uses one slot on a management host.
Loader controller
The loader controller service (plc) controls the data loaders that collect data from the system and writes the data into the database.
Data purger
The data purger service (purger) maintains the size of the database by purging old records from the database. By default, the data purger purges all data that is older than 14 days, and purges data every day at 12:30am.
Job data transformer
The job data transformer service (jobdt) converts raw job data in the relational database into a format usable by the reporting feature. By default, the job data transformer converts job data every hour at thirty minutes past the hour (that is, at 12:30am, 1:30am, and so on throughout the day).
Derby database
If you are running a demo database, the Derby database (derbydb) stores the cluster data. When using a supported commercial database, the Derby database service no longer runs as an EGO service.
Reports Administration
What do I need to know?
Reports directories
The reporting feature resides in various perf subdirectories within the LSF directory structure. This document uses LSF_TOP to refer to the top-level LSF installation directory. The reporting feature directories include the following:
Table 7: LSF reporting directory environment variables in UNIX
Table 8: LSF reporting directory environment variables in Windows
Reporting services
The reporting feature uses the following services.
- Job data transformer (jobdt)
- Loader controller (plc)
- Data purger (purger)
The Derby demo database uses the derbydb service.
If your cluster has PERF controlled by EGO, these services are run as EGO services.
You need to stop and restart a service after editing its configuration files. If you are disabling the reporting feature, you need to disable automatic startup of these services, as described in Disable automatic startup of the reporting services.
Log files for these services are available in the PERF_LOGDIR directory. There are eight logging levels that determine the detail of messages recorded in the log files. In decreasing level of detail, these are ALL (all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (no messages). By default, all service log files log messages of INFO level or higher (that is, all INFO, WARN, ERROR, and FATAL messages). You can change the logging level of the plc service using the loader controller client tool as described in Dynamically change the log level of your loader controller log file, or the logging level of the other services as described in Change the log level of your log files.
Job data transformer
The job data is logged in the relational database in a raw format. At regular intervals, the job data transformer converts this data to a format usable by the reporting feature. By default, the data transformer converts the job data every hour at thirty minutes past the hour (that is, at 12:30am, 1:30am, and so on throughout the day).
To reschedule the transformation of data from the relational database to the reporting feature, you can change the data transformer schedule as described in Change the data transformer schedule.
If your cluster has PERF controlled by EGO, you can edit the jobdt.xml configuration file, but you need to restart the jobdt service and EGO on the master host after editing the file. The jobdt.xml file is located in the EGO service directory:
- UNIX: LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
Loader controller
The loader controller manages the data loaders. By default, the loader controller manages the following data loaders:
- bldloader (License Scheduler data loader)
- desktopjobdataloader (Desktop job data loader)
- desktopclientdataloader (Desktop client data loader)
- desktopeventloader (Desktop active event data loader)
- egoconsumerresloader (consumer resource data loader)
- egodynamicresloader (dynamic metric data loader)
- egoeventsloader (EGO allocation events data loader)
- egostaticresloader (static attribute data loader)
- lsfbhostsloader (bhosts data loader)
- lsfeventsloader (LSF events data loader)
- lsfslaloader (SLA data loader)
- lsfresproploader (LSF resource properties data loader)
- sharedresusageloader (share resource usage data loader)
You can view the status of the loader controller service using the loader controller client tool as described in View the status of the loader controller.
Log files for the loader controller and data loaders are available in the PERF_LOGDIR directory. There are eight logging levels that determine the detail of messages recorded in the log files. In decreasing level of detail, these are ALL (all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (no messages). By default, all service log files log messages of INFO level or higher (that is, all INFO, WARN, ERROR, and FATAL messages). You can change the logging level of the plc service using the loader controller client tool as described in Dynamically change the log level of your loader controller log file, or the logging level of the data loaders using the client tool as described in Dynamically change the log level of your data loader log files.
To balance data accuracy with computing power, you can change how often the data loaders collect data by changing the frequency of data collection per loader, as described in Change the frequency of data collection. To reduce the amount of unwanted data logged in the database, you can also disable individual data loaders from collecting data, as described in Disable data collection for individual data loaders.
If you edit any plc configuration files, you need to restart the plc service.
If your cluster has PERF controlled by EGO, you can edit the plc_service.xml service configuration file, but you must restart the plc service and EGO on the master host after editing the file. The plc_service.xml file is located in the EGO service directory:
- UNIX: LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
Data purger
The relational database needs to be kept to a reasonable size to maintain optimal efficiency. The data purger manages the database size by purging old data at regular intervals. By default, the data purger purges records older than 14 days at 12:30am every day.
To reschedule the purging of old data, you can change the purger schedule, as described in Change the data purger schedule. To reduce or increase the number of records in the database, you can change the duration of time that records are stored in the database, as described in Change the default record expiry time. If there are specific tables that contain too much or too little data, you can also change the duration of time that records are stored in each individual table within the database, as described in Change the record expiry time per table.
If you edit any purger configuration files, you need to restart the purger service.
If your cluster has PERF controlled by EGO, you can edit the purger_service.xml service configuration file, but you must restart the purger service and EGO on the master host after editing the file. The purger_service.xml file is located in the EGO service directory:
- UNIX: LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
Derby database
The Derby database uses the derbydb service. You need to restart this service manually whenever you change the Derby database settings. The Derby database is only appropriate for demo clusters. To use the reporting feature to produce regular reports for a production cluster, you must move to a production database using a supported commercial database and disable automatic startup of the derbydb service.
Event data files
The events logger stores event data in event data files. The EGO allocation event data file (for EGO-enabled clusters only) is named ego.stream by default and has a default maximum size of 10MB. The LSF event data file is named lsb.stream by default and has a default maximum size of 100MB. When a data file exceeds this size, the events logger archives the file and creates a new data file.
The events logger only maintains one archive file and overwrites the old archive with the new archive. The default archive file name is ego.stream.0 for EGO and lsb.stream.0 for LSF. The two LSF files are located in LSF_TOP/work/cluster_name/logdir/stream by default, and the two EGO files are located in LSF_TOP/work/cluster_name/ego/data by default. The event data loaders read both the data files and the archive files.
If your system logs a large number of events, you should increase the maximum file size to see more archived event data. If your disk space is insufficient for storing the four files, you should decrease the maximum file size, or change the file path to a location with sufficient storage space. Change the disk usage of your LSF event data files as described in Change the disk usage of LSF event data files or the file path as described in Change the location of the LSF event data files. Change the disk usage or file path of your EGO allocation event data files as described in Change the disk usage of EGO allocation event data files.
You can manage your event data files by editing the system configuration files. Edit ego.conf for the EGO allocation event data file configuration and lsb.params for the LSF event data file configuration.
Administering reports
Determine if your cluster is EGO-enabled and has PERF controlled by EGO
You need to determine whether your cluster is EGO-enabled and has PERF controlled by EGO in order to determine which command you use to manage the reporting services.
- In the command console, run egosh service list to see the list of EGO services.
- If you see a list of services showing that the reporting services are STARTED, your cluster is EGO-enabled and has PERF controlled by EGO. The reporting services are run as EGO services, and you use egosh service to manage the reporting services.
- If you see a list of services showing that the reporting services are not STARTED, your cluster is EGO-enabled but does not have PERF controlled by EGO. You use perfadmin to manage the reporting services.
- If you get an error running egosh service list, your cluster is not EGO-enabled and therefore does not have PERF controlled by EGO. You use perfadmin to manage the reporting services.
Stop or restart reporting services (PERF controlled by EGO)
Prerequisites: Your cluster must have PERF controlled by EGO.
Stop or restart the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. If your cluster has PERF controlled by EGO, the reporting services are run as EGO services, and you use the egosh service command to stop or restart these services.
- In the command console, stop the service by running egosh service stop.
egosh service stop service_name
- If you want to restart the service, run egosh service start.
egosh service start service_name
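For example, to stop and then restart the loader controller service:
egosh service stop plc
egosh service start plc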
Stop or restart reporting services (PERF not controlled by EGO)
Prerequisites: Your cluster must have PERF not controlled by EGO.
Stop or restart the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. If your cluster does not have PERF controlled by EGO, you use the perfadmin command to stop or restart these services.
- In the command console, stop the service by running perfadmin stop.
perfadmin stop service_name
- If you want to restart the service, run perfadmin start.
perfadmin start service_name
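For example, to stop and then restart the data purger service:
perfadmin stop purger
perfadmin start purger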
Disable automatic startup of the reporting services
Prerequisites: Your cluster must be EGO-enabled.
When disabling the reporting feature, disable automatic startup of the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services. When moving from the Derby demo database to a production database, disable automatic startup of the derbydb service.
Disable automatic startup of these services by editing their service configuration files (jobdt.xml, plc_service.xml, purger_service.xml, and derby_service.xml for the jobdt, plc, purger, and derbydb services, respectively).
- In the command console, open the EGO service directory.
- UNIX: cd LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: cd LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
- Edit the service configuration file and change the service type from automatic to manual.
In the <sc:StartType> tag, change the text from AUTOMATIC to MANUAL (a snippet of the edited tag is shown after these steps).
- Stop the service.
- In the command console, restart EGO on the master host to activate these changes.
egosh ego restart master_host_name
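For example, after the change, the start type element in the service configuration file should read as follows (the rest of the service definition is unchanged and omitted here):
<sc:StartType>MANUAL</sc:StartType>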
View the status of the loader controller
Use the loader controller client tool to view the status of the loader controller.
- Launch the loader controller client tool with the -s option.
- In UNIX, run $PERF_TOP/version/bin/plcclient.sh -s.
- In Windows, run %PERF_TOP%\version\bin\plcclient -s.
Dynamically change the log level of your loader controller log file
Use the loader controller client tool to dynamically change the log level of your plc log file if it does not cover enough detail, or covers too much, to suit your needs.
If you restart the plc service, the log level of your plc log file will be set back to the default level. To retain your new log level, change the level of your plc log file as described in Change the log level of your log files.
- Launch the loader controller client tool with the -l option.
- In UNIX, run $PERF_TOP/bin/plcclient.sh -l log_level.
- In Windows, run %PERF_TOP%\bin\plcclient -l log_level.
In decreasing level of detail, the log levels are ALL (for all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (for no messages).
Dynamically change the log level of your data loader log files
Use the loader controller client tool to dynamically change the log level of your individual data loader log files if they do not cover enough detail, or cover too much, to suit your needs.
If you restart the plc service, the log level of your data loader log files will be set back to the default level. To retain your new log level, change the level of your data loader log files as described in Change the log level of your log files.
- If you are using the default configuration file, launch the loader controller client tool with the -n and -l options.
- In UNIX, run $PERF_TOP/version/bin/plcclient.sh -n data_loader_name -l log_level.
- In Windows, run %PERF_TOP%\version\bin\plcclient -n data_loader_name -l log_level.
Refer to Loader controller for a list of the data loader names.
In decreasing level of detail, the log levels are ALL (for all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (for no messages).
Change the log level of your log files
Change the log level of your log files if they do not cover enough detail, or cover too much, to suit your needs.
- Edit the log4j.properties file, located in the reports configuration directory (PERF_CONFDIR).
- Navigate to the section representing the service you want to change, or to the default loader configuration if you want to change the log level of the data loaders, and look for the log4j.logger.com.platform.perf. variable.
For example, to change the log level of the data purger log files, navigate to the following section, which is set to the default INFO level:
# Data purger ("purger") configuration
log4j.logger.com.platform.perf.purger=INFO, com.platform.perf.purger
- Change the log4j.logger.com.platform.perf. variable to the new logging level.
In decreasing level of detail, the valid values are ALL (for all messages), TRACE, DEBUG, INFO, WARN, ERROR, FATAL, and OFF (for no messages). The services or data loaders only log messages of the same or lower level of detail as specified by the log4j.logger.com.platform.perf. variable. Therefore, if you change the log level to ERROR, the service or data loaders will only log ERROR and FATAL messages.
For example, to change the data purger log files to the ERROR log level:
# Data purger ("purger") configuration
log4j.logger.com.platform.perf.purger=ERROR, com.platform.perf.purger
- Restart the service that you changed (or the plc service if you changed the data loader log level).
Change the disk usage of LSF event data files
If your system logs a large number of events and you have sufficient disk space, increase the disk space allocated to the LSF event data files.
- Edit lsb.params and specify or change the MAX_EVENT_STREAM_SIZE parameter (see the example after these steps).
MAX_EVENT_STREAM_SIZE = integer
If unspecified, this is 1024 by default. Change this to the new desired file size in MB.
The recommended size is 2000 MB.
- In the command console, reconfigure the master host to activate this change.
badmin reconfig
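For example, to use the recommended size, the lsb.params entry would be:
MAX_EVENT_STREAM_SIZE = 2000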
Change the location of the LSF event data files
If your system logs a large number of events and you do not have enough disk space, move the LSF event data files to another location.
- Edit lsb.params and specify or change the EVENT_STREAM_FILE parameter.
EVENT_STREAM_FILE = file_path
If unspecified, this is LSF_TOP/work/cluster_name/logdir/stream/lsb.stream by default.
- In the command console, reconfigure the master host to activate this change.
badmin reconfig
- Restart the plc service on the master host to activate this change.
Change the disk usage of EGO allocation event data files
Prerequisites: Your cluster must be EGO-enabled.
If your system logs a large number of events, increase the disk space allocated to the EGO allocation event data files. If your disk space is insufficient, decrease the space allocated to the EGO allocation event data files or move these files to another location.
- Edit ego.conf (see the example after these steps).
- To change the size of each EGO allocation event data file, specify or change the EGO_DATA_MAXSIZE parameter.
EGO_DATA_MAXSIZE = integer
If unspecified, this is 10 by default. Change this to the new desired file size in MB.
- To move the files to another location, specify or change the EGO_DATA_FILE parameter.
EGO_DATA_FILE = file_path
If unspecified, this is LSF_TOP/work/cluster_name/ego/data/ego.stream by default.
- In the command console, restart EGO on the master host to activate this change.
egosh ego restart master_host_name
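For example, an ego.conf snippet that doubles the default file size and moves the files to a shared location (the 20 MB value and the path are illustrative only):
# illustrative values; adjust to your environment
EGO_DATA_MAXSIZE = 20
EGO_DATA_FILE = /shared/ego/data/ego.stream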
Change the data purger schedule
Prerequisites: Your cluster must be EGO-enabled.
To reschedule the deletion of old data, change the time in which the data purger deletes the old data.
- Edit purger_service.xml in the EGO service directory.
- UNIX: LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
- Navigate to <ego:Command> with the -t parameter in the purger script.
- In UNIX, this is <ego:Command> ...purger.sh -t ...
- In Windows, this is <ego:Command> ...purger.bat -t ...
By default, the data purger is scheduled to delete old data at 12:30am every day.
- Change the -t parameter in the data purger script to the new time (-t new_time).
You can change the data purger schedule to a specific daily time, or to regular time intervals from when the purger service first starts up.
For example, to change the schedule of the data purger:
- To delete old data at 11:15pm every day:
<ego:Command> ...purger... -t 23:15
- To delete old data every 12 hours from when the purger service first starts up:
<ego:Command> ...purger... -t *[12]
- In the command console, restart EGO on the master host to activate these changes.
egosh ego restart master_host_name
- Restart the purger service.
Change the data transformer schedule
Prerequisites: Your cluster must be EGO-enabled.
To reschedule the transformation of data from the relational database to the reporting feature, change the time at which the data transformer converts job data.
- Edit jobdt.xml in the EGO service directory.
- UNIX: LSF_CONFDIR/ego/cluster_name/eservice/esc/conf/services
- Windows: LSF_CONFDIR\ego\cluster_name\eservice\esc\conf\services
- Navigate to <ego:Command> with the -t parameter in the jobdt script.
- In UNIX, this is <ego:Command> ...jobdt.sh -t ...
- In Windows, this is <ego:Command> ...jobdt.bat -t ...
By default, the data transformer converts the job data every hour at thirty minutes past the hour (that is, at 12:30am, 1:30am, and so on throughout the day).
- Change the -t parameter in the data transformer script to the new time (-t new_time).
You can change the data transformer schedule to a specific daily time, a specific hourly time, or to regular time intervals, in minutes or hours, from when the jobdt service first starts up.
For example, to change the schedule of the data transformer:
- To convert job data at 10:20pm every day:
<ego:Command> ...jobdt... -t 22:20
- To convert job data at the 25th minute of every hour:
<ego:Command> ...jobdt... -t *:25
- To convert job data every fifteen minutes from when the jobdt service first starts up:
<ego:Command> ...jobdt... -t *:*[15]
- To convert job data every two hours from when the jobdt service first starts up:
<ego:Command> ...jobdt... -t *[2]
- In the command console, restart EGO on the master host to activate these changes.
egosh ego restart master_host_name
- Restart the jobdt service.
Change the default record expiry time
To reduce or increase the number of records stored in the database, change the duration of time that a record is stored in the database before it is purged. This applies to all tables in the database unless you also specify the record expiry time in a particular table.
- Edit the purger configuration files for your data loaders.
- For EGO data loaders, edit purger_ego_rawdata.xml.
- For LSF data loaders, edit purger_lsf_basic_rawdata.xml.
The purger configuration files are located in the purger subdirectory of the reports configuration directory:
- UNIX: $PERF_CONFDIR/purger
- Windows: %PERF_CONFDIR%\purger
- In the <TableList> tag, edit the Duration attribute to your desired time in days, up to a maximum of 31 days.
For example, to have the records purged after 7 days:
<TableList Duration="7">
By default, the records are purged after 14 days.
- Restart the purger service.
Change the record expiry time per table
To reduce or increase the number of records stored in the database for a particular table, change the duration of time that a record is stored in the database per table before it is purged. The duration only applies to this particular table.
- Edit the purger configuration files for your data loaders.
- For EGO data loaders, edit purger_ego_rawdata.xml.
- For LSF data loaders, edit purger_lsf_basic_rawdata.xml.
The purger configuration files are located in the purger subdirectory of the reports configuration directory:
- UNIX: $PERF_CONFDIR/purger
- Windows: %PERF_CONFDIR%\purger
- Navigate to the specific <Table> tag with the TableName attribute matching the table that you want to change.
For example:
<Table TableName="RESOURCE_METRICS" TimestampColumn="TIME_STAMP" ... />
- Add or edit the Duration attribute with your desired time in days, up to a maximum of 31 days.
For example, to have the records in this table purged after 10 days:
<Table TableName="RESOURCE_METRICS" TimestampColumn="TIME_STAMP" Duration="10" ... />
- Restart the purger service.
Change the frequency of data collection
To change how often the data loaders collect data, change the frequency of data collection per loader.
- Edit the plc configuration files for your data loaders.
- For EGO data loaders, edit plc_ego_rawdata.xml.
- For LSF data loaders, edit plc_lsf_basic_rawdata.xml.
The plc configuration files are located in the plc subdirectory of the reports configuration directory:
- UNIX: $PERF_CONFDIR/plc
- Windows: %PERF_CONFDIR%\plc
- Navigate to the specific <DataLoader> tag with the Name attribute matching the data loader that you want to change.
For example:
<DataLoader Name="egodynamicresloader" Interval="300" ... />
- Add or edit the Interval attribute with your desired time in seconds.
For example, to have this plug-in collect data every 200 seconds:
<DataLoader Name="egodynamicresloader" Interval="200" ... />
- Restart the plc service.
Disable data collection for individual data loaders
To reduce unwanted data from being logged in the database, disable data collection for individual data loaders.
- Edit the plc configuration files for your data loaders.
- For EGO data loaders, edit plc_ego_rawdata.xml.
- For LSF data loaders, edit plc_lsf_basic_rawdata.xml.
The plc configuration files are located in the plc subdirectory of the reports configuration directory:
- UNIX: $PERF_CONFDIR/plc
- Windows: %PERF_CONFDIR%\plc
- Navigate to the specific <DataLoader> tag with the Name attribute matching the data loader that you want to disable.
For example:
<DataLoader Name="egodynamicresloader" ... Enable="true" .../>
- Edit the Enable attribute to "false".
For example, to disable data collection for this plug-in:
<DataLoader Name="egodynamicresloader" ... Enable="false" ... />
- Restart the plc service.
Test the Reporting Feature
Verify that components of the reporting feature are functioning properly.
- Check that the reporting services are running.
- If your cluster has PERF controlled by EGO, run egosh service list.
- If your cluster has PERF not controlled by EGO, run perfadmin list.
- Check that there are no error messages in the reporting logs.
- View the loader controller log file.
- UNIX: $PERF_LOGDIR/plc.log.host_name
- Windows: %PERF_LOGDIR%/plc.log.host_name.txt
- Verify that there are no ERROR messages and that, in the DataLoader Statistics section, there are data loader statistics messages for the data loaders in the last hour.
You need to find statistics messages for the following data loaders:
- bldloader
- desktopjobdataloader
- desktopclientdataloader
- desktopeventloader
- lsfbhostsloader
- lsfeventsloader
- lsfslaloader
- lsfresproploader
- sharedresusageloader
- EGO data loaders (for EGO-enabled clusters only):
- egoconsumerresloader
- egodynamicresloader
- egoeventsloader
- egostaticresloader
- View the data purger and data loader log files and verify that there are no ERROR messages in these files.
You need to view the following log files (PERF_LOGDIR is LSF_LOGDIR/perf):
- PERF_LOGDIR/dataloader/bldloader.host_name.log
- PERF_LOGDIR/dataloader/desktopjobdataloader.host_name.log
- PERF_LOGDIR/dataloader/desktopclientdataloader.host_name.log
- PERF_LOGDIR/dataloader/desktopeventloader.host_name.log
- PERF_LOGDIR/jobdt.host_name.log
- PERF_LOGDIR/dataloader/lsfbhostsloader.host_name.log
- PERF_LOGDIR/dataloader/lsfeventsloader.host_name.log
- PERF_LOGDIR/dataloader/lsfslaloader.host_name.log
- PERF_LOGDIR/purger.host_name.log
- PERF_LOGDIR/dataloader/lsfresproploader.host_name.log
- PERF_LOGDIR/dataloader/sharedresusageloader.host_name.log
- EGO data loader log files (EGO-enabled clusters only):
- PERF_LOGDIR/dataloader/egoconsumerresloader.host_name.log
- PERF_LOGDIR/dataloader/egodynamicresloader.host_name.log
- PERF_LOGDIR/dataloader/egoeventsloader.host_name.log
- PERF_LOGDIR/dataloader/egostaticresloader.host_name.log
- Check the report output.
- Produce a standard report.
- Verify that the standard report produces a chart or table with data for your cluster.
Postrequisites: If you were not able to verify that these components are functioning properly, identify the cause of these problems and correct them.
Disable the Reporting Feature
Prerequisites: You must have root or lsfadmin access on the master host.
- Disable the LSF events data logging.
- Define or edit the ENABLE_EVENT_STREAM parameter in the lsb.params file to disable event streaming.
ENABLE_EVENT_STREAM = N
- In the command console, reconfigure the master host to activate these changes.
badmin reconfig
- If your cluster is EGO-enabled, disable the EGO allocation events data logging.
- Define or edit the EGO_DATA_ENABLE parameter in the ego.conf file to disable data logging.
EGO_DATA_ENABLE = N
- In the command console, restart EGO on the master host to activate these changes.
egosh ego restart master_host_name
- Stop the reporting services.
Stop the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.
- Disable automatic startup of the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.
Move to a Production Database
Move the reporting feature to a production database.
Prerequisites: The commercial database is properly configured and running:
- You have a user name, password, and URL to access the database.
- There is appropriate space in the database allocated for the reporting feature.
The Derby demo database is not supported for any production clusters. To produce regular reports for a production cluster, you must use a supported commercial database. The reporting feature supports Oracle 9i, Oracle 10g, and MySQL 5.x databases.
None of the data in the demo database will be available in the production database. Some of your custom reports may not be compatible with the production database if you used non-standard SQL code.
- Create a database schema for your commercial database.
- If you are using an Oracle database, create a database schema as described in Create an Oracle database schema.
- If you are using a MySQL database, create a database schema as described in Create a MySQL database schema.
- Stop the reporting services.
Stop the derbydb (if you are using the Derby demo database), jobdt, plc, and purger services.
- If you are using the Derby demo database, disable automatic startup of the derbydb service.
- If you are in UNIX, copy the Oracle JDBC driver into the PERF and GUI library directories.
You need to copy the Oracle JDBC driver to the following directories:
- $PERF_TOP/version/lib
- LSF_TOP/gui/version/tomcat/common/lib
- Configure your database connection.
- Restart the reporting services.
Restart the jobdt, plc, and purger services.
- If your cluster is EGO-enabled, restart the HPC Portal.
note:
The HPC Portal will be unavailable during this step.
- In the command console, stop the WEBGUI service.
egosh service stop WEBGUI
- Restart the WEBGUI service.
egosh service start WEBGUI
The report data will now be loaded into the production database and the Console will use the data in this database.
Create an Oracle database schema
Prerequisites: The Oracle database is properly configured and running:
- You have a user name, password, and URL to access the database.
- You installed the latest JDBC driver (ojdbc14.jar or newer) for the Oracle database. This driver is available from the following URL:
- http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/index.html
- In the command console, open the EGO database schema directory.
- UNIX: cd $PERF_TOP/ego/version/DBschema/Oracle
- Windows: cd %PERF_TOP%\ego\version\DBschema\Oracle
- Run the script to create the EGO database schema.
sqlplus user_name/password@connect_string @egodata.sql data_tablespace index_tablespace
where
- user_name is the user name on the database.
- password is the password for this user name on the database.
- connect_string is the named SQLNet connection for this database.
- data_tablespace is the name of the tablespace where you intend to store the table schema.
- index_tablespace is the name of the tablespace where you intend to store the index.
- In the command console, open the LSF database schema directory.
- UNIX: cd $PERF_TOP/lsf/version/DBschema/Oracle
- Windows: cd %PERF_TOP%\lsf\version\DBschema\Oracle
- Run the script to create the LSF database schema.
sqlplus user_name/password@connect_string @lsfdata.sql data_tablespace index_tablespace
where
- user_name is the user name on the database.
- password is the password for this user name on the database.
- connect_string is the named SQLNet connection for this database.
- data_tablespace is the name of the tablespace where you intend to store the table schema.
- index_tablespace is the name of the tablespace where you intend to store the index.
- Run the scripts to create a database schema for other installed packages. For example, to create a database schema for LSF desktop reports:
sqlplus user_name/password@connect_string @lsfdesktopreport.sql data_tablespace index_tablespace
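For example, an invocation with illustrative values (the lsfrpt user, reportdb connection, and USERS and INDX tablespaces are placeholders for your own):
sqlplus lsfrpt/mypassword@reportdb @egodata.sql USERS INDX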
Create a MySQL database schema
Prerequisites: The MySQL database is properly configured and running:
- You have a user name, password, and URL to access the database.
- You installed the latest JDBC driver (mysql-connector-java-3.1.12-bin.jar or newer) for the MySQL database. This driver is available from the following URL:
- http://dev.mysql.com/downloads/
- In the command console, open the EGO database schema directory.
- UNIX: cd $PERF_TOP/ego/version/DBschema/MySQL
- Windows: cd %PERF_TOP%\ego\version\DBschema\MySQL
- Run the scripts to create the EGO database schema.
mysql --user=user_name --password=password --database=report_database < egodata.sql
where
- user_name is the user name on the database.
- password is the password for this user name on the database.
- report_database is the name of the database to store the report data.
- In the command console, open the LSF database schema directory.
- UNIX: cd $PERF_TOP/lsf/version/DBschema/MySQL
- Windows: cd %PERF_TOP%\lsf\version\DBschema\MySQL
- Run the scripts to create the LSF database schema.
mysql --user=user_name --password=password --database=report_database < lsfdata.sql
where
- user_name is the user name on the database.
- password is the password for this user name on the database.
- report_database is the name of the database to store the report data.
- Run the scripts to create a database schema for other installed packages. For example, to create a database schema for LSF desktop reports:
mysql --user=user_name --password=password --database=report_database < lsfdesktopreport.sql
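For example, an invocation with illustrative values (the lsfrpt user and report_db database are placeholders for your own):
mysql --user=lsfrpt --password=mypassword --database=report_db < egodata.sql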
Configure the database connection
Prerequisites: You have a user name, password, and URL to access the database.
Launch the database configuration tool to configure your database connection.
- If you connected to the UNIX host via telnet and are running an X server on a local host, set your display environment.
Test your display by running xclock or another X-Windows application.
If the application displays, your display environment is already set correctly; otherwise, you need to set your display environment.
- For csh or tcsh:
setenv DISPLAY hostname:0.0
- For sh, ksh, or bash:
DISPLAY=hostname:0.0
export DISPLAY
where hostname is your local host.
- Launch the database configuration tool.
- In UNIX, run $PERF_TOP/version/bin/dbconfig.sh.
- In Windows, run %PERF_TOP%\version\bin\dbconfig.
- In the User ID and Password fields, specify the user account name and password with which to connect to the database and to create your database tablespaces.
This user account must have been defined in your database application, and must have read and write access to the database tables.
- In the JDBC driver field, select the driver for your commercial database.
- In the JDBC URL field, enter the URL for your database.
This should be similar to the format given in Example URL format (illustrative URL formats are shown after these steps).
- In the Maximum connections field, specify the maximum allowed number of concurrent connections to the database server.
This is the maximum number of users who can produce reports at the same time.
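As a rough illustration of the Example URL format, typical JDBC URLs look like the following, where db_host, the port numbers, the orcl SID, and the report_db database name are placeholders for your own values:
- Oracle: jdbc:oracle:thin:@db_host:1521:orcl
- MySQL: jdbc:mysql://db_host:3306/report_db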