This chapter helps you estimate disk and memory requirements and tells you about database settings, including how to change settings. This chapter contains the following sections:
Note: If you are migrating from an earlier version of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for guidance in estimating space requirements.
For information about specifying and changing Hyperion Essbase kernel settings, see Specifying Hyperion Essbase Kernel Settings.
This topic describes the units of storage that you need to know about in order to size a database. The discussion assumes that you are familiar with basic concepts of the Hyperion Essbase kernel. See Introducing the Hyperion Essbase Kernel to learn about the Hyperion Essbase kernel.
A Hyperion Essbase database consists of many different components. In addition to an outline file and a data file, Hyperion Essbase uses various files and memory structures to manage data storage, calculation, and retrieval operations. Table 15-1 describes the major components that you must consider when you estimate the disk and memory requirements of a database.
When you load data to a database, Hyperion Essbase creates and populates index and data files on the disk. When you access data, Hyperion Essbase executes an index search and loads index information and data into memory buffers called caches.
Hyperion Essbase loads index information from the disk into the index cache on demand. The index information points to data, which Hyperion Essbase retrieves from data files into the data file cache.
The calculator cache is an optional cache used for calculations. For information on the calculator cache, see Setting Memory Cache Sizes.
Hyperion Essbase requires disk space for basic server software and for each database. Each database comprises many files including the data files, the index files, the overhead files and the outline file. The following subtopics enable you to estimate the size of each of these database components. After you determine each individual component size, see Estimating Disk Space for a Database, to approximate the amount of disk space needed for an entire database.
This discussion begins with guidelines for calculating adequate disk space for a single database. To estimate overall disk space requirements, see Total Disk Requirement.
The total amount of disk space required for a database depends on the following factors:
Most multidimensional applications tend to be sparse, and the amount of space required is difficult to determine precisely. The following guidelines, in conjunction with information presented in the Database Information dialog box of Hyperion Essbase Application Manager, provide a rough estimate of disk space requirements.
Note: The database sizing calculations that follow assume an ideal scenario with an optimum database design and unlimited disk space.
The actual size of a data block depends on the amount of data that exists where dimensions intersect.
For example, the Sample Basic database contains three dense dimensions with the following numbers of stored members: 12 in Year, 8 in Measures, and 2 in Scenario.
Thus, the potential population of a data block in Sample Basic is:
12 x 8 x 2 = 192 data cells
192 data cells x 8 bytes = 1,536 bytes
The use of compression (whether it is used and, if it is used, what type is used) affects the actual disk space consumed by a data file. For more information about data compression, see Data Compression.
The following formula provides a rough estimate of compressed block size. It assumes that bitmap compression is enabled.
Compressed block size = Expanded block size x Block density
For example, assume an expanded block size of 1,536 bytes and a block density of 25%:
(1,536 bytes x .25) = 384 bytes
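The block-size arithmetic above can be sketched as follows. This is a minimal illustration; the helper names are mine, not an Essbase API, and the member counts are the Sample Basic example values from the text.

```python
def expanded_block_size(dense_stored_members):
    """Number of cells in a block (product of stored members of all
    dense dimensions) times 8 bytes per cell."""
    cells = 1
    for n in dense_stored_members:
        cells *= n
    return cells * 8  # each data cell holds an 8-byte value

def compressed_block_size(expanded_size, block_density):
    """Rough estimate assuming bitmap compression; block_density is a
    fraction between 0 and 1."""
    return expanded_size * block_density

expanded = expanded_block_size([12, 8, 2])        # Year, Measures, Scenario
compressed = compressed_block_size(expanded, 0.25)
print(expanded, compressed)                        # 1536 384.0
```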
To determine block density, you can load the database and in Hyperion Essbase Application Manager, select Database > Information, select the Statistics tab, and look at the Block Density label. If you want to estimate block density prior to loading data, estimate the ratio of existing data values to potential data values.
Note: Actual block density varies widely from block to block. The calculations in this discussion are only for estimation purposes.
To estimate the number of data blocks, you must consider the potential number of data blocks and the density of the database.
When you determine the potential number of data blocks, you assume that data values exist for all combinations of stored members; for example, for Sample Basic, you assume that all products are sold in all markets. Stored members do not include attribute dimensions and their members, nor members tagged as label only, shared, or Dynamic Calc.
To determine the potential, or maximum, number of blocks, multiply the number of stored members in each sparse dimension together.
For example, the Sample Basic database contains two sparse dimensions with the following numbers of stored members: 19 in Product and 25 in Market.
Thus, the Sample Basic database has:
19 x 25 = 475 potential data blocks.
Product and Market are sparse dimensions because not every product is sold in every market. Therefore, the actual number of blocks is far less than the potential number of data blocks.
If the database is already loaded, you can see the existing number of blocks and the potential number of blocks on the Statistics page of the Database Information dialog box of Hyperion Essbase Application Manager. Instead of estimating a number, you can use the existing number of data blocks in later calculations.
It is very difficult to determine the size of a database without loading it. If you must determine the size before loading the database, estimate the percentage of cells that contain data values and multiply that percentage by the potential number of data blocks. The percentage varies with the sparsity or density of the data blocks. This guideline provides only a rough estimate.
The following examples assume 100,000,000 potential data blocks:
.05 (estimated percentage) x 100,000,000 (potential data blocks) = 5,000,000 data blocks
.15 (estimated percentage) x 100,000,000 (potential data blocks) = 15,000,000 data blocks
.50 (estimated percentage) x 100,000,000 (potential data blocks) = 50,000,000 data blocks
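A sketch of the block-count estimates above (helper names are mine; the density-based count is only as good as the estimated percentage):

```python
def potential_blocks(sparse_stored_members):
    """Multiply together the stored member counts of all sparse dimensions."""
    total = 1
    for n in sparse_stored_members:
        total *= n
    return total

def estimated_blocks(potential, density):
    """density: estimated fraction of potential blocks that actually exist."""
    return round(potential * density)

sample_basic_potential = potential_blocks([19, 25])      # 475 blocks
large_db_estimate = estimated_blocks(100_000_000, 0.15)  # 15,000,000 blocks
```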
To estimate the space required to store the data files (ESSxxxxx.PAG), multiply the compressed block size by the number of blocks. For this example and the remaining examples, use the number of data blocks from the previous example calculated at a density of 15%.
384 (compressed block size) x 15,000,000 (number of data blocks) = 5,760,000,000 bytes
Note: If compression is not used, substitute the expanded block size for the compressed block size in this formula.
The following formula helps you estimate the total size of the index, including all index files. The formula provides only a rough estimation.
Number of blocks x 112 bytes
This formula is based on the size of an index entry, which is 112 bytes.
For example, assuming a database with 15,000,000 blocks, you should allow about 1,680,000,000 bytes for the entire Hyperion Essbase index:
15,000,000 blocks x 112 bytes = 1,680,000,000 bytes
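The data-file and index estimates can be combined in one sketch (function names are mine; the constants are the example values from the text: 384-byte compressed blocks and 112-byte index entries):

```python
COMPRESSED_BLOCK_BYTES = 384   # example compressed block size
INDEX_ENTRY_BYTES = 112        # size of one index entry

def data_file_bytes(num_blocks, compressed_block=COMPRESSED_BLOCK_BYTES):
    """Rough size of all ESS*.PAG files."""
    return num_blocks * compressed_block

def index_bytes(num_blocks, entry=INDEX_ENTRY_BYTES):
    """Rough size of the entire index, across all index files."""
    return num_blocks * entry

blocks = 15_000_000
print(data_file_bytes(blocks))   # 5760000000
print(index_bytes(blocks))       # 1680000000
```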
If the database is loaded, select Database > Information and look at the Files tab to determine the actual size of the index.
The following subtopics show you how to calculate fixed-size overhead. Use one of two methods of calculation: one for databases that use bitmap compression, and one for databases that use RLE or no compression.
Begin the calculation of fixed-size overhead for a database that uses bitmap compression by using this formula:
((Expanded block size in bytes / 64) + 72) x Number of blocks
The formula may be explained as follows: dividing the expanded block size by 64 gives the size of the block's bitmap in bytes (the bitmap uses one bit for each 8-byte cell), and 72 bytes is the fixed overhead for each block.
Then, round the value produced by the formula up to the nearest multiple of eight.
For example, assume the expanded block size is 4,802 bytes and that there are 15,000,000 blocks.
(4,802 / 64) + 72 = 147.03 bytes

Rounded up to the nearest multiple of eight, the fixed-size overhead is 152 bytes per block.
Finally, calculate the fixed-size overhead for the entire database by multiplying the overhead per block by the number of blocks:
Fixed-size overhead using bitmap compression =

152 bytes x 15,000,000 blocks = 2,280,000,000 bytes
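The overhead calculation, including the round-up-to-eight step, can be sketched as (helper name is mine, not an Essbase API):

```python
import math

def bitmap_overhead_per_block(expanded_block_bytes):
    """(expanded block size / 64) + 72 bytes, rounded up to the
    nearest multiple of eight."""
    raw = expanded_block_bytes / 64 + 72
    return math.ceil(raw / 8) * 8

per_block = bitmap_overhead_per_block(4802)   # 152 bytes
total = per_block * 15_000_000                # 2280000000 bytes
```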
To calculate the fixed-size overhead for a database that uses RLE or no compression, use 72 bytes as the total amount of fixed overhead per block.
To calculate the fixed-size overhead for the entire database, multiply the overhead per block by the number of blocks.
Assuming 15,000,000 blocks, fixed-size overhead using RLE or no compression =

72 bytes x 15,000,000 blocks = 1,080,000,000 bytes
If you are using bitmap or RLE compression, a certain amount of fragmentation occurs. The amount of fragmentation is based on individual database and operating system configurations and cannot be precisely predicted.
As a rough estimate, calculate 20% of the compressed database size. For example, a compressed database size of 5,760,000,000 bytes produces the following calculation:

5,760,000,000 bytes (database size) x .2 = 1,152,000,000 bytes
For recovery purposes, Hyperion Essbase maintains a data recovery area on the disk. The size of this area increases until the database is restructured. To calculate the size of the data recovery area, multiply by 2 the sum of the sizes of the index, the overhead, the compressed data blocks, and the fragmentation allowance.
(Index size + Fixed size overhead + Size of compressed data blocks + Fragmentation allowance) x 2
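The recovery-area formula can be sketched as follows, carrying over the values from the running 15,000,000-block example (the function name is mine):

```python
def recovery_area_bytes(index_size, fixed_overhead,
                        compressed_data, fragmentation):
    """Twice the sum of the index, overhead, compressed data, and
    fragmentation allowance, per the formula above."""
    return 2 * (index_size + fixed_overhead + compressed_data + fragmentation)

size = recovery_area_bytes(1_680_000_000,   # index
                           2_280_000_000,   # fixed-size overhead
                           5_760_000_000,   # compressed data blocks
                           1_152_000_000)   # fragmentation allowance
print(size)                                  # 21744000000 bytes
```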
Factors that affect the size of an outline include the following:
First, estimate the main area of the outline by multiplying the number of members by an estimated number of bytes between 350 and 450. If the database includes few aliases or very short aliases and short member names, use a smaller number within the range. If you know that the names or aliases are very long, use a larger number within the range. (The maximum size for a member name and for an alias is 80 characters.) The following example uses a number in the middle of the range and assumes the outline has 26,000 members. This formula provides only a rough estimate.
400 bytes x 26,000 members = 10,400,000 bytes
Then, if the outline includes attribute dimensions, you must also calculate an area that stores attribute association information. Calculate the size of this area for each base dimension and add the sum of these areas to the basic outline size you already estimated.
To calculate the attribute area in an outline for a base dimension in bytes, multiply the number of members of the base dimension by the sum of the number of members of all attribute dimensions associated with the base dimension, and then divide by 8.
For example, assume that the base dimension Product (with 23,000 members) has two attribute dimensions associated with it: Ounces (with 20 members) and Pkg Type (with 50 members). The calculation for the attribute area in the outline for the Product dimension is:
(23,000 x (20 + 50)) bits / 8 bits/byte = 201,250 bytes
Assume another base dimension, Market (with 2,500 members), is associated with the 12-member attribute dimension, Population. The attribute area in the outline for the Market dimension is equal to (2,500 x 12) / 8, or 3,750 bytes.
The total estimated size of the outline is equal to the sum of 10,400,000 bytes + 201,250 bytes + 3,750 bytes, which is 10,605,000 bytes.
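The outline estimate above can be sketched as one function (names are mine; the per-member byte figure is the 350-450 range discussed in the text):

```python
def outline_bytes(num_members, bytes_per_member, attribute_pairs):
    """attribute_pairs: (base dimension members, total members of all
    attribute dimensions associated with that base dimension)."""
    size = num_members * bytes_per_member
    for base_members, attr_members in attribute_pairs:
        size += base_members * attr_members // 8  # one bit per association
    return size

# Product: 23,000 members with Ounces (20) + Pkg Type (50) attributes;
# Market: 2,500 members with Population (12) attributes.
total = outline_bytes(26_000, 400, [(23_000, 20 + 50), (2_500, 12)])
print(total)   # 10605000 bytes
```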
During restructuring, Hyperion Essbase uses a restructuring work area on the disk.
During migration from prior releases of Hyperion Essbase, for recovery purposes, Hyperion Essbase creates a copy of the database in a migration work area.
To create these temporary work areas, Hyperion Essbase may require disk space equal to the size of the entire database, including the outline. Because migration and restructuring do not occur at the same time, a single allocation can represent both requirements.
After you estimate the size of each database component, you are ready to calculate an estimate of disk space that you need for the database.
The process for determining the total disk space needed for a database is shown in Table 15-2. It is assumed that the database does not contain linked reporting objects and that bitmap compression is enabled.
If you are using replicated partitions, allocate enough disk space at the target database to hold double the total size of all replicated partitions, because Hyperion Essbase temporarily retains duplicate blocks before committing transactions to aid in seamless recovery.
When you use any type of partition, Hyperion Essbase stores certain data (such as connection and time stamp information) at both the source and target databases. However, the amount of storage Hyperion Essbase uses for these items is insignificant.
For information about Hyperion Essbase Partitioning, see Designing Partitioned Applications.
You can use Linked Reporting Objects (LROs) to associate objects, such as flat files, with data cells.
If the linked object associated with a cell is a flat file, Hyperion Essbase copies the file to the server. Therefore, you need to know the combined size of all flat files you are using as linked reporting objects.
You can set a limit on the size of a linked object, if the linked object is a file (as opposed to a cell note). In Hyperion Essbase Application Manager, select Application > Settings and enter the desired limit in the Max. Attachment File Size text box.
Note: You can also use alter application in MaxL or the SETAPPSTATE command in ESSCMD to limit the size of a linked object. See the online Technical Reference in the DOCS directory for information.
The Hyperion Essbase kernel stores information about linked reporting objects in a Linked Reporting Objects catalog. A catalog entry is stored as an index page. For every catalog entry, Hyperion Essbase uses 8 KB if direct I/O is used; if buffered I/O is used, Hyperion Essbase uses the index page size. For every cell note, Hyperion Essbase uses 600 bytes. The cell note is stored in the catalog.
For more information, see Linking Objects to Hyperion Essbase Data.
The previous calculations in this chapter estimate the data storage requirement for a single database. Often, more than one database resides on the server.
The total Hyperion Essbase data storage requirement on a server includes the Hyperion Essbase software and the sum of the requirements of all the databases to be stored on the server.
Allow 35 to 50 MB for the base installation of Hyperion Essbase server software and sample applications. The allowance varies by platform. For details, see the Hyperion Essbase Installation Guide.
The minimum memory requirement for running Hyperion Essbase is 64 megabytes (MB) per database. On UNIX systems, the minimum requirement is 128 MB per database. Depending on the number of databases and the database operations on the server, you may require more memory.
Hyperion Essbase uses buffers called caches to hold data and index information in memory while database operations are being performed. To optimize your database performance, Hyperion Essbase enables you to define cache settings and sizes.
This topic provides guidelines for determining cache settings and sizing your memory requirements. It is recommended that you load your databases, define the cache settings based on the guidelines that follow, and run the basic calculation and retrieval operations for each database. Depending on the amount of memory available, you may need to fine-tune the cache settings to optimize performance.
The following subtopics enable you to estimate the size of each of the individual database components that require memory resource. After you determine the individual component sizes, to approximate the total memory requirement, see Estimating Total Memory Requirements. These guidelines assume one database per application.
At startup of a database, Hyperion Essbase sets aside memory for the following components:
How much additional memory is required for normal operation of a database, after it is started, depends on the following factors:
In addition, the following database settings affect memory usage:
Cache size settings can have a significant impact on database and general server performance. For general performance information regarding cache sizes, see Introducing the Hyperion Essbase Kernel.
This topic provides information about setting up your cache sizes, with recommended starting sizes to optimize performance. The exact setting you should use for each cache also depends on data distribution and the dense/sparse configuration of the database. For definitions of index cache, index page, data file cache, and data cache, see Understanding How Hyperion Essbase Stores Data. If memory resources are restricted, you can optimize performance by adjusting these settings.
Note: If you are migrating from a previous release of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for important information on migrating existing cache settings.
Before setting cache sizes, you need to determine whether or not you want to enable cache memory locking. The cache memory locking setting locks the memory used for the index cache, data file cache, and data cache into physical memory.
If you want to give the Hyperion Essbase kernel priority usage of the physical memory, enable cache memory locking. If you enable cache memory locking, leave sufficient physical memory available for non-Hyperion Essbase kernel use. If you do not want to give the Hyperion Essbase kernel priority usage of the physical memory, disable cache memory locking. By default, cache memory locking is turned off.
For information about how to enable cache memory locking, see Enabling Cache Memory Locking.
If you are using Hyperion Essbase for the first time, cache sizes are automatically set to the default values shown in Table 15-3. If you are migrating from a previous version of Hyperion Essbase, the new data file cache is set to the default value and the other cache settings from that version are retained when you migrate. See the Hyperion Essbase Start Here booklet for migration information.
To help you estimate cache memory requirements, Table 15-3 shows default and recommended cache settings. These caches are listed in priority order from highest priority to lowest priority. For example, if memory resources are restricted, allocating adequate memory to the index cache is top priority, followed by the index page, the data file cache, and so on.
The needs for each site and even for a particular database can vary. Depending on the complexity and type of each operation, Hyperion Essbase allocates as much memory for data file cache and data cache as needed. Use the recommended values in this table to estimate enough memory for optimal performance.
| Cache-Related Settings (in priority order) | Minimum Value | Default Value | Recommended Value |
|---|---|---|---|
| Index Cache Size | 1 MB (1048576 bytes) | 10 MB (10485760 bytes) | Combined size of all ESS*.IND files, if possible; otherwise, as large as possible. Do not set this cache size higher than the total index size, because no performance improvement results. (To determine the total index size, see Estimating the Size of the Index.) |
| Index Page Size | 1 KB (1024 bytes) | | |
| Data File Cache Size | 8 MB (8388608 bytes) | 32 MB (33554432 bytes) | Combined size of all ESS*.PAG files, if possible; otherwise, as large as possible. |
| Data Cache Size | 3 MB (3145728 bytes) | 3 MB (3145728 bytes) | 0.125 * data file cache size. Allocate more if certain calculations require a larger data cache or if many concurrent users access different data blocks. |
| Cache Memory Locking | Not applicable | Disabled | To give the Hyperion Essbase kernel priority usage of system RAM, enable cache memory locking. Otherwise, disable it. |
For instructions on setting index cache, index page, data file cache, and data cache sizes and on defining cache memory locking settings, see Specifying and Changing Hyperion Essbase Kernel Settings.
An additional cache, the calculator cache, is used only during calculations. Consider the calculator cache memory requirements when you size memory requirements for database operations (see Estimating Additional Memory Requirements for Calculations).
After using the database under your installation's usual conditions, check to see how Hyperion Essbase is performing. You may need to adjust the settings.
You can check Hyperion Essbase performance in several ways. Using Diagnostics to Monitor Performance provides more information, but here is an example:
Check the Run-time page of the Database Information dialog box (select Database > Information from the Hyperion Essbase Application Manager menu). In particular, check the hit ratios:
A higher hit ratio on the index and data file caches indicates better performance; 100 is the highest possible value.
The sizes of the index cache and the data file cache are the most critical Hyperion Essbase cache settings. In general, the larger these buffers, the less swapping activity occurs; however, it does not always help performance to set cache sizes larger and larger. Read this entire section to understand the cache size considerations.
The advantages of a large index cache start to level off after a certain point. At any given time, when the index cache size equals or exceeds the index size (including all index files on all volumes), performance does not improve. However, to account for future growth of the index, you can set the index cache size larger than the current index size. See Estimating the Size of the Index for an example of estimating index size.
Because the index cache is filled with index pages, for optimum use of storage, set the size of the index cache to be a multiple of the size of the index page.
If possible, set the data file cache to equal the size of the stored data, which is the combined size of all ESS*.PAG files. Otherwise, the data file cache should be as large as possible. If you want to account for future growth of stored data, you can set the data file cache size larger than the current size of stored data.
The data cache should be about 0.125 times the data file cache. However, certain calculations require a larger data cache size. If many concurrent users are accessing different data blocks, this cache should be larger.
In general, if you have to choose between allocating memory to the data file cache or allocating it to the data cache, choose the data file cache. If you are migrating from a previous version of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for important migration information.
In addition to the memory used by the caches, Hyperion Essbase uses some additional memory while it works with the caches. To calculate these areas, use the formulas in Table 15-4.
To determine the potential stored block size, see Determining the Expanded Block Size of Stored Data.
To determine the number of server threads initially allocated to the Hyperion Essbase server process, see the SERVERTHREADS setting in the online Technical Reference in the DOCS directory.
At application startup time, Hyperion Essbase sets aside an area of memory based on the following factors:
The potential number of threads available is based on the number of licensed ports. The actual number of available threads depends on settings you define for the Agent or the server. See Multithreading.
Count all members except Label Only members, shared members, and members of attribute dimensions. For example, the number of members to use from Sample Basic would be 17 (Year) + 14 (Measures) + 4 (Scenario) + 19 (Product) + 25 (Market), which adds up to 79 members.
To determine the logical cell count, multiply together all members of each dense dimension (including Dynamic Calc members but excluding Label Only and shared members). For example, the dense dimensions in Sample Basic contain 17 (Year), 14 (Measures), and 4 (Scenario) members. The calculation for the cell count of a logical block in Sample Basic is:
17 * 14 * 4 = 952 cells
Use the following formula to calculate the data structure area in memory:
Number of threads * ((Number of members in the outline * 26 bytes) + (Logical block cell count * 36 bytes))
Using Sample Basic as an example, assuming 20 threads, the startup area in memory required for data structures would be calculated as follows:
20 threads * ((79 members * 26 bytes) + (952 cells * 36 bytes)) = 726,520 bytes
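The startup data-structure formula can be sketched as (the function name is mine; the 26-byte and 36-byte figures are from the formula above):

```python
def startup_data_area_bytes(threads, outline_members, logical_block_cells):
    """threads * ((members * 26 bytes) + (logical block cells * 36 bytes))"""
    return threads * (outline_members * 26 + logical_block_cells * 36)

# Sample Basic: 79 countable members, 17 * 14 * 4 = 952 logical cells.
area = startup_data_area_bytes(20, 79, 17 * 14 * 4)
print(area)   # 726520 bytes
```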
In addition to startup memory requirements and cache requirements, other operations such as queries and calculations require additional memory. To estimate the amount of memory you need, you must also evaluate the memory used during these operations.
Hyperion Essbase processes requests for database information (queries) from a variety of sources. For example, it processes queries from the Hyperion Essbase Spreadsheet Add-in and from Hyperion Essbase Report Writer. Hyperion Essbase uses additional memory when it retrieves the data for these queries, especially when it must perform dynamic calculations to retrieve the data. This section describes Hyperion Essbase's memory requirements for query processing.
Hyperion Essbase is a multithreaded application in which queries are assigned to threads. Threads are created when Hyperion Essbase is started. In general, once a thread is created, it exists until you shut down the Hyperion Essbase server (for more information, see Running Hyperion Essbase, Applications, and Databases).
As Hyperion Essbase processes queries, it cycles through the available threads. For example, assume 20 threads are available at startup. As each query is processed, Hyperion Essbase assigns each succeeding query to the next sequential thread. After it has assigned the 20th thread, Hyperion Essbase cycles back to the beginning, assigning the 21st query to the first thread.
While processing a query, a thread allocates some memory and then releases most of it when the query is completed. Some of the memory is released to the operating system and some of it is released to the dynamic calculator cache for the database being used. However, the thread retains a portion of the memory for possible use in processing subsequent queries. As a result, after a thread has processed its first query, the memory held by the thread is greater than it was when Hyperion Essbase first started.
The maximum amount of memory required for query processing occurs in the following situation:
In the example where 20 threads are available at startup, the maximum amount of memory used for queries occurs when at least 20 queries have been processed and the maximum number of simultaneous queries are in process.
This topic describes a method of estimating query memory requirements by observing the memory used during queries. To calculate the maximum possible use of memory for query processing, summarize the memory used by simultaneous queries plus the extra memory acquired by threads that are waiting for queries.
The following sections define variables and tell you how to determine values for the variables. You then use the variables in formulas to calculate the amount of memory needed for retrieval operations. The final formula uses the following variables:
Determining the Total Number of Threads
The potential number of threads available is based on the number of licensed ports. The actual number of available threads depends on settings you define for the Agent or the server. See Multithreading. Use the number of threads on the system as the value for Total#Threads in Determining the Maximum Memory Used for A Query.
Estimating the Maximum Number of Concurrent Queries
Based on your knowledge of your company and the requirements for each database, determine the maximum number of concurrent queries and use this value for Max#ConcQueries in Determining the Maximum Memory Used for A Query. This value cannot exceed the value for Total#Threads.
Determining the Maximum Memory Used for A Query
The memory usage of individual queries depends on the size of each query and the number of data blocks that Hyperion Essbase needs to access in order to process each query. To estimate the memory used in a query, calculate the additional memory Hyperion Essbase uses while processing and after processing each query.
Choose several queries that you expect to use the most memory. Consider queries that must process large numbers of members; for example, queries that perform range or rank processing.
For information about the relevant configuration settings, see the online Technical Reference in the DOCS directory.
Turning off the dynamic calculator cache enables measurement of memory still held by a thread by ensuring that, after the query is complete, the memory used for blocks during dynamic calculations is released by the ESSSVR process to the operating system.

Before the query is processed, note the amount of memory the ESSSVR process uses. Use this value for MemBeforeP.

While the query is being processed, note the amount of memory the ESSSVR process uses. Use this value for MemDuringP.
Determining the Maximum Memory Used During and After Query Processing Across All Queries
After you have completed the above calculations for all relevant queries, compare all results to determine the following two values: the maximum additional memory used while a query is being processed (MAXAdditionalMemDuringP) and the maximum additional memory still held after a query is completed (MAXAdditionalMemAfterP).
Determining the Maximum Memory Needed by All Concurrent Queries
The final formula calculates the maximum amount of memory all users will need for query processing. Insert the appropriate values from the steps you have just completed into the formula in the following statement:
The amount of additional memory required for data retrievals will not exceed:
Max#ConcQueries*MAXAdditionalMemDuringP + (Total#Threads - Max#ConcQueries)*MAXAdditionalMemAfterP.
Because this calculation assumes that all of the concurrent queries are maximum-sized queries, the result may exceed your actual requirement. It is difficult to estimate the actual types of queries that will be run concurrently.
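The bound above can be sketched as follows. The function name is mine, and the measurements shown are hypothetical placeholders, not values from the text:

```python
def max_query_memory(total_threads, max_conc_queries, mem_during, mem_after):
    """Upper bound on query memory: every concurrent query is maximum-sized,
    and every idle thread retains the maximum post-query memory."""
    return (max_conc_queries * mem_during
            + (total_threads - max_conc_queries) * mem_after)

# Hypothetical measurements: 20 threads, 10 concurrent queries,
# 12 MB used during a query, 2 MB retained afterward.
MB = 1024 * 1024
bound = max_query_memory(20, 10, 12 * MB, 2 * MB)
print(bound // MB)   # 140 (MB)
```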
To adjust the memory used during queries, you can set values for the retrieval buffer, the retrieval sort buffer, and the dynamic calculator cache. For information, see Setting the Retrieval Buffer Size, Setting the Retrieval Sort Buffer Size, and Sizing Dynamic Calculator Caches. You can also adjust the CALCLOCKBLOCK setting in the configuration file or use SET LOCKBLOCK to select a different setting. For further information, see the online Technical Reference in the DOCS directory.
If you cannot test actual queries, you can use the following formula to calculate a very rough estimate of memory needed for concurrent queries:
Memory for retrievals = Max#ConcQueries * (RetrBufSize + SortBufSize + DCCompSize)
Max#ConcQueries is the maximum number of possible concurrent queries. See Estimating the Maximum Number of Concurrent Queries.
RetrBufSize is the size of the retrieval buffer. See Setting the Retrieval Buffer Size.
SortBufSize is the size of the retrieval sort buffer. Include this value if sorting is involved in any query. See Setting the Retrieval Sort Buffer Size.
DCCompSize is the amount of memory Hyperion Essbase uses to calculate dynamically calculated values. The amount of memory used is limited by the SET LOCKBLOCK command. (See the online Technical Reference in the DOCS directory for information about this command.) Include this value only if any query involves dynamic calculations.
To determine the logical block size, first multiply together all members of each dense dimension (including Dynamic Calc members but excluding Label Only and shared members). This yields the number of cells in a logical block. Then, multiply the number of cells by 8 bytes. For example, the dense dimensions in Sample Basic contain 17 (Year), 14 (Measures), and 4 (Scenario) members. The calculation for the logical block size of Sample Basic is:
(17 * 14 * 4) * 8 bytes = 7616 bytes
If SET LOCKBLOCK specifies 100 blocks, the value DCCompSize equals:
100 blocks * 7616 bytes = 761,600 bytes
Continuing with this example, to estimate the maximum memory needed for concurrent queries, assume the following values: Max#ConcQueries is 20, RetrBufSize is 10,240 bytes, SortBufSize is 10,240 bytes, and DCCompSize is 761,600 bytes.
Memory for retrievals = Max#ConcQueries * (RetrBufSize + SortBufSize + DCCompSize)
= 20 * (10,240 bytes + 10,240 bytes + 761,600 bytes) = 15,641,600 bytes
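The retrieval-memory formula and the example values can be sketched together (function and variable names are mine):

```python
def retrieval_memory(max_conc_queries, retr_buf, sort_buf, dc_comp):
    """Memory for retrievals = Max#ConcQueries *
    (RetrBufSize + SortBufSize + DCCompSize)."""
    return max_conc_queries * (retr_buf + sort_buf + dc_comp)

logical_block_bytes = (17 * 14 * 4) * 8    # 7,616 bytes for Sample Basic
dc_comp = 100 * logical_block_bytes        # SET LOCKBLOCK of 100 blocks
total = retrieval_memory(20, 10_240, 10_240, dc_comp)
print(total)   # 15641600 bytes
```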
For existing calc scripts, you can use the memory monitoring tools provided for the operating system on the server to observe memory usage. Run the most complex calculation and take note of the memory used both before running the calculation and while running the calculation. Calculate the difference and use that figure as the additional memory requirement for the calc script.
To understand calculation performance, see Optimizing Calculations. If you cannot perform a test with a calc script, you can calculate a very rough estimate of the operational requirement of a calculation by summarizing the following values:
Note: The size and complexity of the calc scripts that you run affects the memory requirement. How this affects the amount of memory required is difficult to estimate.
For the total calculation requirement, summarize the amount of memory needed for all calculations that will be run simultaneously.
To estimate the total memory required for a server, you must consider the following factors:
The basic formula for estimating the total start-up memory needed per database is shown in Table 15-5.
| | Component | Size (in MB) |
|---|---|---|
| + | Code and static data | Approximately 10 MB |
| + | (Optional) Java Virtual Machine (the size depends on the operating system) | Between 2 MB and 4 MB |
| + | Size of outline | See Estimating the Size of the Outline |
| + | Index cache size | See Sizing Your Index, Data File, and Data Caches |
| + | Index cache-related overhead | See Estimating Cache Sizes |
| + | Other cache-related overhead used at startup | See Estimating Cache Sizes |
| = | Total database memory used at startup | |
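The startup total is a simple sum of the components above. In this sketch, only the 10 MB code figure and the 2-4 MB JVM range come from the text; the other per-component sizes are hypothetical placeholders to substitute with your own estimates:

```python
MB = 1024 * 1024

# Hypothetical per-database component sizes; replace with your estimates.
startup_components = {
    "code_and_static_data": 10 * MB,
    "java_virtual_machine": 3 * MB,   # optional; between 2 MB and 4 MB
    "outline": 10 * MB,
    "index_cache": 10 * MB,
    "index_cache_overhead": 1 * MB,
    "other_cache_overhead": 1 * MB,
    "startup_data_structures": 1 * MB,
}
startup_total = sum(startup_components.values())
print(startup_total // MB)   # 36 (MB) in this sketch
```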
Additional operational requirements for memory may include memory used during the following operations:
- The maximum operational requirement for queries (see Estimating Memory Requirements for Database Operations)
- The maximum operational requirement for calculations (see Estimating Additional Memory Requirements for Calculations)
This is the total additional operational memory requirement.
This is an estimate of the memory needed for the Hyperion Essbase databases on the server.
If cache memory locking is enabled, the total memory requirement should not exceed two-thirds of available RAM; otherwise, system performance is severely degraded. If cache memory locking is disabled, the total memory requirement should not exceed available RAM.
If there is insufficient memory available, you can redefine your cache settings and recalculate the memory requirements. This can be an iterative process. For guidelines, see Fine-Tuning Your Cache Settings. In some cases, you may need to purchase additional RAM.
Copyright © 1991-2000 Hyperion Solutions Corporation. All rights reserved.