Estimating the size of database objects is an imprecise undertaking. Overhead from disk fragmentation, free space, and variable-length columns makes size estimation difficult, because column types and row lengths allow for such a wide range of possibilities. After making an initial estimate of your database size, create a test database and populate it with representative data.
From the Control Center, you can access a number of utilities that are designed to assist you in determining the size requirements of various database objects:
In each of these cases, either the "Show SQL" or the "Show Command" button is available to you. You can also save the resulting SQL statements or commands in script files to be used later. All of these utilities have online help to assist you.
Keep these utilities in mind as you work through the planning of your physical database requirements.
When estimating the size of a database, the contribution of the following must be considered:
Space requirements related to the following are not discussed:
System catalog tables are created when a database is created. The system tables grow as database objects and privileges are added to the database. Initially, they use approximately 3.5 MB of disk space.
The amount of space allocated for the catalog tables depends on the type of table space, and the extent size of the table space containing the catalog tables. For example, if a DMS table space with an extent size of 32 is used, the catalog table space will initially be allocated 20 MB of space. For more information, see Designing and Choosing Table Spaces.
Note: For databases with multiple partitions, the catalog tables reside only on the partition from which the CREATE DATABASE command was issued. Disk space for the catalog tables is only required for that partition.
By default, table data is stored on 4 KB pages. Each page (regardless of page size) contains 76 bytes of overhead for the database manager. This leaves 4020 bytes to hold user data (or rows), although no row on a 4 KB page can exceed 4005 bytes in length. A row will not span multiple pages. You can have a maximum of 500 columns when using a 4 KB page size.
Table data pages do not contain the data for columns defined with LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, or DBCLOB data types. The rows in a table data page do, however, contain a descriptor for these columns. (See Long Field Data and Large Object (LOB) Data for information about estimating the space requirements for table objects that do contain these data types.)
Rows are usually inserted into a table in first-fit order. The file is searched (using a free space map) for the first available space that is large enough to hold the new row. When a row is updated, it is updated in place, unless there is insufficient space left on the page to contain it. If this is the case, a record is created in the original row location that points to the new location in the table file of the updated row.
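The first-fit search described above can be sketched as follows. This is a minimal illustration of the idea only, not DB2's actual on-disk structure: the free space map is represented here as a plain list of per-page free-byte counts, and the function name is invented for the example.

```python
def first_fit(free_space_map, row_size):
    """Return the number of the first page with enough free space
    to hold a new row of row_size bytes, or None if no page fits.

    free_space_map: list of free-byte counts, one entry per page
    (an illustrative stand-in for the real free space map).
    """
    for page_no, free_bytes in enumerate(free_space_map):
        if free_bytes >= row_size:
            return page_no
    return None
```

A row of 200 bytes placed against the map [0, 120, 500] would land on page 2, the first page with at least 200 free bytes.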
If the ALTER TABLE APPEND ON statement is invoked, data is always appended, and information about any free space on the data pages is not kept. For more information about this statement, refer to the SQL Reference.
The number of 4 KB pages for each user table in the database can be estimated by calculating:
ROUND DOWN(4020/(average row size + 10)) = records_per_page
and then inserting the result into:
(number_of_records/records_per_page) * 1.1 = number_of_pages
where the average row size is the sum of the average column sizes (for information about the size of each column, refer to the CREATE TABLE statement in the SQL Reference), and the factor of 1.1 allows for overhead.
Note: This formula only provides an estimate. Accuracy of the estimate is reduced if the record length varies because of fragmentation and overflow records.
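The two-step estimate above can be combined into a single helper; this is an illustrative sketch (the function name is invented, and the constants 4020 and 10 are the usable 4 KB page space and per-row overhead from the formula):

```python
import math

def estimate_table_pages(number_of_records, average_row_size):
    """Estimate the number of 4 KB data pages for a user table,
    per the ROUND DOWN / 1.1-overhead formula above."""
    records_per_page = 4020 // (average_row_size + 10)  # ROUND DOWN
    return math.ceil(number_of_records / records_per_page * 1.1)

print(estimate_table_pages(1_000_000, 100))  # → 30556
```

For one million rows averaging 100 bytes, 36 records fit per page, giving roughly 30 556 pages (about 120 MB) after the 10 percent overhead factor.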
You also have the option to create buffer pools or table spaces that have an 8 KB, 16 KB, or 32 KB page size. All tables created within a table space of a particular size have a matching page size. A single table or index object can be as large as 512 GB, assuming a 32 KB page size. You can have a maximum of 1012 columns when using an 8 KB, 16 KB, or 32 KB page size. The maximum number of columns is 500 for a 4 KB page size. Maximum row lengths also vary, depending on page size:
Having a larger page size facilitates a reduction in the number of levels in any index. If you are working with OLTP (online transaction processing) applications, which perform random row reads and writes, a smaller page size is better, because it wastes less buffer space with undesired rows. If you are working with DSS (decision support system) applications, which access large numbers of consecutive rows at a time, a larger page size is better, because it reduces the number of I/O requests required to read a specific number of rows. An exception occurs when the row size is smaller than the page size divided by 255. In such a case, there is wasted space on each page. (Recall that there can be a maximum of only 255 rows per page.) To reduce this wasted space, a smaller page size may be more appropriate.
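The wasted-space exception can be quantified with a small worked example. The 50-byte row size below is hypothetical, chosen only to show the effect of the 255-row-per-page limit on a 32 KB page:

```python
PAGE_SIZE = 32 * 1024       # 32 KB page
MAX_ROWS_PER_PAGE = 255     # maximum rows on any page
row_size = 50               # hypothetical short row

used_bytes = MAX_ROWS_PER_PAGE * row_size       # 12 750 bytes of row data
wasted_fraction = 1 - used_bytes / PAGE_SIZE    # roughly 61% of each page unused
print(used_bytes, round(wasted_fraction, 2))
```

Because 32 768 / 255 is about 128 bytes, any row shorter than that leaves part of every 32 KB page permanently unused, which is why a smaller page size suits very short rows.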
You cannot restore a backup to a different page size.
You cannot import IXF data files that represent more than 755 columns. For more information about importing data into tables, and IXF data files, refer to the Data Movement Utilities Guide and Reference.
Declared temporary tables can be created only in a user temporary table space; there is no default user temporary table space. Declared temporary tables cannot contain LONG data. Because these tables are dropped implicitly when an application disconnects from the database, estimates of their space requirements should take this behavior into account.
Long field data is stored in a separate table object that is structured differently from other data types (see User Table Data and Large Object (LOB) Data).
Data is stored in 32 KB areas that are broken up into segments whose sizes are "powers of two" times 512 bytes. (Hence these segments can be 512 bytes, 1024 bytes, 2048 bytes, and so on, up to 32 768 bytes.)
Long field data types (LONG VARCHAR or LONG VARGRAPHIC) are stored in a way that enables free space to be reclaimed easily. Allocation and free space information is stored in 4 KB allocation pages, which appear infrequently throughout the object.
The amount of unused space in the object depends on the size of the long field data, and whether this size is relatively constant across all occurrences of the data. For data entries larger than 255 bytes, this unused space can be up to 50 percent of the size of the long field data.
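The power-of-two segment sizing explains the up-to-50-percent figure: a value just over a segment boundary must occupy the next segment size up. A sketch of that rounding, with an invented function name and under the assumption that values longer than the 32 KB maximum simply span multiple maximum-size segments:

```python
def long_field_segment_bytes(data_length):
    """Smallest power-of-two-times-512 segment (512 bytes to 32 KB)
    that holds data_length bytes. Illustrative only."""
    segment = 512
    while segment < data_length and segment < 32 * 1024:
        segment *= 2
    return segment

print(long_field_segment_bytes(513))  # → 1024: nearly half the segment unused
```

A 513-byte value needs a 1024-byte segment, so just under 50 percent of the allocated space is unused.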
If character data is less than the page size, and it fits into the record along with the rest of the data, the CHAR, GRAPHIC, VARCHAR, or VARGRAPHIC data types should be used instead of LONG VARCHAR or LONG VARGRAPHIC.
Large object (LOB) data is stored in two separate table objects that are structured differently from other data types (see User Table Data and Long Field Data).
To estimate the space required by LOB data, you need to consider the two table objects used to store data defined with these data types:
Data is stored in 64 MB areas that are broken up into segments whose sizes are "powers of two" times 1024 bytes. (Hence these segments can be 1024 bytes, 2048 bytes, 4096 bytes, and so on, up to 64 MB.)
To reduce the amount of disk space used by LOB data, you can specify the COMPACT option on the lob-options clause of the CREATE TABLE and the ALTER TABLE statements. The COMPACT option minimizes the amount of disk space required by allowing the LOB data to be split into smaller segments. This process does not involve data compression, but simply uses the minimum amount of space, to the nearest 1 KB boundary. Using the COMPACT option may result in reduced performance when appending to LOB values.
The amount of free space contained in LOB data objects is influenced by the amount of update and delete activity, as well as the size of the LOB values being inserted.
Allocation and free space information is stored in 4 KB allocation pages that are separated from the actual data. The number of these 4 KB pages is dependent on the amount of data, including unused space, allocated for the large object data. The overhead is calculated as follows: one 4 KB page for every 64 GB, plus one 4 KB page for every 8 MB.
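The allocation-page overhead rule above (one 4 KB page per 64 GB plus one per 8 MB) can be sketched as follows; the function name is invented, and both divisions are rounded up on the assumption that a partially used interval still needs its own allocation page:

```python
import math

GB = 2**30
MB = 2**20

def lob_allocation_pages(lob_storage_bytes):
    """Estimated 4 KB allocation pages for a LOB data object:
    one per 64 GB plus one per 8 MB of allocated LOB storage."""
    return (math.ceil(lob_storage_bytes / (64 * GB))
            + math.ceil(lob_storage_bytes / (8 * MB)))

print(lob_allocation_pages(1 * GB))  # → 129 pages (about 516 KB of overhead)
```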
If character data is less than the page size, and it fits into the record along with the rest of the data, the CHAR, GRAPHIC, VARCHAR, or VARGRAPHIC data types should be used instead of BLOB, CLOB, or DBCLOB.
For each index, the space needed can be estimated as:
(average index key size + 8) * number of rows * 2
where:
Note: For every column that allows NULLs, add one extra byte for the null indicator.
Temporary space is required when creating the index. The maximum amount of temporary space required during index creation can be estimated as:
(average index key size + 8) * number of rows * 3.2
where the factor of "3.2" is for index overhead, and space required for sorting during index creation.
Note: In the case of non-unique indexes, only four bytes are required to store duplicate key entries. The estimates shown above assume no duplicates. The space required to store an index may be over-estimated by the formula shown above.
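The two index formulas above translate directly into helpers; the names are invented for illustration, and both inherit the formulas' assumption of no duplicate keys:

```python
def index_space_bytes(average_index_key_size, number_of_rows):
    """Estimated permanent space for an index, per the formula above."""
    return (average_index_key_size + 8) * number_of_rows * 2

def index_build_temp_bytes(average_index_key_size, number_of_rows):
    """Peak temporary space during index creation; the 3.2 factor
    covers index overhead plus sort space."""
    return (average_index_key_size + 8) * number_of_rows * 3.2

print(index_space_bytes(12, 1000))       # → 40000 bytes
print(index_build_temp_bytes(12, 1000))  # roughly 64000 bytes
```

Remember to add one byte per nullable column to the average key size before applying these estimates, per the note above.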
The following two formulas can be used to estimate the number of leaf pages (the second provides a more accurate estimate). The accuracy of these estimates depends largely on how well the averages reflect the actual data.
Note: For SMS, the minimum required space is 12 KB. For DMS, the minimum is an extent.
   (.9 * (U - (M*2))) * (D + 1)
   ----------------------------
        K + 6 + (4 * D)

where:
Remember that minimumKeySize and averageKeySize must each have an extra byte for each nullable key part, and an extra byte for the length of each variable-length key part.
If there are include columns, they should be accounted for in minimumKeySize and averageKeySize.
The .9 can be replaced by any (100 - pctfree)/100 value, if a percent free value other than the default value of ten percent was specified during index creation.
L = number of leaf pages = X / (average number of keys on leaf page)

where X is the total number of rows in the table.
You can estimate the original size of an index as:
(L + 2L/(average number of keys on leaf page)) * pagesize
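This first, simpler estimate can be sketched as a single function (the name is invented for the example):

```python
def original_index_size_bytes(total_rows, avg_keys_per_leaf, pagesize):
    """First (simpler) estimate of original index size: L leaf pages,
    plus roughly 2L / (average keys per leaf page) non-leaf pages,
    multiplied by the page size."""
    L = total_rows / avg_keys_per_leaf          # estimated leaf pages
    return (L + 2 * L / avg_keys_per_leaf) * pagesize

print(original_index_size_bytes(100_000, 100, 4096))  # → 4177920.0
```

For 100 000 rows at 100 keys per leaf page on 4 KB pages, that is 1000 leaf pages plus about 20 non-leaf pages, or roughly 4 MB.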
For DMS table spaces, add together the sizes of all indexes on a table, and round up to a multiple of the extent size for the table space on which the index resides.
You should provide additional space for index growth due to INSERT/UPDATE activity, which may result in page splits.
Use the following calculations to obtain a more accurate estimate of the original index size, as well as an estimate of the number of levels in the index. (This may be of particular interest if include columns are being used in the index definition.) The average number of keys per non-leaf page is roughly:
   (.9 * (U - (M*2))) * (D + 1)
   ----------------------------
       K + 12 + (8 * D)
where:
The minimumKeySize and the averageKeySize for non-leaf pages will be the same as for leaf pages, except when there are include columns. Include columns are not stored on non-leaf pages.
You should not replace .9 with (100 - pctfree)/100, unless this value is greater than .9, because a maximum of 10 percent free space will be left on non-leaf pages during index creation.
The number of non-leaf pages can be estimated as follows:
if L > 1 then {P++; Z++}

While (Y > 1)
{
   P = P + Y
   Y = Y / N
   Z++
}
where:
Total number of pages is:
T = (L + P + 2) * 1.0002
The additional 0.02 percent is for overhead, including space map pages.
The amount of space required to create the index is estimated as:
T * pagesize
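The non-leaf page loop and the total-size calculation above can be combined into one sketch. The variable definitions from the "where" list are not reproduced in this excerpt, so this example assumes P and Z start at zero and Y is initialized to L / N, where N is the average number of keys on a non-leaf page; the function name is invented:

```python
def estimate_index_pages(L, N):
    """Sketch of the non-leaf page loop above.

    L: number of leaf pages; N: average keys per non-leaf page.
    Returns (P, Z, T): non-leaf pages, levels counted by the loop,
    and the total page estimate T = (L + P + 2) * 1.0002.
    """
    P, Z = 0, 0
    Y = L / N                 # keys feeding the level above the leaves
    if L > 1:
        P += 1                # per the pseudocode: {P++; Z++}
        Z += 1
    while Y > 1:
        P = P + Y             # pages added at this level
        Y = Y / N             # keys feeding the next level up
        Z += 1
    T = (L + P + 2) * 1.0002  # 0.02 percent overhead, incl. space map pages
    return P, Z, T
```

With 1000 leaf pages and 100 keys per non-leaf page, the loop yields 11 non-leaf pages over 2 counted levels, for a total of about 1013 pages before multiplying by the page size.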