The numeric data types are binary integer, floating-point, and decimal. Binary integer includes small integer, large integer, and big integer. Floating-point includes single precision and double precision. Binary numbers are exact representations of integers. Decimal numbers are exact representations of real numbers. Binary and decimal numbers are considered exact numeric types. Floating-point numbers are approximations of real numbers and are considered approximate numeric types.
All numbers have a sign, a precision, and a scale. If a column value is zero, the sign is positive. The precision is the total number of binary or decimal digits, excluding the sign. The scale is the number of binary or decimal digits to the right of the decimal point; if there is no decimal point, the scale is zero. For example, the number -123.45 has a precision of 5 and a scale of 2.
A small integer is a binary number composed of 2 bytes with a precision of 5 digits. The range of small integers is -32768 to +32767.
For small integers, decimal precision and scale are supported by COBOL, RPG, and iSeries system files. For information concerning the precision and scale of binary integers, see the DDS Reference topic.
A large integer is a binary number composed of 4 bytes with a precision of 10 digits. The range of large integers is -2147483648 to +2147483647.
For large integers, decimal precision and scale are supported by COBOL, RPG, and iSeries system files. For information concerning the precision and scale of binary integers, see the DDS Reference topic.
A big integer is a binary number composed of 8 bytes with a precision of 19 digits. The range of big integers is -9223372036854775808 to +9223372036854775807.
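As an illustration, a CREATE TABLE statement declaring one column of each binary integer type might look like the following (the table and column names here are invented for the example, not part of the reference):

   CREATE TABLE INVENTORY
     (QUANTITY    SMALLINT,   -- 2 bytes: -32768 to +32767
      PART_NUMBER INTEGER,    -- 4 bytes: -2147483648 to +2147483647
      SERIAL_ID   BIGINT)     -- 8 bytes: -9223372036854775808 to +9223372036854775807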
A single-precision floating-point number is a 32-bit approximate representation of a real number. The range of magnitude is approximately 1.17549436 × 10⁻³⁸ to 3.40282356 × 10³⁸.
A double-precision floating-point number is an IEEE 64-bit approximate representation of a real number. The range of magnitude is approximately 2.2250738585072014 × 10⁻³⁰⁸ to 1.7976931348623158 × 10³⁰⁸.
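In SQL, REAL names the single-precision type and DOUBLE PRECISION (or FLOAT with no precision specified) names the double-precision type. A sketch of a table using both follows; the table and column names are invented for illustration:

   CREATE TABLE MEASUREMENTS
     (READING     REAL,              -- 32-bit single-precision approximation
      CALIBRATION DOUBLE PRECISION)  -- 64-bit double-precision approximation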
See Table 78 for more information.
A decimal value is a packed decimal or zoned decimal number with an implicit decimal point. The position of the decimal point is determined by the precision and the scale of the number. The scale, which is the number of digits in the fractional part of the number, cannot be negative or greater than the precision. The maximum precision is 63 digits.
All values of a decimal column have the same precision and scale. The range of a decimal variable or the numbers in a decimal column is -n to +n, where the absolute value of n is the largest number that can be represented with the applicable precision and scale.
The maximum range is -10⁶³+1 to 10⁶³-1.
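For example, a column declared DECIMAL(5,2) has a precision of 5 and a scale of 2, so every value in the column lies between -999.99 and +999.99. A minimal sketch (the table and column names are invented):

   CREATE TABLE PRICES
     (UNIT_PRICE DECIMAL(5,2))  -- precision 5, scale 2: -999.99 to +999.99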
Small and large binary integer variables can be used in all host languages. Big integer variables can be used only in C, C++, ILE COBOL, and ILE RPG. Floating-point variables can be used in all host languages except RPG/400® and COBOL/400®. Decimal variables can be used in all supported host languages.
When a decimal or floating-point number is cast to a string (for example, using a CAST specification) the implicit decimal point is replaced by the default decimal separator character in effect when the statement was prepared. When a string is cast to a decimal or floating-point value (for example, using a CAST specification), the default decimal separator character in effect when the statement was prepared is used to interpret the string.
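A brief sketch of both directions of the conversion, assuming the default decimal separator in effect is the period (SYSIBM.SYSDUMMY1 is used here only as a convenient one-row table):

   SELECT CAST(123.45 AS CHAR(10)),       -- number to string: yields '123.45'
          CAST('123.45' AS DECIMAL(5,2))  -- string to number: yields 123.45
     FROM SYSIBM.SYSDUMMY1

If the comma were the decimal separator in effect when the statement was prepared, the first cast would instead produce '123,45', and a string intended for the second cast would be expected to use a comma.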
© Copyright IBM Corporation 1992, 2006. All Rights Reserved.