Data Movement Utilities Guide and Reference

Appendix B. Differences Between the Import and the Load Utility

The following table summarizes the important differences between the DB2 load and import utilities.
Import utility: Slow when moving large amounts of data.
Load utility: Faster than the import utility when moving large amounts of data, because it writes formatted pages directly into the database.

Import utility: Limited exploitation of intra-partition parallelism.
Load utility: Exploits intra-partition parallelism; typically, this requires symmetric multiprocessor (SMP) machines.

Import utility: No FASTPARSE support.
Load utility: Supports FASTPARSE, which reduces the checking performed on user-supplied data.

Import utility: No CODEPAGE support.
Load utility: Supports CODEPAGE, converting character data (and numeric data specified in characters) from the specified code page to the database code page during the load operation.

Import utility: Supports hierarchical data.
Load utility: Does not support hierarchical data.

Import utility: Supports creation of tables, hierarchies, and indexes with the PC/IXF format.
Load utility: Tables and indexes must already exist.

Import utility: Does not support importing into summary tables.
Load utility: Supports loading into summary tables.

Import utility: Supports the WSF format.
Load utility: Does not support the WSF format.

Import utility: No BINARYNUMERICS, PACKEDDECIMAL, or ZONEDDECIMAL support.
Load utility: Supports the BINARYNUMERICS, PACKEDDECIMAL, and ZONEDDECIMAL file type modifiers.

Import utility: Cannot override columns defined as GENERATED ALWAYS.
Load utility: Can override GENERATED ALWAYS columns by using the GENERATEDIGNORE and IDENTITYIGNORE file type modifiers.

Import utility: Supports import into tables and views.
Load utility: Supports loading into tables only.

Import utility: The table spaces in which the table and its indexes reside remain online for the duration of the import operation.
Load utility: The table spaces in which the table and its indexes reside are offline for the duration of the load operation.

Import utility: All rows are logged.
Load utility: Minimal logging is performed.

Import utility: Supports triggers.
Load utility: Does not support triggers.

Import utility: If an import operation is interrupted and a COMMITCOUNT was specified, the table is usable and contains the rows that were imported up to the last COMMIT. The user can restart the import operation, or accept the table as is.
Load utility: If a load operation is interrupted and a SAVECOUNT was specified, the table remains in load pending state and cannot be used until the load operation is restarted, a load terminate operation is invoked, or the table space is restored from a backup image created before the attempted load operation.

Import utility: The space required is approximately equivalent to the size of the largest index plus 10%. This space is obtained from the temporary table spaces within the database.
Load utility: The space required is approximately equivalent to the sum of the sizes of all indexes defined on the table, and can be as much as twice this size. This space is obtained from temporary space within the database.

Import utility: All constraints are validated during an import operation.
Load utility: Uniqueness is verified during a load operation, but all other constraints must be checked afterward using the SET INTEGRITY statement.

Import utility: Key values are inserted into the index one at a time during an import operation.
Load utility: Key values are sorted and the index is built after the data has been loaded.

Import utility: If updated statistics are required, the runstats utility must be run after an import operation.
Load utility: Statistics can be gathered during the load operation if all of the data in the table is being replaced.

Import utility: You can import into a host database through DB2 Connect.
Load utility: You cannot load into a host database.

Import utility: Import files must reside on the node from which the import utility is invoked.
Load utility: In a partitioned database environment, load files or pipes must reside on the node that contains the database. In a non-partitioned database environment, they can reside on the node that contains the database, or on the remotely connected client from which the load utility is invoked.

Import utility: A backup image is not required. Because the import utility uses SQL inserts, DB2 logs the activity, and no backups are required to recover these operations in case of failure.
Load utility: A backup image can be created during the load operation.
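
The distinctions above can be illustrated with the DB2 command line processor. The following is a sketch only: the file name staff.del, the table STAFF, and the schema MYSCHEMA are placeholder examples, not taken from this appendix, and the commands assume an existing connection to a database.

```
# Import: row-by-row SQL inserts, fully logged; COMMITCOUNT makes the
# operation restartable from the last COMMIT if it is interrupted.
db2 "IMPORT FROM staff.del OF DEL COMMITCOUNT 1000 MESSAGES msgs.txt INSERT INTO staff"

# Load: writes formatted pages directly into the database; FASTPARSE
# reduces data checking, and SAVECOUNT establishes consistency points.
db2 "LOAD FROM staff.del OF DEL MODIFIED BY FASTPARSE SAVECOUNT 1000 MESSAGES msgs.txt INSERT INTO staff"

# After a load, uniqueness has been verified, but other constraints
# must be checked explicitly:
db2 "SET INTEGRITY FOR staff IMMEDIATE CHECKED"

# After an import, statistics must be refreshed separately:
db2 "RUNSTATS ON TABLE myschema.staff"
```

Note the operational difference: if the IMPORT above is interrupted, the table is usable and holds all committed rows, whereas an interrupted LOAD leaves the table in load pending state until the load is restarted or terminated.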

