Administrative API Reference

sqluimpr - Import

Inserts data from an external file with a supported file format into a table, hierarchy, or view. A faster alternative is sqluload - Load; however, the load utility does not support loading data at the hierarchy level.

Authorization

Required Connection

Database. If implicit connect is enabled, a connection to the default database is established.

API Include File

sqlutil.h

C API Syntax



/* File: sqlutil.h */
/* API: Import */
/* ... */
SQL_API_RC SQL_API_FN
  sqluimpr (
    char * pDataFileName,
    sqlu_media_list * pLobPathList,
    struct sqldcol * pDataDescriptor,
    struct sqlchar * pActionString,
    char * pFileType,
    struct sqlchar * pFileTypeMod,
    char * pMsgFileName,
    short CallerAction,
    struct sqluimpt_in * pImportInfoIn,
    struct sqluimpt_out * pImportInfoOut,
    sqlint32 * pNullIndicators,
    void * pReserved,
    struct sqlca * pSqlca);
/* ... */
 

Generic API Syntax



/* File: sqlutil.h */
/* API: Import */
/* ... */
SQL_API_RC SQL_API_FN
  sqlgimpr (
    unsigned short DataFileNameLen,
    unsigned short FileTypeLen,
    unsigned short MsgFileNameLen,
    char * pDataFileName,
    sqlu_media_list * pLobPathList,
    struct sqldcol * pDataDescriptor,
    struct sqlchar * pActionString,
    char * pFileType,
    struct sqlchar * pFileTypeMod,
    char * pMsgFileName,
    short CallerAction,
    struct sqluimpt_in * pImportInfoIn,
    struct sqluimpt_out * pImportInfoOut,
    sqlint32 * NullIndicators,
    void * pReserved,
    struct sqlca * pSqlca);
/* ... */
 

API Parameters

DataFileNameLen
Input. A 2-byte unsigned integer representing the length in bytes of the input file name.

FileTypeLen
Input. A 2-byte unsigned integer representing the length in bytes of the input file type.

MsgFileNameLen
Input. A 2-byte unsigned integer representing the length in bytes of the message file name.

pDataFileName
Input. A string containing the path and the name of the external input file from which the data is to be imported.

pLobPathList
Input. An sqlu_media_list using media_type SQLU_LOCAL_MEDIA, and the sqlu_media_entry structure listing paths on the client where the LOB files can be found.

For more information, see SQLU-MEDIA-LIST.

pDataDescriptor
Input. Pointer to an sqldcol structure containing information about the columns being selected for import from the external file. The value of the dcolmeth field determines how the remainder of the information provided in this parameter is interpreted by the import utility. Valid values for this parameter (defined in sqlutil) are:

SQL_METH_N
Names. Selection of columns from the external input file is by column name.

SQL_METH_P
Positions. Selection of columns from the external input file is by column position.

SQL_METH_L
Locations. Selection of columns from the external input file is by column location. The database manager rejects an import call with a location pair that is invalid because of any one of the following conditions:
  • Either the beginning or the ending location is not in the range from 1 to the largest signed 2-byte integer.
  • The ending location is smaller than the beginning location.
  • The input column width defined by the location pair is not compatible with the type and the length of the target column.

A location pair with both locations equal to zero indicates that a nullable column is to be filled with NULLs.

SQL_METH_D
Default. If pDataDescriptor is NULL, or is set to SQL_METH_D, default selection of columns from the external input file is done. In this case, the number of columns and the column specification array are both ignored. The first n columns of data in the external input file are taken in their natural order, where n is the number of database columns into which the data is to be imported.

For more information, see SQLDCOL.

pActionString
Input. Pointer to an sqlchar structure containing a 2-byte long field, followed by an array of characters identifying the columns into which data is to be imported.

The character array is of the form:

   {INSERT | INSERT_UPDATE | REPLACE | CREATE | REPLACE_CREATE}
   INTO {tname[(tcolumn-list)] |
   [{ALL TABLES | (tname[(tcolumn-list)][, tname[(tcolumn-list)]])}]
   [IN] HIERARCHY {STARTING tname | (tname[, tname])}
   [UNDER sub-table-name | AS ROOT TABLE]}
   [DATALINK SPECIFICATION datalink-spec]

INSERT
Adds the imported data to the table without changing the existing table data.

INSERT_UPDATE
Adds the imported rows if their primary key values are not in the table, and uses them for update if their primary key values are found. This option is only valid if the target table has a primary key, and the specified (or implied) list of target columns being imported includes all columns for the primary key. This option cannot be applied to views.

REPLACE
Deletes all existing data from the table by truncating the table object, and inserts the imported data. The table definition and the index definitions are not changed. (Indexes are deleted and replaced if indexixf is specified in pFileTypeMod, and pFileType is SQL_IXF.) If the table is not already defined, an error is returned.

Attention: If an error occurs after the existing data is deleted, that data is lost.

CREATE
Creates the table definition and the row contents using the information in the specified PC/IXF file, if the specified table is not defined. If the file was previously exported by DB2, indexes are also created. If the specified table is already defined, an error is returned. This option is valid for the PC/IXF file format only.

REPLACE_CREATE
Replaces the table contents using the PC/IXF row information in the PC/IXF file, if the specified table is defined. If the table is not already defined, the table definition and row contents are created using the information in the specified PC/IXF file. If the PC/IXF file was previously exported by DB2, indexes are also created. This option is valid for the PC/IXF file format only.

Attention: If an error occurs after the existing data is deleted, that data is lost.

tname
The name of the table, typed table, view, or object view into which the data is to be inserted. An alias can be specified for REPLACE, INSERT_UPDATE, or INSERT, except in the case of a down-level server, where a qualified or unqualified name should be specified instead. If it is a view, it cannot be a read-only view.

tcolumn-list
A list of table or view column names into which the data is to be inserted. The column names must be separated by commas. If column names are not specified, column names as defined in the CREATE TABLE or the ALTER TABLE statement are used. If no column list is specified for typed tables, data is inserted into all columns within each sub-table.

sub-table-name
Specifies a parent table when creating one or more sub-tables under the CREATE option.

ALL TABLES
An implicit keyword for hierarchy only. When importing a hierarchy, the default is to import all tables specified in the traversal-order-list.

HIERARCHY
Specifies that hierarchical data is to be imported.

STARTING
Keyword for hierarchy only. Specifies that the default order, starting from a given sub-table name, is to be used.

UNDER
Keyword for hierarchy and CREATE only. Specifies that the new hierarchy, sub-hierarchy, or sub-table is to be created under a given sub-table.

AS ROOT TABLE
Keyword for hierarchy and CREATE only. Specifies that the new hierarchy, sub-hierarchy, or sub-table is to be created as a stand-alone hierarchy.

DATALINK SPECIFICATION datalink-spec
Specifies parameters pertaining to DB2 Data Links. These parameters can be specified using the same syntax as in the IMPORT command (see the Command Reference).

The tname and the tcolumn-list parameters correspond to the tablename and the colname lists of SQL INSERT statements, and have the same restrictions.

The columns in tcolumn-list and the external columns (either specified or implied) are matched according to their position in the list or the structure (data from the first column specified in the sqldcol structure is inserted into the table or view field corresponding to the first element of the tcolumn-list).

If unequal numbers of columns are specified, the number of columns actually processed is the lesser of the two numbers. This could result in an error (because there are no values to place in some non-nullable table fields) or an informational message (because some external file columns are ignored).

For more information, see SQLCHAR.

pFileType
Input. A string that indicates the format of the data within the external file. Supported external file formats (defined in sqlutil) are:

SQL_ASC
Non-delimited ASCII.

SQL_DEL
Delimited ASCII, for exchange with dBase, BASIC, and the IBM Personal Decision Series programs, and many other database managers and file managers.

SQL_IXF
PC version of the Integrated Exchange Format, the preferred method for exporting data from a table so that it can be imported later into the same table or into another database manager table.

SQL_WSF
Worksheet formats for exchange with Lotus Symphony and 1-2-3 programs.

For more information about file formats, see the "Export/Import/Load Utility File Formats" appendix in the Data Movement Utilities Guide and Reference.

pFileTypeMod
Input. A pointer to a structure containing a 2-byte long field, followed by an array of characters that specify one or more processing options. If this pointer is NULL, or the structure pointed to has zero characters, this action is interpreted as selection of a default specification.

Not all options can be used with all of the supported file types.

For more information, see SQLCHAR, and the Command Reference.

pMsgFileName
Input. A string containing the destination for error, warning, and informational messages returned by the utility. It can be the path and the name of an operating system file or a standard device. If the file already exists, it is appended to. If it does not exist, a file is created.

CallerAction
Input. An action requested by the caller. Valid values (defined in sqlutil) are:

SQLU_INITIAL
Initial call. This value must be used on the first call to the API.

If the initial call or any subsequent call returns and requires the calling application to perform some action prior to completing the requested import operation, the caller action must be set to one of the following:

SQLU_CONTINUE
Continue processing. This value can only be used on subsequent calls to the API, after the initial call has returned with the utility requesting user input (for example, to respond to an end of tape condition). It specifies that the user action requested by the utility has completed, and the utility can continue processing the initial request.

SQLU_TERMINATE
Terminate processing. This value can only be used on subsequent calls to the API, after the initial call has returned with the utility requesting user input (for example, to respond to an end of tape condition). It specifies that the user action requested by the utility was not performed, and the utility is to terminate processing the initial request.

pImportInfoIn
Input. Optional pointer to the sqluimpt_in structure containing additional input parameters. For information about this structure, see SQLUIMPT-IN.

pImportInfoOut
Output. Optional pointer to the sqluimpt_out structure containing additional output parameters. For information about this structure, see SQLUIMPT-OUT.

NullIndicators
Input. For ASC files only. An array of integers that indicate whether or not the column data is nullable. The number of elements in this array must match the number of columns in the input file; there is a one-to-one ordered correspondence between the elements of this array and the columns being imported from the data file. Therefore, the number of elements must equal the dcolnum field of the pDataDescriptor parameter. Each element of the array contains a number identifying a column in the data file that is to be used as a null indicator field, or a zero indicating that the table column is not nullable. If the element is not zero, the identified column in the data file must contain a Y or an N. A Y indicates that the table column data is NULL, and N indicates that the table column data is not NULL.

pReserved
Reserved for future use.

pSqlca
Output. A pointer to the sqlca structure. For more information about this structure, see SQLCA.

REXX API Syntax



IMPORT FROM datafile OF filetype
[MODIFIED BY :filetmod]
[METHOD {L|N|P} USING :dcoldata]
[COMMITCOUNT :commitcnt] [RESTARTCOUNT :restartcnt]
MESSAGES msgfile
{INSERT|REPLACE|CREATE|INSERT_UPDATE|REPLACE_CREATE}
INTO tname [(:columns)]
[OUTPUT INTO :output]
 
CONTINUE IMPORT
 
STOP IMPORT

REXX API Parameters

datafile
Name of the file from which the data is to be imported.

filetype
The format of the data in the external import file. The supported file formats are:

DEL
Delimited ASCII

ASC
Non-delimited ASCII

WSF
Worksheet format

IXF
PC version of Integrated Exchange Format.

filetmod
A host variable containing additional processing options (see the Command Reference).

L|N|P
A character specifying the method to be used to select columns within the external input file. Valid values are:

L
Location

N
Name

P
Position.

dcoldata
A compound REXX host variable containing information about the columns selected for import from the external input file. The content of the structure depends upon the specified method.

tname
Name of the target table or view. Data cannot be imported to a read-only view.

columns
A REXX host variable containing the names of the columns in the table or the view into which the data is to be inserted. In the following, XXX represents the name of the host variable:

XXX.0
Number of columns.

XXX.1
First column name.

XXX.2
Second column name.

XXX.3
and so on.

msgfile
File, path, or device name where error and warning messages are to be sent.

commitcnt
Performs a COMMIT after every commitcnt records are imported.

restartcnt
Specifies that an import operation is to be started at record restartcnt + 1. The first restartcnt records are skipped.

output
A compound REXX host variable into which information from the import operation is passed. In the following, XXX represents the name of the host variable:

XXX.1
Number of records read from the external input file during the import operation.

XXX.2
Number of records skipped before inserting or updating begins.

XXX.3
Number of rows inserted into the target table.

XXX.4
Number of rows in the target table updated with information from the imported records.

XXX.5
Number of records that could not be imported.

XXX.6
Number of records imported successfully and committed to the database, including rows inserted, updated, skipped, and rejected.

Sample Programs

C
\sqllib\samples\c\impexp.sqc

COBOL
\sqllib\samples\cobol\impexp.sqb

REXX
\sqllib\samples\rexx\impexp.cmd

Usage Notes

Be sure to complete all table operations and release all locks before starting an import operation. This can be done by issuing a COMMIT after closing all cursors opened WITH HOLD, or by issuing a ROLLBACK.

The import utility adds rows to the target table using the SQL INSERT statement. The utility issues one INSERT statement for each row of data in the input file. If an INSERT statement fails, the utility either writes a warning to the message file and continues processing (if it determines that subsequent INSERT statements can succeed), or writes an error to the message file and stops processing (if it determines that they cannot).

The utility performs an automatic COMMIT after the old rows are deleted during a REPLACE or a REPLACE_CREATE operation. Therefore, if the system fails, or the application interrupts the database manager after the table object is truncated, all of the old data is lost. Ensure that the old data is no longer needed before using these options.

If the log becomes full during a CREATE, REPLACE, or REPLACE_CREATE operation, the utility performs an automatic COMMIT on inserted records. If the system fails, or the application interrupts the database manager after an automatic COMMIT, a table with partial data remains in the database. Use the REPLACE or the REPLACE_CREATE option to rerun the whole import operation, or use INSERT with the restartcnt parameter set to the number of rows successfully imported.

By default, automatic COMMITs are not performed for the INSERT or the INSERT_UPDATE option. They are, however, performed if the commitcnt parameter is not zero. A full log results in a ROLLBACK.

Whenever the import utility performs a COMMIT, two messages are written to the message file: one indicates the number of records to be committed, and the other is written after a successful COMMIT. When restarting the import operation after a failure, specify the number of records to skip, as determined from the last successful COMMIT.

The import utility accepts input data with minor incompatibility problems (for example, character data can be imported using padding or truncation, and numeric data can be imported with a different numeric data type), but data with major incompatibility problems is not accepted.

One cannot REPLACE or REPLACE_CREATE an object table if it has any dependents other than itself, or an object view if its base table has any dependents (including itself). To replace such a table or a view, do the following:

  1. Drop all foreign keys in which the table is a parent.
  2. Run the import utility.
  3. Alter the table to recreate the foreign keys.

If an error occurs while recreating the foreign keys, modify the data to maintain referential integrity.

Referential constraints and foreign key definitions are not preserved when creating tables from PC/IXF files. (Primary key definitions are preserved if the data was previously exported using SELECT *.)

Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and potential growth in the size of the database.

If an import operation is run against a remote database, and the output message file is very long (more than 60KB), the message file returned to the user on the client may be missing messages from the middle of the import operation. The first 30KB of message information and the last 30KB of message information are always retained.

Importing PC/IXF files to a remote database is much faster if the PC/IXF file is on a hard drive rather than on diskettes. Non-default values for pDataDescriptor, or specifying an explicit list of table columns in pActionString, make importing to a remote database slower.

The database table or hierarchy must exist before data in the ASC, DEL, or WSF file formats can be imported; however, if the table does not already exist, IMPORT CREATE or IMPORT REPLACE_CREATE creates the table when it imports data from a PC/IXF file. For typed tables, IMPORT CREATE can create the type hierarchy and the table hierarchy as well.

PC/IXF import should be used to move data (including hierarchical data) between databases. If character data containing row separators is exported to a delimited ASCII (DEL) file and processed by a text transfer program (moving, for example, between OS/2 and AIX systems), fields containing the row separators will shrink or expand. PC/IXF file format specifications permit migration of data between OS/2 (IBM Extended Services for OS/2, OS/2 Extended Edition, and DB2 for OS/2) databases and DB2 for AIX databases via export, binary copying of files between OS/2 and AIX, and import. The file copying step is not necessary if the source and the target databases are both accessible from the same client.

The data in ASC and DEL files is assumed to be in the code page of the client application performing the import. PC/IXF files, which allow for different code pages, are recommended when importing data in different code pages. If the PC/IXF file and the import utility are in the same code page, processing occurs as for a regular application. If the two differ, and the FORCEIN option is specified, the import utility assumes that data in the PC/IXF file has the same code page as the application performing the import. This occurs even if there is a conversion table for the two code pages. If the two differ, the FORCEIN option is not specified, and there is a conversion table, all data in the PC/IXF file will be converted from the file code page to the application code page. If the two differ, the FORCEIN option is not specified, and there is no conversion table, the import operation will fail. This applies only to PC/IXF files on DB2 for AIX clients.

For table objects on an 8KB page that are close to the limit of 1012 columns, import of PC/IXF data files may cause DB2 to return an error, because the maximum size of an SQL statement was exceeded. This situation can occur only if the columns are of type CHAR, VARCHAR, or CLOB. The restriction does not apply to import of DEL or ASC files.

DB2 Connect can be used to import data to DRDA servers such as DB2 for OS/390, DB2 for VM and VSE, and DB2 for OS/400. Only PC/IXF import (INSERT option) is supported. The restartcnt parameter, but not the commitcnt parameter, is also supported.

When using the CREATE option with typed tables, create every sub-table defined in the PC/IXF file; sub-table definitions cannot be altered. When using options other than CREATE with typed tables, the traversal order list enables one to specify the traverse order; therefore, the traversal order list must match the one used during the export operation. For the PC/IXF file format, one need only specify the target sub-table name, and use the traverse order stored in the file.

The import utility can be used to recover a table previously exported to a PC/IXF file. The table returns to the state it was in when exported.

Data cannot be imported to a system table, a declared temporary table, or a summary table.

Views cannot be created through the import utility.

Importing a multiple-part PC/IXF file whose individual parts are copied from an OS/2 system to an AIX system is supported on DB2.


DB2 Data Links Manager Considerations

Before running the DB2 import utility, do the following:

  1. Copy the files that will be referenced to the appropriate Data Links servers. The dlfm_import utility can be used to extract files from an archive that is generated by the dlfm_export utility.
  2. Register the required prefix names to the DB2 Data Links Managers. There may be other administrative tasks, such as registering the database, if required.
  3. Update the Data Links server information in the URLs (of the DATALINK columns) from the exported data for the SQL table, if required. (If the original configuration's Data Links servers are the same at the target location, the Data Links server names need not be updated.)
  4. Define the Data Links servers at the target configuration in the DB2 Data Links Manager configuration file.

When the import utility is executed on the target system, data related to DATALINK columns is loaded into the underlying DB2 tables using SQL INSERT (as is the case for other columns).

During the insert operation, DATALINK column processing links the files in the appropriate Data Links servers according to the column specifications at the target database.

Representation of DATALINK Information in an Input File

For a description of how DATALINK information is represented in an input file, see ***.

Table 9. Valid File Type Modifiers (Import)
Modifier Description
All File Formats
compound=x x is a number between 1 and 100 inclusive. Uses nonatomic compound SQL to insert the data, and x statements will be attempted each time.

If this modifier is specified, and the transaction log is not sufficiently large, the import operation will fail. The transaction log must be large enough to accommodate either the number of rows specified by COMMITCOUNT, or the number of rows in the data file if COMMITCOUNT is not specified. It is therefore recommended that the COMMITCOUNT option be specified to avoid transaction log overflow.

This modifier is incompatible with INSERT_UPDATE mode, hierarchical tables, and the following modifiers: usedefaults, identitymissing, identityignore, generatedmissing, and generatedignore.

generatedignore This modifier informs the import utility that data for all generated columns is present in the data file but should be ignored. This results in all values for the generated columns being generated by the utility. This modifier cannot be used with the generatedmissing modifier.
generatedmissing If this modifier is specified, the utility assumes that the input data file contains no data for the generated columns (not even NULLs), and will therefore generate a value for each row. This modifier cannot be used with the generatedignore modifier.
identityignore This modifier informs the import utility that data for the identity column is present in the data file but should be ignored. This results in all identity values being generated by the utility. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This means that for GENERATED ALWAYS columns, no rows will be rejected. This modifier cannot be used with the identitymissing modifier.
identitymissing If this modifier is specified, the utility assumes that the input data file contains no data for the identity column (not even NULLs), and will therefore generate a value for each row. The behavior will be the same for both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns. This modifier cannot be used with the identityignore modifier.
lobsinfile lob-path specifies the path to the files containing LOB values.
no_type_id Valid only when importing into a single sub-table. Typical usage is to export data from a regular table, and then to invoke an import operation (using this modifier) to convert the data into a single sub-table.
nodefaults If a source column for a target table column is not explicitly specified, and the table column is not nullable, default values are not loaded. Without this option, if a source column for one of the target table columns is not explicitly specified, one of the following occurs:
  • If a default value can be specified for a column, the default value is loaded
  • If the column is nullable, and a default value cannot be specified for that column, a NULL is loaded
  • If the column is not nullable, and a default value cannot be specified, an error is returned, and the utility stops processing.
usedefaults If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded. Examples of missing data are:
  • For DEL files: ",," is specified for the column
  • For ASC files: The NULL indicator is set to yes for the column
  • For DEL/ASC/WSF files: A row that does not have enough columns, or is not long enough for the original specification.
Without this option, if a source column contains no data for a row instance, one of the following occurs:
  • If the column is nullable, a NULL is loaded
  • If the column is not nullable, the utility rejects the row.
ASCII File Formats (ASC/DEL)
dateformat="x" x is the format of the date in the source file. Valid date elements are:
  YYYY - Year (four digits ranging from 0000 - 9999)
  M    - Month (one or two digits ranging from 1 - 12)
  MM   - Month (two digits ranging from 1 - 12;
            mutually exclusive with M)
  D    - Day (one or two digits ranging from 1 - 31)
  DD   - Day (two digits ranging from 1 - 31;
            mutually exclusive with D)
  DDD  - Day of the year (three digits ranging
            from 001 - 366; mutually exclusive
            with other day or month elements)

A default value of 1 is assigned for each element that is not specified. Some examples of date formats are:

   "D-M-YYYY"
   "MM.DD.YYYY"
   "YYYYDDD"
implieddecimal The location of an implied decimal point is determined by the column definition; it is no longer assumed to be at the end of the value. For example, the value 12345 is loaded into a DECIMAL(8,2) column as 123.45, not 12345.00.
noeofchar The optional end-of-file character x'1A' is not recognized as the end of file. Processing continues as if it were a normal character.
timeformat="x" x is the format of the time in the source file. Valid time elements are:
  H     - Hour (one or two digits ranging from 0 - 12
             for a 12 hour system, and 0 - 24
             for a 24 hour system)
  HH    - Hour (two digits ranging from 0 - 12
             for a 12 hour system, and 0 - 24
             for a 24 hour system; mutually exclusive
             with H)
  M     - Minute (one or two digits ranging
             from 0 - 59)
  MM    - Minute (two digits ranging from 0 - 59;
             mutually exclusive with M)
  S     - Second (one or two digits ranging
             from 0 - 59)
  SS    - Second (two digits ranging from 0 - 59;
             mutually exclusive with S)
  SSSSS - Second of the day after midnight (5 digits
             ranging from 00000 - 86399; mutually
             exclusive with other time elements)
  TT    - Meridian indicator (AM or PM)

A default value of 0 is assigned for each element that is not specified. Some examples of time formats are:

   "HH:MM:SS"
   "HH.MM TT"
   "SSSSS"
timestampformat="x" x is the format of the time stamp in the source file. Valid time stamp elements are:
  YYYY   - Year (four digits ranging from 0000 - 9999)
  M      - Month (one or two digits ranging
              from 1 - 12)
  MM     - Month (two digits ranging from 1 - 12;
              mutually exclusive with M, month)
  D      - Day (one or two digits ranging from 1 - 31)
  DD     - Day (two digits ranging from 1 - 31;
              mutually exclusive with D)
  DDD    - Day of the year (three digits ranging
              from 001 - 366; mutually exclusive with
              other day or month elements)
  H      - Hour (one or two digits ranging from 0 - 12
              for a 12 hour system, and 0 - 24
              for a 24 hour system)
  HH     - Hour (two digits ranging from 0 - 12
              for a 12 hour system, and 0 - 24
              for a 24 hour system; mutually exclusive
              with H)
  M      - Minute (one or two digits ranging
              from 0 - 59)
  MM     - Minute (two digits ranging from 0 - 59;
              mutually exclusive with M, minute)
  S      - Second (one or two digits ranging
              from 0 - 59)
  SS     - Second (two digits ranging from 0 - 59;
              mutually exclusive with S)
  SSSSS  - Second of the day after midnight (5 digits
              ranging from 00000 - 86399; mutually
              exclusive with other time elements)
  UUUUUU - Microsecond (6 digits ranging
              from 000000 - 999999)
  TT     - Meridian indicator (AM or PM)

A default value of 1 is assigned for unspecified YYYY, M, MM, D, DD, or DDD elements. A default value of 0 is assigned for all other unspecified elements. Following is an example of a time stamp format:

   "YYYY/MM/DD HH:MM:SS.UUUUUU"

The following example illustrates how to import data containing user defined date and time formats into a table called schedule:

   db2 import from delfile2 of del
      modified by timestampformat="yyyy.mm.dd hh:mm tt"
      insert into schedule
ASC (Non-delimited ASCII) File Format
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.
nullindchar=x x is a single character. Changes the character denoting a null value to x. The default value of x is Y.

This modifier is case sensitive for EBCDIC data files, except when the character is an English letter. For example, if the null indicator character is specified to be the letter N, then n is also recognized as a null indicator.

reclen=x x is an integer with a maximum value of 32 767. x characters are read for each row, and a new-line character is not used to indicate the end of the row.
striptblanks Truncates any trailing blank spaces when loading data into a variable-length field. If this option is not specified, blank spaces are kept.

In the following example, striptblanks causes the import utility to truncate trailing blank spaces:

   db2 import from myfile.asc of asc
      modified by striptblanks
      method l (1 10, 12 15) messages msgs.txt
      insert into staff

This option cannot be specified together with striptnulls. These are mutually exclusive options.
Note: This option replaces the obsolete t option, which is supported for back-level compatibility only.

striptnulls Truncates any trailing NULLs (0x00 characters) when loading data into a variable-length field. If this option is not specified, NULLs are kept.

This option cannot be specified together with striptblanks. These are mutually exclusive options.
Note: This option replaces the obsolete padwithzero option, which is supported for back-level compatibility only.
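The combined effect of reclen with these two trimming modifiers can be sketched as follows; the function and its names are our own illustration, not part of the import utility:

```python
# Toy ASC reader honouring reclen, striptblanks, and striptnulls
# (a sketch of the documented behaviour, not DB2's implementation).
def read_records(data: bytes, reclen: int,
                 striptblanks: bool = False, striptnulls: bool = False):
    # reclen=x: exactly x bytes are read per row; no new-line terminator
    rows = [data[i:i + reclen] for i in range(0, len(data), reclen)]
    if striptblanks:            # drop trailing blanks only
        rows = [r.rstrip(b" ") for r in rows]
    elif striptnulls:           # mutually exclusive with striptblanks
        rows = [r.rstrip(b"\x00") for r in rows]
    return rows

raw = b"Sanders".ljust(10) + b"Pernal".ljust(10)   # two 10-byte records
print(read_records(raw, 10, striptblanks=True))
```

Without either modifier the trailing padding is kept, which is the documented default.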

DEL (Delimited ASCII) File Format
chardelx x is a single character string delimiter. The default value is a double quotation mark ("). The specified character is used in place of double quotation marks to enclose a character string. (b, c)

The single quotation mark (') can also be specified as a character string delimiter. In the following example, chardel'' causes the import utility to interpret any single quotation mark (') it encounters as a character string delimiter:

   db2 "import from myfile.del of del
      modified by chardel''
      method p (1, 4) insert into staff (id, years)"
coldelx x is a single character column delimiter. The default value is a comma (,). The specified character is used in place of a comma to signal the end of a column. (b, c)

In the following example, coldel; causes the import utility to interpret any semicolon (;) it encounters as a column delimiter:

   db2 import from myfile.del of del
      modified by coldel;
      messages msgs.txt insert into staff
datesiso Date format. Causes all date data values to be imported in ISO format.
decplusblank Plus sign character. Causes positive decimal values to be prefixed with a blank space instead of a plus sign (+). The default action is to prefix positive decimal values with a plus sign.
decptx x is a single character substitute for the period as a decimal point character. The default value is a period (.). The specified character is used in place of a period as a decimal point character. (b, c)

In the following example, decpt; causes the import utility to interpret any semicolon (;) it encounters as a decimal point:

   db2 "import from myfile.del of del
      modified by chardel''
      decpt; messages msgs.txt insert into staff"
delprioritychar The current default priority for delimiters is: record delimiter, character delimiter, column delimiter. This modifier protects existing applications that depend on the older priority by reverting the delimiter priorities to: character delimiter, record delimiter, column delimiter. Syntax:
   db2 import ... modified by delprioritychar ...

For example, given the following DEL data file:

   "Smith, Joshua",4000,34.98<row delimiter>
   "Vincent,<row delimiter>, is a manager", ...
   ... 4005,44.37<row delimiter>

With the delprioritychar modifier specified, there will be only two rows in this data file. The second <row delimiter> will be interpreted as part of the first data column of the second row, while the first and the third <row delimiter> are interpreted as actual record delimiters. If this modifier is not specified, there will be three rows in this data file, each delimited by a <row delimiter>.
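The two priority orders can be sketched with a toy record splitter (our own code, using a newline as the row delimiter and the default double quotation mark as the character delimiter):

```python
# Sketch of delimiter priority during record splitting; this parser
# is illustrative only, not the import utility's implementation.
def split_records(data: str, delprioritychar: bool):
    """With delprioritychar, the character delimiter (") outranks the
    row delimiter, so a newline inside a quoted string stays in the
    field; otherwise the row delimiter always ends the record."""
    records, current, in_string = [], "", False
    for ch in data:
        if ch == '"':
            in_string = not in_string
            current += ch
        elif ch == "\n" and not (delprioritychar and in_string):
            records.append(current)
            current = ""
        else:
            current += ch
    if current:
        records.append(current)
    return records

data = '"Smith, Joshua",4000,34.98\n"Vincent,\n, is a manager",4005,44.37\n'
print(len(split_records(data, delprioritychar=False)))  # row delimiter wins
print(len(split_records(data, delprioritychar=True)))   # quoted newline kept
```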

dldelx x is a single character DATALINK delimiter. The default value is a semicolon (;). The specified character is used in place of a semicolon as the inter-field separator for a DATALINK value. It is needed because a DATALINK value may have more than one sub-value. (b, c)
Note: x must not be the same character specified as the row, column, or character string delimiter.
keepblanks Preserves the leading and trailing blanks in each field of type CHAR, VARCHAR, LONG VARCHAR, or CLOB. Without this option, all leading and trailing blanks that are not inside character delimiters are removed, and a NULL is inserted into the table for all blank fields.
nodoubledel Suppresses recognition of double character delimiters. For more information, see Delimiter Restrictions.
IXF File Format
forcein Directs the utility to accept data despite code page mismatches, and to suppress translation between code pages.

Fixed length target fields are checked to verify that they are large enough for the data. If nochecklengths is specified, no checking is done, and an attempt is made to import each row.

indexixf Directs the utility to drop all indexes currently defined on the existing table, and to create new ones from the index definitions in the PC/IXF file. This option can only be used when the contents of a table are being replaced. It cannot be used with a view, or when an insert-column is specified.
indexschema=schema Uses the specified schema for the index name during index creation. If schema is not specified (but the keyword indexschema is specified), uses the connection user ID. If the keyword is not specified, uses the schema in the IXF file.
nochecklengths If nochecklengths is specified, an attempt is made to import each row, even if the source data has a column definition that exceeds the size of the target table column. Such rows can be successfully imported if code page conversion causes the source data to shrink; for example, 4-byte EUC data in the source could shrink to 2-byte DBCS data in the target, and require half the space. This option is particularly useful if it is known that the source data will fit in all cases despite mismatched column definitions.

Notes:

  1. The import utility does not issue a warning if an attempt is made to use unsupported file types with the MODIFIED BY option. If this is attempted, the import operation fails, and an error code is returned.

  2. (a) Double quotation marks around the date format string are mandatory. Field separators cannot contain any of the following: a-z, A-Z, and 0-9. The field separator should not be the same as the character delimiter or field delimiter in the DEL file format. A field separator is optional if the start and end positions of an element are unambiguous. Ambiguity can exist if (depending on the modifier) elements such as D, H, M, or S are used, because of the variable length of the entries.

    For time stamp formats, care must be taken to avoid ambiguity between the month and the minute descriptors, since they both use the letter M. A month field must be adjacent to other date fields. A minute field must be adjacent to other time fields. Following are some ambiguous time stamp formats:

       "M" (could be a month, or a minute)
       "M:M" (Which is which?)
       "M:YYYY:M" (Both are interpreted as month.)
       "S:M:YYYY" (adjacent to both a time value and a date value)
    

    In ambiguous cases, the utility will report an error message, and the operation will fail.

    Following are some unambiguous time stamp formats:

       "M:YYYY" (Month)
       "S:M" (Minute)
       "M:YYYY:S:M" (Month....Minute)
       "M:H:YYYY:M:D" (Minute....Month)
    
    Note: Some characters, such as double quotation marks and backslashes, must be preceded by an escape character (for example, \).

  3. (b) The character must be specified in the code page of the source data.

    The character code point, instead of the character symbol, can be specified using the syntax xJJ or 0xJJ, where JJ is the hexadecimal representation of the code point. For example, to specify the # character as a column delimiter, use one of the following:

       ... modified by coldel# ...
       ... modified by coldel0x23 ...
       ... modified by coldelX23 ...
    

  4. (c) Delimiter Restrictions lists restrictions that apply to the characters that can be used as delimiter overrides.
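The adjacency rule note 2 gives for resolving the M descriptor can be sketched as a small checker. The code and its element classification are our own rendering of the rule, not the utility's implementation:

```python
# Resolve each M/MM element of a timestamp format as month, minute,
# or ambiguous, using the adjacency rule from note 2: a month field
# must be adjacent to other date fields, a minute field to other
# time fields. Illustrative sketch only.
DATE = {"YYYY", "D", "DD", "DDD"}
TIME = {"H", "HH", "S", "SS", "SSSSS", "UUUUUU", "TT"}

def resolve_m(elements):
    out = []
    for i, e in enumerate(elements):
        if e not in ("M", "MM"):
            out.append(e)
            continue
        neighbours = [elements[j] for j in (i - 1, i + 1)
                      if 0 <= j < len(elements)]
        has_date = any(n in DATE for n in neighbours)
        has_time = any(n in TIME for n in neighbours)
        if has_date and not has_time:
            out.append("month")
        elif has_time and not has_date:
            out.append("minute")
        else:
            out.append("ambiguous")   # no classified neighbour, or both
    return out

print(resolve_m(["S", "M", "YYYY"]))   # the ambiguous case from note 2
```

Running this over the examples in note 2 reproduces the classifications given there: "M:YYYY" yields a month, "S:M" a minute, and "S:M:YYYY" is flagged as ambiguous.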

See Also

sqluexpr - Export

sqluload - Load

