Data Pump import (impdp) is the tool used to perform the data import operation; it reads the data from a file created by Data Pump export. This tool can work in different modes such as:
Full import mode: This is the default operation mode. It imports the entire contents of the source dump file, and you must have the IMP_FULL_DATABASE role granted if the export operation required the EXP_FULL_DATABASE role.
Schema mode: A schema import is specified using the SCHEMAS parameter. In a schema import, only objects belonging to the specified schemas are loaded. You must have the IMP_FULL_DATABASE role if you plan to import schemas you don't own.
Table mode: This mode specifies the tables, table partitions, and dependent objects to be imported. If the expdp command required the EXP_FULL_DATABASE privilege to generate the dump file, then you will require the IMP_FULL_DATABASE role to perform the import operation.
Tablespace mode: In this mode, all objects contained within the specified set of tablespaces are loaded.
Transportable tablespace mode: The transportable tablespace mode imports the previously exported metadata into the target database; this allows you to plug a set of data files into the destination database.
Network mode: This mode allows the user to perform an import operation on the fly, with no intermediate dump file generated; this operation mode is useful for one-time load operations.
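The operation mode is selected by the parameters passed to impdp. As a hedged sketch of how each mode might be invoked (the DP_DIR directory object, dump file names, database link, and credentials are all hypothetical):

```shell
# Full import (default mode): load the entire dump file
impdp system/password directory=DP_DIR dumpfile=full.dmp full=y

# Schema mode: load only the objects owned by HR
impdp system/password directory=DP_DIR dumpfile=hr.dmp schemas=HR

# Table mode: load specific tables and their dependent objects
impdp system/password directory=DP_DIR dumpfile=hr.dmp tables=HR.EMPLOYEES,HR.DEPARTMENTS

# Tablespace mode: load every object stored in the USERS tablespace
impdp system/password directory=DP_DIR dumpfile=full.dmp tablespaces=USERS

# Network mode: pull the HR schema straight from the source, no dump file
impdp system/password network_link=SOURCE_DB schemas=HR
```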
The Data Pump import tool provides three different interfaces:
Command Line Interface: This is the default operation mode. In this mode the user provides no further parameters once the job is started. The only way to manage or modify running parameters afterwards is by entering interactive mode from another Data Pump session.
Interactive Command Interface: This prompt is similar to the interactive expdp prompt; it allows you to manage and modify the parameters of a running job.
Parameter File Interface: This enables you to specify command-line parameters in a parameter file. The PARFILE parameter must be specified on the command line.
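The parameter file interface keeps long option lists readable and reusable. A minimal sketch, assuming a hypothetical hr_imp.par file and the directory objects used later in this section:

```
# hr_imp.par -- hypothetical impdp parameter file
SCHEMAS=HR
DUMPFILE=DEFAULT_DP_DEST:hrSchema
LOGFILE=DEFAULT_LOG_DEST:hrSchema
```

The job is then started with impdp system/oracle parfile=hr_imp.par; note that the PARFILE parameter itself cannot appear inside the parameter file.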
One of the most interesting Data Pump import features is the REMAP functionality. It allows the user to easily redefine how an object will be stored in the database. Among many other things, it lets you load tables into a different schema (REMAP_SCHEMA), or change the tablespace where a segment will be stored (REMAP_TABLESPACE). In the case of a full Data Pump import, you can also remap where the database files will be created by means of the REMAP_DATAFILE parameter.
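The remapping parameters can be combined in a single invocation. A hedged sketch (the directory object, dump file, schema, and tablespace names are hypothetical):

```shell
# Load HR objects into HRDEV, moving their segments from USERS to DEV_DATA
impdp system/password directory=DP_DIR dumpfile=hr.dmp \
  remap_schema=HR:HRDEV \
  remap_tablespace=USERS:DEV_DATA
```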
Let's demonstrate the REMAP_SCHEMA facility. It is common practice to have a user's schema cloned for testing or development environments. So let's assume the HR schema is to be used by a recently created HRDEV user, which requires all the HR schema objects mapped into its own schema.
Create the HRDEV user. In this case the user HRDEV is created with the RESOURCE role granted. This is only for demonstration purposes; you should grant only the minimum required privileges to your production users.
SQL> create user HRDEV
  2  identified by ORACLE
  3  default tablespace USERS;
User created.
SQL> grant CONNECT, RESOURCE to HRDEV;
Grant succeeded.
Export the HR Schema objects using the following command:
$ expdp system/oracle schemas=HR dumpfile=DEFAULT_DP_DEST:hrSchema logfile=DEFAULT_LOG_DEST:hrSchema
Import the HR schema objects and remap them to the HRDEV user's schema using the following command:
$ impdp system/oracle \
dumpfile=DEFAULT_DP_DEST:hrSchema \
logfile=DEFAULT_LOG_DEST:hrSchema \
REMAP_SCHEMA=HR:HRDEV
The import session runs, and when it completes, the HRDEV schema automatically inherits, by means of a cloning process (REMAP_SCHEMA), 35 objects from the HR schema, including tables, views, sequences, triggers, procedures, and indexes.
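You can verify the result by counting the objects now owned by HRDEV; a sketch (the exact counts depend on your HR schema version):

```sql
SQL> select object_type, count(*)
  2  from dba_objects
  3  where owner = 'HRDEV'
  4  group by object_type;
```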
One of the most interesting data pump features is the network mode, which allows a database to receive the data directly from the source without generating an intermediate dump file. This is convenient as it saves space and allows a networked pipeline communication between the source and the destination database.
The network import mode is started when the NETWORK_LINK parameter is added to the impdp command; this parameter references a valid database link that points to the source database. This link is used to connect, as a valid user, to the source database. A simple CREATE DATABASE LINK command is all that is required to set up the source database link at the target database.
It can be seen that the import operation takes place at the 11g database, while the export is taken from a 10g Release 1 database in network mode, using a database link created on the 11g side. This example is a classic data migration from a lower to a higher version using a one-time export operation.
The source database is 10.1.0.5.0 (A), and the destination database version is 11.1.0.6.0 (C). There is a database link named db10gR1 (B) on the 11g database. For this export to work it is important to consider version compatibility. In network mode the source database must be an equal or lower version than the destination database, and the database link can be public, fixed user, or connected user, but not current user. Another restriction of the Data Pump network mode concerns filtering: only full tables can be transferred, not partial table contents.
At the target site a new database link is created:
CREATE DATABASE LINK DB10GR1 CONNECT TO <username> IDENTIFIED BY <password> USING '<TNSAlias>';
This database link is used at import time:
impdp <username>/<password> network_link=<DBLink> tables=<List of Tables to Import> logfile=<Directory Object>:file_name
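Filled in with hypothetical names (the DB10GR1 link created above, two HR tables, and a DP_LOG directory object), the network-mode import might look like:

```shell
# Pull two tables straight from the remote 10g database over DB10GR1
impdp system/oracle network_link=DB10GR1 \
  tables=HR.EMPLOYEES,HR.DEPARTMENTS \
  logfile=DP_LOG:netImport.log
```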
The network import mode provides a practical approach for one-time data transfers. It is convenient and reduces the intermediate file management that is usually required.