What Is Data Pump Import?
Data Pump Import (hereinafter referred to as Import for ease of reading) is
a utility for loading an export dump file set into a target system. The dump
file set is made up of one or more disk files that contain table data, database
object metadata, and control information. The files are written in a
proprietary, binary format. During an import operation, the Data Pump Import
utility uses these files to locate each database object in the dump file set. Import
can also be used to load a target database directly from a source database with
no intervening dump files. This allows export and import operations to run
concurrently, minimizing total elapsed time. This is known as a network import.
Data Pump Import enables you to specify whether a job should move a subset
of the data and metadata from the dump file set or the source database (in the
case of a network import), as determined by the import mode. This is done using
data filters and metadata filters, which are implemented
through Import commands.
Invoking Data Pump Import
The Data Pump Import utility is invoked using the impdp command. The characteristics of the import
operation are determined by the import parameters you specify. These parameters
can be specified either on the command line or in a parameter file.
Note: Do not invoke Import as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
Be aware that if you are performing a Data Pump Import into a table or tablespace created with the NOLOGGING clause enabled, a redo log file may still be generated. The redo that is generated in such a case is generally related to underlying recursive space transactions, data dictionary changes, and index maintenance for indexes on the table that require logging.
It is not possible to start or restart Data Pump jobs on one instance in a
Real Application Clusters (RAC) environment if there are Data Pump jobs
currently running on other instances in the RAC environment.
Data Pump Import Interfaces
You can interact with Data Pump Import by using a command line, a parameter
file, or an interactive-command mode.
- Command-Line Interface: Enables you to specify the Import parameters directly on the command line. The available parameters are described later in this section.
- Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks (see the sketch after this list).
- Interactive-Command
Interface: Stops logging to the terminal and displays the Import prompt,
from which you can enter various commands, some of which are specific to
interactive-command mode. This mode is enabled by pressing Ctrl+C during
an import operation started with the command-line interface or the
parameter file interface. Interactive-command mode is also enabled when you
attach to an executing or stopped job.
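As a minimal sketch of the parameter file interface (the file name hr_imp.par and the directory object, dump file, and log file names are assumptions for illustration, not values from a real system), you would first create the parameter file:
DIRECTORY=dpump_dir1
DUMPFILE=hr.dmp
SCHEMAS=hr
LOGFILE=hr_imp.log
Then invoke Import, pointing at that file:
> impdp hr/hr PARFILE=hr_imp.par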
Data Pump Import Modes
One of the most significant characteristics of an import operation is its
mode, because the mode largely determines what is imported. The specified mode
applies to the source of the operation, either a dump file set or another
database if the NETWORK_LINK parameter is specified.
When the source of the import operation is a dump file set, specifying a
mode is optional. If no mode is specified, then Import attempts to load the
entire dump file set in the mode in which the export operation was run.
Full Import Mode
A full import is specified using the FULL parameter. In full import mode, the entire
content of the source (dump file set or another database) is loaded into the
target database. This is the default for file-based imports. You must have the IMP_FULL_DATABASE
role if the source is another database.
Cross-schema references are not imported for non-privileged users. For
example, a trigger defined on a table within the importing user's schema, but
residing in another user's schema, is not imported.
If the NETWORK_LINK parameter is used for a full import, the IMP_FULL_DATABASE role is required on the target database and the EXP_FULL_DATABASE role is required on the source database.
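As a sketch (the directory object, dump file, and log file names here are assumptions), a file-based full import might look like this:
> impdp system/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=full_imp.log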
Schema Mode
A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned
by the specified schemas are loaded. The source can be a full, table,
tablespace, or schema-mode export dump file set or another database. If you
have the IMP_FULL_DATABASE
role, then a list of schemas can be specified and
the schemas themselves (including system privilege grants) are created in the
database in addition to the objects contained within those schemas.
Cross-schema references are not imported for non-privileged users unless
the other schema is remapped to the current schema. For example, a trigger
defined on a table within the importing user's schema, but residing in another
user's schema, is not imported.
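A hedged example of a schema-mode import run by a privileged user (the schema names, directory object, and dump file are illustrative only):
> impdp system/password SCHEMAS=hr,oe DIRECTORY=dpump_dir1 DUMPFILE=schemas.dmp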
Table Mode
A table-mode import is specified using the TABLES parameter. In table mode, only the specified set
of tables, partitions, and their dependent objects are loaded. The source can
be a full, schema, tablespace, or table-mode export dump file set or another
database. You must have the IMP_FULL_DATABASE
role to specify tables that are not in
your own schema.
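For illustration (the table, directory object, and dump file names are assumptions), a table-mode import of two tables from the current schema might be invoked as:
> impdp hr/hr TABLES=employees,jobs DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp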
Tablespace Mode
A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects
contained within the specified set of tablespaces are loaded, along with the
dependent objects. The source can be a full, schema, tablespace, or table-mode
export dump file set or another database. For unprivileged users, objects not
remapped to the current schema will not be processed.
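A minimal sketch of a tablespace-mode import, assuming illustrative tablespace and file names:
> impdp hr/hr TABLESPACES=tbs_1,tbs_2 DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp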
Transportable Tablespace Mode
A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In
transportable tablespace mode, the metadata from a transportable tablespace
export dump file set or from another database is loaded. The datafiles
specified by the TRANSPORT_DATAFILES
parameter must be made
available from the source system for use in the target database, typically by
copying them over to the target system. This mode requires the IMP_FULL_DATABASE
role.
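A minimal sketch of a file-based transportable import, assuming the datafile has already been copied to the target system and that the dump file, directory object, and datafile path shown are placeholders:
> impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/db1/data/tbs6.f'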
Network Considerations
You can specify a connect identifier in the connect string when you invoke
the Data Pump Import utility. This identifier can specify a database instance
that is different from the current instance identified by the current Oracle
System ID (SID). The connect identifier can be an Oracle*Net connect descriptor
or a name that maps to a connect descriptor. This requires an active listener
(to start the listener, enter lsnrctl start) that can be located using the connect descriptor.
The NETWORK_LINK parameter initiates a network import. This
means that the impdp
client initiates the import request. The
server for that request contacts the remote source database referenced by the
database link in the NETWORK_LINK
parameter, retrieves the data, and writes
it directly back to the local database. There are no dump files involved.
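A hedged sketch of a network import follows; the database link name source_db, the connect identifier source_tns, and the other names are assumptions. The link must already exist and be usable by the importing user:
SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY hr USING 'source_tns';
> impdp hr/hr SCHEMAS=hr NETWORK_LINK=source_db DIRECTORY=dpump_dir1 LOGFILE=net_imp.log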
Filtering During Import Operations
Data Pump Import
provides much greater data and metadata filtering capability than was provided
by the original Import utility.
Data filters specify restrictions on the rows that
are to be imported. These restrictions can be based on partition names and on
the results of subqueries. Each data filter can only be specified once per
table and once per job. If different filters using the same name are applied to
both a particular table and to the whole job, the filter parameter supplied for
the specific table will take precedence.
Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.
Metadata filters identify a set of objects to be included or excluded from
a Data Pump operation. For example, you could request a full import, but
without Package Specifications or Package Bodies.
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with
the identified object. For
example, if a filter specifies that a package is to be included in an
operation, then grants upon that package will also be included. Likewise, if a
table is excluded by a filter, then indexes, constraints, grants, and triggers
upon the table will also be excluded by the filter. To see which objects can be
filtered, you can perform queries on the following views: DATABASE_EXPORT_OBJECTS, SCHEMA_EXPORT_OBJECTS, and TABLE_EXPORT_OBJECTS.
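As an illustration of metadata filtering (the file and directory names are placeholders), the following command excludes all packages from a full import, and the query shows which object paths can be filtered at the database level:
> impdp system/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp EXCLUDE=PACKAGE
SQL> SELECT object_path, comments FROM database_export_objects WHERE object_path LIKE '%PACKAGE%';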
ATTACH
When you specify the ATTACH parameter, you cannot specify any other parameters except for the connection string (user/password). You cannot attach to a job in another schema
unless it is already running. If the dump file set or master table for the job
have been deleted, the attach operation will fail. Altering the master table in
any way will lead to unpredictable results.
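A minimal sketch of attaching to a job (the job name sys_import_full_01 is the style of default name Data Pump generates, used here only as an assumption):
> impdp hr/hr ATTACH=sys_import_full_01
Once attached, interactive-command mode commands such as STATUS, STOP_JOB, and CONTINUE_CLIENT can be entered at the Import> prompt.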
CONTENT: Default: ALL. Enables you to filter what is loaded during the import operation.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
DUMPFILE: Specifies the names and, optionally, the directory objects of the dump file set that was created by Export.
ESTIMATE: Specifies the method that Import will use to estimate how much disk space each table in a network import will consume.
ESTIMATE={BLOCKS | STATISTICS}
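As a hedged example of the CONTENT parameter (the directory object and dump file names are illustrative), a metadata-only load would be specified as:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY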
REMAP_DATAFILE: Default: none. Changes the name of the source datafile to the target datafile name in all SQL statements where the source datafile is referenced: CREATE TABLESPACE, CREATE LIBRARY, and CREATE DIRECTORY.
REMAP_DATAFILE=source_datafile:target_datafile
Remapping datafiles is useful when you move databases between platforms
that have different file naming conventions. The source_datafile and target_datafile names should be exactly as you want them
to appear in the SQL statements where they are referenced. Oracle recommends
that you enclose datafile names in quotation marks to eliminate ambiguity on
platforms for which a colon is a valid file specification character.
For example, the following entries in a parameter file remap a datafile during a full import:
DIRECTORY=dpump_dir1 FULL=Y DUMPFILE=db_full.dmp
REMAP_DATAFILE='DB1$:[HRDATA.PAYROLL]tbs6.f':'/db1/hrdata/payroll/tbs6.f'
REMAP_SCHEMA: Default: none. Loads all objects from the source schema into a target schema.
REMAP_SCHEMA=source_schema:target_schema
Multiple
REMAP_SCHEMA
lines can be specified, but the source
schema must be different for each one. However, different source schemas can
map to the same target schema. The mapping may not be 100 percent complete,
because there are certain schema references that Import is not capable of
finding. For example, Import will not find schema references embedded within
the body of definitions of types, views, procedures, and packages.
> expdp SYSTEM/password SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
> impdp SYSTEM/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott
In this example, if user
scott
already exists before the import, then
the Import REMAP_SCHEMA
command will add objects from the hr
schema into the existing scott
schema. You can connect to the scott
schema after the import by using the existing
password (without resetting it).
If user
scott
does not exist before you execute the
import operation, Import automatically creates it with an unusable password.
This is possible because the dump file, hr.dmp, was created by SYSTEM,
which has the privileges necessary to create a dump file that contains the
metadata needed to create a schema. However, you cannot connect to scott
on completion of the import, unless you reset the
password for scott
on the target database after the import
completes.
REMAP_TABLESPACE: Default: none. Remaps all objects selected for import with persistent data in the source
tablespace to be created in the target tablespace.
REMAP_TABLESPACE=source_tablespace:target_tablespace
Multiple
REMAP_TABLESPACE
parameters can be specified, but no two can have
the same source tablespace. The target schema must have sufficient quota in the
target tablespace.
> impdp hr/hr REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp
REUSE_DATAFILES: Default: n. Specifies whether or not the import job should reuse existing datafiles for tablespace creation.
REUSE_DATAFILES={y | n}
If the default (n) is used and the datafiles specified in CREATE TABLESPACE statements already exist, an error message from the failing CREATE TABLESPACE statement is issued, but the import job continues. If this parameter is specified as y, the existing datafiles are reinitialized. Be aware that specifying y can result in a loss of data.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=reuse.log REUSE_DATAFILES=Y
REMAPPING DATA:
Oracle 11g introduces a parameter that allows you to modify column data during export or import according to your own remapping scheme. The REMAP_DATA parameter specifies a remap function that takes the original value of the designated column as a source and returns a remapped value that will replace the original value in the dump file. The syntax of the parameter is as follows:
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
expdp hr/passwd DIRECTORY=dp_dir DUMPFILE=remap.dmp
TABLES=hr.employees REMAP_DATA=hr.employees.last_name:hr.remap_pckg.modifychar
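The remap function itself is an ordinary PL/SQL function whose argument and return types must match the column being remapped. The package hr.remap_pckg with function modifychar is named in the example above, but its body is not shown in the source, so the following is only a minimal sketch of what such a masking function might look like:
CREATE OR REPLACE PACKAGE remap_pckg AS
  FUNCTION modifychar (p_value IN VARCHAR2) RETURN VARCHAR2;
END remap_pckg;
/
CREATE OR REPLACE PACKAGE BODY remap_pckg AS
  -- Replace every character of the input with 'X', preserving the
  -- original length, so sensitive values are masked in the dump file.
  FUNCTION modifychar (p_value IN VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    IF p_value IS NULL THEN
      RETURN NULL;
    END IF;
    RETURN RPAD('X', LENGTH(p_value), 'X');
  END modifychar;
END remap_pckg;
/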
RENAMING TABLES DURING EXPORT OR IMPORT:
In Oracle 11g, Data Pump allows you to rename a table during the import process with the REMAP_TABLE parameter. The syntax of this parameter is as follows:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
Following are examples of using this parameter:
impdp dumpfile=dp_dir:docs.dmp REMAP_TABLE=hr.docs:docs2 userid=hr/password
impdp dumpfile=dp_dir:docs.dmp REMAP_TABLE=hr.docs.part1:docs3 userid=hr/password
Note: Tables will not be remapped if they already exist, even if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.
IGNORING NONDEFERRED CONSTRAINTS:
In Oracle 11g, setting the DATA_OPTIONS parameter
to SKIP_CONSTRAINT_ERRORS will cause the import program to skip errors generated by the nondeferred
database constraints. In the case of deferred constraints, imports will always be rolled back.
impdp Robert/robert DIRECTORY=data_pump_dir DUMPFILE=remap.dmp
tables=ROBERT.NAMES data_options=SKIP_CONSTRAINT_ERRORS
SCHEMAS: Default: the current user's schema.
SCHEMAS=schema_name [,...]
If you have the IMP_FULL_DATABASE role, you can use this parameter to
perform a schema-mode import by specifying a list of schemas
to import. First, the schemas themselves are created (if they do not already
exist), including system and role grants, password history, and so on. Then all
objects contained within the schemas are imported. Nonprivileged users can
specify only their own schemas or schemas remapped to their own schemas. In
that case, no information about the schema definition is imported, only the
objects contained within it.
Schema-mode is
the default mode when you are performing a network-based import.
> impdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp
SKIP_UNUSABLE_INDEXES: Default: the value of the Oracle Database configuration parameter SKIP_UNUSABLE_INDEXES. Specifies whether or not Import skips loading tables that have indexes that were set to the Index Unusable state (by either the system or the user).
SKIP_UNUSABLE_INDEXES={y | n}
If SKIP_UNUSABLE_INDEXES is set to y, and a table or partition with an index in the Unusable state is encountered, the load of that table or partition proceeds anyway, as if the unusable index did not exist.
If SKIP_UNUSABLE_INDEXES is set to n, and a table or partition with an index in the Unusable state is encountered, that table or partition is not loaded. Other tables, with indexes not previously set Unusable, continue to be updated as rows are inserted.
Note: This parameter is useful only when importing data into an existing table.
It has no practical effect when a table is created as part of an import because
in that case, the table and indexes are newly created and will not be marked
unusable.
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log SKIP_UNUSABLE_INDEXES=y
SQLFILE: Default: none. Specifies a file into which all of the SQL DDL that Import would have
executed, based on other parameters, is written.
SQLFILE=[directory_object:]file_name
The file_name specifies where the import job will write the DDL
that would be executed during the job. The SQL is not actually executed, and
the target system remains unchanged. The file is written to the directory
object specified in the DIRECTORY parameter, unless another directory_object is explicitly specified here. Any existing file
that has a name matching the one specified with this parameter is overwritten.
Note that passwords are not included in the SQL file. For example, if a CONNECT statement is part of the DDL that was executed, it will be replaced by a comment with only the schema name shown. In the following example, the dashes indicate that a comment follows, and the hr schema name is shown, but not the password.
-- CONNECT hr
Therefore, before you can execute the SQL file, you must edit it by removing the dashes indicating a comment and adding the password for the hr schema (in this case, the password is also hr), as follows:
CONNECT hr/hr
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql
STATUS: Default: 0. Specifies how frequently, in seconds, job status is displayed in logging mode. With the default value of 0, no periodic status is displayed beyond the information reported as each object is processed.
STREAMS_CONFIGURATION: Default: y. Specifies whether or not to import any general Streams metadata that may be present in the export dump file.
STREAMS_CONFIGURATION={y | n}
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp STREAMS_CONFIGURATION=n
TABLE_EXISTS_ACTION: Default: SKIP (note that if CONTENT=DATA_ONLY is specified, the default is APPEND, not SKIP). Tells Import what to do if the table it is trying to create already exists.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
The possible values have the following effects:
- SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
- APPEND loads rows from the source and leaves existing rows unchanged.
- TRUNCATE deletes existing rows and then loads rows from the source.
- REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
The following considerations apply when you are using these options:
- When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
- When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.
- When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.
- The existing table is loaded using the external tables access method if the existing tables have active constraints and triggers. However, be aware that if any row violates an active constraint, the load fails and no data is loaded. If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints.
- When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may wish to compress your data after the load.
TRUNCATE cannot be used on clustered tables or over network links.
> impdp hr/hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLE_EXISTS_ACTION=REPLACE
TRANSFORM:
Default: none. Enables you to alter object creation DDL for
specific objects, as well as for all applicable objects being loaded.
TRANSFORM = transform_name:value[:object_type]
The transform_name specifies the name of the transform. The possible
options are as follows:
- SEGMENT_ATTRIBUTES - If the value is specified as y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included, with appropriate DDL. The default is y.
- STORAGE - If the value is specified as y, the storage clauses are included, with appropriate DDL. The default is y. This parameter is ignored if SEGMENT_ATTRIBUTES=n.
- OID - If the value is specified as n, the assignment of the exported OID during the creation of object tables and types is inhibited. Instead, a new OID is assigned. This can be useful for cloning schemas, but does not affect referenced objects. The default value is y.
- PCTSPACE - The value supplied for this transform must be a number greater than zero. It represents the percentage multiplier used to alter extent allocations and the size of data files.
Note that you can use the PCTSPACE transform in conjunction with the Data Pump Export SAMPLE parameter so that the size of storage allocations matches the sampled data subset. The type of value specified depends on the transform used. Boolean values (y/n) are required for the SEGMENT_ATTRIBUTES, STORAGE, and OID transforms. Integer values are required for the PCTSPACE transform.
> impdp hr/hr TABLES=hr.employees \
DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp \
TRANSFORM=SEGMENT_ATTRIBUTES:n:table
TRANSPORT_DATAFILES:
Default: none. Specifies a list of datafiles to be imported into the target database by a
transportable-mode import. The files must already have been copied from the
source database system.
TRANSPORT_DATAFILES=datafile_name
The datafile_name must include an absolute directory path
specification (not
a directory object name) that is valid on the system where the target database
resides.
For example, in a parameter file:
DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_DATAFILES='/user01/data/tbs1.f'
TRANSPORT_FULL_CHECK: Default: n. Specifies whether or not to verify that the specified transportable tablespace set has no dependencies.
TRANSPORT_FULL_CHECK={y | n}
If TRANSPORT_FULL_CHECK=y, then Import verifies that there are no dependencies between those objects inside the transportable set and those outside the transportable set. The check addresses two-way dependencies. For example, if a table is inside the transportable set but its index is not, a failure is returned and the import operation is terminated. Similarly, a failure is also returned if an index is in the transportable set but the table is not.
If TRANSPORT_FULL_CHECK=n, then Import verifies only that there are no objects within the transportable set that are dependent on objects outside the transportable set. This check addresses a one-way dependency. For example, a table is not dependent on an index, but an index is dependent on a table, because an index without a table has no meaning. Therefore, if the transportable set contains a table, but not its index, then this check succeeds. However, if the transportable set contains an index, but not the table, the import operation is terminated.
TRANSPORT_TABLESPACES: Default: none. Specifies that you want to
perform a transportable-tablespace-mode import.
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the TRANSPORT_TABLESPACES parameter to specify a list of tablespace names for which object metadata will be imported from the source database into the target database. For example, in a parameter file:
DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
TRANSPORT_TABLESPACES=tbs_6 TRANSPORT_FULL_CHECK=n
TRANSPORT_DATAFILES='/user01/data/tbs6.f'