Data Pump Export (Unit II)
What Is Data Pump Export?
Data Pump Export is a utility for unloading data and metadata into a set of
operating system files called a dump
file set. The dump file set can be imported only by the Data Pump Import
utility, either on the same system or after it has been moved to another
system and loaded there.
The dump file set is made up of one or more disk files
that contain table data, database object metadata, and control information. The
files are written in a proprietary, binary format. During an import operation,
the Data Pump Import utility uses these files to locate each database object in
the dump file set.
Because the dump files are written by the server, rather than by the
client, the database administrator (DBA) must create directory objects. Data
Pump Export enables you to specify that a job should move a subset of the data
and metadata, as determined by the export mode.
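Because directory objects are server-side objects, they are created with SQL rather than from the expdp client. The following is only a minimal sketch; the path /u01/app/oracle/dpump and the grant to the hr user are assumptions for illustration:
SQL> CREATE OR REPLACE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
The operating system directory must already exist on the database server; Oracle does not create it for you.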
Invoking Data Pump Export
The Data Pump Export utility is invoked using the expdp command. The
characteristics of the export operation are determined by the Export parameters
you specify. These parameters can be specified either on the command line or in
a parameter file.
Note:
- Do not invoke Export as SYSDBA, except at the request of Oracle technical
support. SYSDBA is used internally and has specialized functions; its behavior
is not the same as for general users.
- It is not possible to start or restart Data Pump jobs on one instance in a
Real Application Clusters (RAC) environment if there are Data Pump jobs
currently running on other instances in the RAC environment.
Data Pump Export Interfaces
You can interact with Data Pump Export by using a command line, a parameter
file, or an interactive-command mode.
- Command-Line Interface: Enables you to specify most of the Export parameters
directly on the command line. The parameters available in the command-line
interface are described later in this unit, under Parameters Available in
Export's Command-Line Mode.
- Parameter File Interface: Enables you to specify command-line parameters in a
parameter file. The only exception is the PARFILE parameter, because parameter
files cannot be nested. The use of parameter files is recommended if you are
using parameters whose values require quotation marks.
- Interactive-Command Interface: Stops logging to the terminal and displays the
Export prompt, from which you can enter various commands, some of which are
specific to interactive-command mode (see the sketch after this list). This
mode is enabled by pressing Ctrl+C during an export operation started with the
command-line interface or the parameter file interface. Interactive-command
mode is also enabled when you attach to an executing or stopped job.
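As a sketch of interactive-command mode (the job name export_job is an assumed example), pressing Ctrl+C during a running export, or attaching to the job later, displays the Export prompt, where commands such as STATUS, ADD_FILE, STOP_JOB, and START_JOB can be entered:
> expdp hr/hr ATTACH=export_job
Export> STATUS
Export> STOP_JOB=IMMEDIATE
STOP_JOB=IMMEDIATE stops the job abruptly but leaves it in a state from which it can later be restarted with START_JOB.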
Data Pump Export Modes
Full Export Mode
A full export is specified using the
FULL
parameter. In a full database export, the entire database
is unloaded. This mode requires that you have the EXP_FULL_DATABASE
role.
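For example, a user holding the EXP_FULL_DATABASE role could run a full export as follows (the directory object and file names are assumed for illustration):
> expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=full.dmp FULL=y LOGFILE=full_export.log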
Schema Mode
A schema export is specified using the
SCHEMAS
parameter. This is the default export mode. If
you have the EXP_FULL_DATABASE
role, then you can specify a list of
schemas and optionally include the schema definitions themselves, as well as
system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE
role, you can export only your own schema.
Cross-schema references are not exported unless the referenced schema is
also specified in the list of schemas to be exported. For example, a trigger
defined on a table within one of the specified schemas, but that resides in a
schema not explicitly specified, is not exported. This is also true for
external type definitions upon which tables in the specified schemas depend. In
such a case, it is expected that the type definitions already exist in the
target instance at import time.
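As a sketch (directory object and file names are assumed), a privileged user can export several schemas, while a non-privileged user simply exports their own schema:
> expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=hr_oe.dmp SCHEMAS=hr,oe
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp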
Table Mode
A table export is specified using the
TABLES
parameter. In table mode, only a specified set of
tables, partitions, and their dependent objects are unloaded. You must have the
EXP_FULL_DATABASE
role to specify tables that are not in your own
schema. All specified tables must reside in a single schema. Note that type
definitions for columns are not
exported in table mode. It is expected that the type definitions already exist
in the target instance at import time. Also, as in schema exports, cross-schema
references are not exported.
Tablespace Mode
A tablespace export is specified using the
TABLESPACES
parameter. In tablespace mode, only the tables
contained in a specified set of tablespaces are unloaded. If a table is
unloaded, its dependent objects are also unloaded. Both object metadata and
data are unloaded. In tablespace mode, if any part of a table resides in the
specified set, then that table and all of its dependent objects are exported.
Privileged users get all tables. Nonprivileged users get only the tables in
their own schemas.
Transportable Tablespace Mode
A transportable tablespace export is specified using
the
TRANSPORT_TABLESPACES
parameter. In transportable tablespace
mode, only the metadata for the tables (and their dependent objects) within a
specified set of tablespaces is unloaded. This allows the tablespace datafiles
to then be copied to another Oracle database and incorporated using
transportable tablespace import. This mode requires that you have the EXP_FULL_DATABASE
role.
Unlike tablespace mode, transportable tablespace mode requires that the
specified tables be completely self-contained. That is, the partitions of all
tables in the set must also be in the set. Transportable tablespace exports cannot be restarted
once stopped. Also, they cannot have a degree of parallelism greater than 1.
Network Considerations
You can specify a connect identifier in the connect string when you invoke
the Data Pump Export utility. This identifier can specify a database instance
that is different from the current instance identified by the current Oracle
System ID (SID). The connect identifier can be an Oracle*Net connect descriptor
or a name that maps to a connect descriptor. This requires an active listener
(to start the listener, enter
lsnrctl
start
) that can be located using the connect descriptor. The following example
invokes Export for user hr
, using the connect descriptor named inst1:
expdp hr/hr@inst1 DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
The local Export
client connects to the database instance identified by the connect descriptor
inst1
to export
the data on that instance.
Do not confuse invoking the Export utility using a connect identifier with
an export operation specifying the Export
NETWORK_LINK
command-line parameter, which initiates an export
via a database link. In this case, the local Export client connects to the
database instance identified by the command-line connect string, retrieves the
data to be exported from the database instance identified by the database link,
and writes the data to a dump file set on the connected database instance.
Filtering During Export Operations
Data Pump Export
provides much greater data and metadata filtering capability than was provided
by the original Export utility.
Data-specific filtering is implemented through the
QUERY
and SAMPLE
parameters,
which specify restrictions on the table rows that are to be exported. Data
filtering can also occur indirectly as a result of metadata filtering, which
can include or exclude table objects along with any associated row data. Each data filter can be specified once per table within a job. If
different filters using the same name are applied to both a particular table
and to the whole job, the filter parameter supplied for the specific table will
take precedence.
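As a sketch of that precedence (the parameter file name and filter values are assumptions), a parameter file, say filter.par, could apply one row filter to the whole job and a more specific filter to a single table. For the employees table the table-specific filter wins, while the job-wide filter still applies to departments:
DIRECTORY=dpump_dir1
DUMPFILE=filtered.dmp
TABLES=employees,departments
QUERY="WHERE department_id > 10"
QUERY=employees:"WHERE department_id > 10 AND salary > 10000"
> expdp hr/hr parfile=filter.par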
Metadata
filtering is implemented through the
EXCLUDE
and INCLUDE
parameters. The EXCLUDE
and INCLUDE
parameters are mutually exclusive. Metadata
filters identify a set of objects to be included or excluded from an Export or
Import operation. For example, you could request a full export, but without
Package Specifications or Package Bodies.
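As a hedged sketch of that example (file names assumed), a full export that omits packages, and therefore their specifications and bodies, could be requested as follows:
> expdp system/password FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full_nopkg.dmp EXCLUDE=PACKAGE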
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with
the identified object. For example, if a filter specifies that an index is to be included in an
operation, then statistics from that index will also be included. Likewise, if
a table is excluded by a filter, then indexes, constraints, grants, and
triggers upon the table will also be excluded by the filter.
If multiple filters are specified for an object type, an implicit
AND
operation is applied to them. That is, objects
pertaining to the job must pass all
of the filters applied to their object types. The same metadata filter name can
be specified multiple times within a job.
To see which
objects can be filtered, you can perform queries on the following views:
DATABASE_EXPORT_OBJECTS
, SCHEMA_EXPORT_OBJECTS
, and TABLE_EXPORT_OBJECTS.
For
example, you could perform the following query:
SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';
Parameters Available in Export's Command-Line Mode
ATTACH: Default: job currently in the user's schema, if there
is only one. Attaches the client session to an existing
export job and automatically places you in the
interactive-command interface. Export displays a description of the job to
which you are attached and also displays the Export prompt.
ATTACH [=[schema_name.]job_name]
The schema_name is optional. To specify a schema other
than your own, you must have the
EXP_FULL_DATABASE
role.
The job_name is optional if only one export job
is associated with your schema and the job is active. To attach to a stopped
job, you must supply the job name. To see a list of Data Pump job names, you
can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view. When you are
attached to the job, Export displays a description of the job and then displays
the Export prompt.
When you specify the
ATTACH
parameter, you cannot specify any other
parameters except for the connection string (user
/password
). You cannot attach to a job in another schema
unless it is already running. If the dump file set or master table for the job
have been deleted, the attach operation will fail. Altering the master table in
any way will lead to unpredictable results.
> expdp hr/hr ATTACH=hr.export_job
COMPRESSION:
Default:
METADATA_ONLY.
Specifies
whether to compress metadata before writing to the dump file set.
compression={all |
data_only | metadata_only | none}
Following is an example:
$expdp full=yes userid="'/ as sysdba'" dumpfile=dbadir:full.compress.dmp
compression=all
METADATA_ONLY
results in all metadata being written to
the dump file in compressed format. This is the default. NONE disables
compression for the entire unload.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
This command will execute a schema-mode export that will compress all
metadata before writing it out to the dump file,
hr_comp.dmp
. It defaults to a schema-mode export because no export mode is specified.
CONTENT:
Default:
ALL.
Enables you to filter what Export unloads: data only, metadata only,
or both.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
ALL unloads both data and metadata. This is the default. DATA_ONLY unloads only
table row data; no database object definitions are unloaded. METADATA_ONLY
unloads only database object definitions; no table row data is unloaded.
The
CONTENT=METADATA_ONLY
parameter cannot be used in conjunction
with the parameter TRANSPORT_TABLESPACES
(transportable-tablespace-mode).
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY
This command will
execute a schema-mode export that will unload only the metadata associated with
the
hr
schema. It defaults to a schema-mode export of
the hr
schema because no export mode is specified.
DIRECTORY: Default:
DATA_PUMP_DIR.
Specifies the default location to which Export can
write the dump file set and the log file.
DIRECTORY=directory_object
The directory_object is the name of a database directory object (not the name
of an actual directory). Upon installation, privileged users
have access to a default directory object named
DATA_PUMP_DIR
. Users with access to DATA_PUMP_DIR
need not use the DIRECTORY
parameter at all. A directory object specified on the
DUMPFILE
or LOGFILE
parameter
overrides any directory object that you specify for the DIRECTORY
parameter.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY
DUMPFILE: Default: expdat.dmp.
Specifies the names, and optionally, the directory objects of dump files for an
export job.
DUMPFILE=[directory_object:]file_name [, ...]
The directory_object is optional if one has already been
established by the
DIRECTORY
parameter. If you supply a value here, it
must be a directory object that already exists and that you have access to. A
database directory object that is specified as part of the DUMPFILE
parameter overrides a value specified by the DIRECTORY
parameter or by the default directory object.
You can supply multiple file_name specifications as a comma-delimited list
or in separate
DUMPFILE
parameter specifications. If no extension
is given for the filename, then Export uses the default file extension of .dmp
. The filenames can contain a substitution
variable (%U
), which implies that multiple files may be
generated. The substitution variable is expanded in the resulting filenames
into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at
99. If a file specification contains two substitution variables, both are
incremented at the same time. For example, exp%Uaa%U.dmp
would resolve to exp01aa01.dmp
, exp02aa02.dmp
, and
so forth.
If the
FILESIZE
parameter is specified, each dump file will have
a maximum of that size in bytes and be nonextensible. If more space is required
for the dump file set and a template with a substitution variable (%U
) was supplied, a new dump file is automatically
created of the size specified by FILESIZE,
if there
is room on the device.
As each file specification or file template containing a substitution
variable is defined, it is instantiated into one fully qualified filename and
Export attempts to create it. The file specifications are processed in the
order in which they are specified. If the job needs extra files because the
maximum file size is reached, or to keep parallel workers active, then
additional files are created if file templates with substitution variables were
specified.
If there are
preexisting files that match the resulting filenames, an error is generated.
The existing dump files will not be overwritten.
> expdp hr/hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp PARALLEL=3
The dump file, exp1.dmp,
will be written to the path associated with the directory object dpump_dir2
because dpump_dir2
was specified as part of the dump file name, and therefore overrides the
directory object specified with the DIRECTORY
parameter. Because all three parallel processes will be given work to
perform during this job, the exp201.dmp
and exp202.dmp
dump files will be created and they will be written to the path associated
with the directory object, dpump_dir1
, that was specified with the DIRECTORY
parameter.
In the Oracle 11g Data Pump Export utility, the new parameter REUSE_DUMPFILES
enables you to overwrite a preexisting dump file. Following is an example:
expdp hr
DIRECTORY=dp_dir DUMPFILE=hr.dmp TABLES=employees REUSE_DUMPFILES=y
ENCRYPTION_PASSWORD: Default: none. Specifies
a key for encrypting encrypted column data in the export dumpfile.
ENCRYPTION_PASSWORD = password
The password value that is supplied specifies a key for
re-encrypting encrypted table columns so that they are not written as clear
text in the dump file set. If the export operation involves encrypted table
columns, but an encryption password is not supplied, then the encrypted columns
will be written to the dump file set as clear text and a warning will be issued.
To use the
ENCRYPTION_PASSWORD
parameter, you must have Transparent Data
Encryption set up.
Note: There is no connection or dependency between the key specified with the
Data Pump
ENCRYPTION_PASSWORD
parameter and the key specified with the ENCRYPT
keyword when the table with encrypted columns was initially created. For
example, suppose a table is created as follows, with an encrypted column whose
key is xyz
: CREATE TABLE emp (col1 VARCHAR2(256) ENCRYPT IDENTIFIED BY "xyz");
When you export
the
emp
table, you can supply any arbitrary value for ENCRYPTION_PASSWORD
. It does not have to be xyz
.
The
ENCRYPTION_PASSWORD
parameter applies only to columns that
already have encrypted data. Data Pump neither provides nor supports encryption
of entire dump files.
For network exports, the
ENCRYPTION_PASSWORD
parameter is not supported with
user-defined external tables that have encrypted columns. The table will be
skipped and an error message will be displayed, but the job will continue.
Encryption attributes for all columns must match between the exported table
definition and the target table. For example, suppose you have a table,
EMP
, and one of its columns is named EMPNO
. Both of the following situations would result in
an error because the encryption attribute for the EMPNO column in the source
table would not match the encryption attribute for the EMPNO column in the
target table:
- The EMP table is exported with the EMPNO column being encrypted, but prior to
importing the table you remove the encryption attribute from the EMPNO column.
- The EMP table is exported without the EMPNO column being encrypted, but prior
to importing the table you enable encryption on the EMPNO column.
expdp hr/hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir DUMPFILE=dpcd2be1.dmp ENCRYPTION_PASSWORD=123456
To secure the
exported dump file, the following new parameters were introduced in Oracle 11g
Data Pump: ENCRYPTION, ENCRYPTION_PASSWORD, and ENCRYPTION_ALGORITHM. To enable
encryption, you must
specify either the ENCRYPTION
or ENCRYPTION_PASSWORD parameter,
or both. Those parameters are valid only in the Enterprise Edition of Oracle
Database 11g.
ENCRYPTION Parameter
This parameter
specifies whether or not to encrypt data before writing it to the dump file
set. The default value depends upon the combination of encryption-related
parameters that are used. If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION parameter defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to
NONE.
The ENCRYPTION
parameter has the following options:
ENCRYPTION = {all
| data_only | encrypted_columns_only | metadata_only | none}
Following is an
example:
expdp hr
DUMPFILE=dp_dir:hr_enc.dmp JOB_NAME=enc ENCRYPTION=data_only
ENCRYPTION_PASSWORD=mypassword
ENCRYPTION_ALGORITHM Parameter
This parameter
specifies which cryptographic algorithm should be used to perform the
encryption. Following is its syntax:
ENCRYPTION_ALGORITHM
= { AES128 | AES192 | AES256 }
The ENCRYPTION_ALGORITHM parameter requires that you
also specify either the ENCRYPTION or ENCRYPTION_PASSWORD
parameter.
Following is an example:
expdp hr
DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp
ENCRYPTION_PASSWORD=mypassword
ENCRYPTION_ALGORITHM=AES128
ENCRYPTION_MODE Parameter
This parameter
works the same way encryption mode operates in RMAN in Oracle 10g. It
specifies the type of security to use when encryption and decryption are
performed. Its syntax is as follows:
ENCRYPTION_MODE =
{ DUAL | PASSWORD | TRANSPARENT }
PASSWORD mode
requires that you provide a password when creating encrypted dump file sets.
TRANSPARENT mode
allows an encrypted dump file set to be created without any intervention from a
database
administrator
(DBA), provided the required Oracle Encryption Wallet is available.
DUAL mode creates
a dump file set that can later be imported either transparently or by
specifying a password that was used when the dual-mode encrypted dump file set
was created.
Following is an
example:
expdp hr
DIRECTORY=dp_dir DUMPFILE=hr_enc.dmp
ENCRYPTION=all
ENCRYPTION_PASSWORD=mypassword
ENCRYPTION_ALGORITHM=AES256
ENCRYPTION_MODE=dual
ESTIMATE
: Default:
BLOCKS.
Specifies the method that
Export will use to estimate how much disk space each
table in the export job will consume (in bytes). The estimate is printed in the
log file and displayed on the client's standard output device. The estimate is
for table row data only; it does not include metadata.
ESTIMATE={BLOCKS | STATISTICS}
BLOCKS - The estimate is calculated by multiplying the number of database
blocks used by the source objects times the appropriate block sizes.
STATISTICS - The estimate is calculated using statistics for each table. For
this method to be as accurate as possible, all tables should have been analyzed
recently.
If the Data Pump export job involves compressed tables, the default size
estimation given for the compressed table is inaccurate when
ESTIMATE=BLOCKS
is used. This is because the size estimate
does not reflect that the data was stored in a compressed form. To get a more
accurate size estimate for compressed tables, use ESTIMATE=STATISTICS
.
> expdp hr/hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=estimate_stat.dmp
ESTIMATE_ONLY: Default:
n.
Instructs Export to estimate the space that a job
would consume, without actually performing the export operation.
ESTIMATE_ONLY={y | n}
If
ESTIMATE_ONLY
=y
, then Export estimates
the space that would be consumed, but quits without actually performing the
export operation.
> expdp hr/hr ESTIMATE_ONLY=y NOLOGFILE=y
EXCLUDE:
Default: none. Enables you to filter the metadata that is exported by specifying objects and
object types that you want excluded from the export
operation.
EXCLUDE=object_type[:name_clause] [, ...]
All object types for the given mode of export will be included except those
specified in an
EXCLUDE
statement. If an object is excluded, all
of its dependent objects are also excluded. For example, excluding a table will
also exclude all indexes and triggers on the table.
The name_clause is optional. It allows selection of
specific objects within an object type. It is a SQL expression used as a filter
on the type's object names. It consists of a SQL operator and the values
against which the object names of the specified type are to be compared. The
name clause applies only to object types whose instances have names (for
example, it is applicable to
TABLE
, but not to GRANT
). The name clause must be separated from the object type with a colon and
enclosed in double quotation marks, because single-quotation marks are required
to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'EMP%'"
to exclude all indexes whose names start
with emp
.
If no name_clause is provided, all objects of the specified type are excluded. More than one
EXCLUDE
statement can be specified.
Oracle recommends that you place
EXCLUDE
clauses in a parameter file to avoid having to use escape characters on
the command line. If the object_type you specify is CONSTRAINT
, GRANT
, or USER
, you should be aware of the effects this will
have, as described in the following paragraphs.
Excluding Constraints
The following constraints cannot be explicitly excluded:
- NOT NULL constraints
- Constraints needed for the table to be created and loaded successfully; for
example, primary key constraints for index-organized tables, or REF SCOPE and
WITH ROWID constraints for tables with REF columns.
This means that the following EXCLUDE statements will be interpreted as
follows:
- EXCLUDE=CONSTRAINT will exclude all (nonreferential) constraints, except for
NOT NULL constraints and any constraints needed for successful table creation
and loading.
- EXCLUDE=REF_CONSTRAINT will exclude referential integrity (foreign key)
constraints.
Specifying
EXCLUDE
=GRANT
excludes object grants on all object
types and system privilege grants. Specifying EXCLUDE
=USER
excludes only the definitions of users, not the
objects contained within users' schemas. To exclude a specific user and all of
that user's objects, use a schema filter such as EXCLUDE=SCHEMA:"='HR'".
If you try to exclude a user by using a statement such as EXCLUDE=USER:"='HR'",
then only the information used in CREATE USER hr DDL statements will be
excluded, and you may not get the results you expect. The EXCLUDE and INCLUDE
parameters are mutually exclusive.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION
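Because Oracle recommends placing EXCLUDE clauses in a parameter file, an equivalent sketch using an assumed file named exclude.par would be:
SCHEMAS=hr
DIRECTORY=dpump_dir1
DUMPFILE=hr_exclude.dmp
EXCLUDE=VIEW,PACKAGE,FUNCTION
EXCLUDE=INDEX:"LIKE 'EMP%'"
> expdp hr/hr parfile=exclude.par
This avoids having to escape the quotation marks in the name_clause on the command line.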
FILESIZE:
Default:
0
(unlimited).
Specifies the maximum size of each dump file. If
the size is reached for any member of the dump file set, that file is closed
and an attempt is made to create a new file, if the file specification contains
a substitution variable.
FILESIZE=integer[B | K | M | G]
The integer can be followed by
B
, K
, M
, or G
(indicating bytes, kilobytes, megabytes, and
gigabytes respectively). Bytes is the default. The actual size of the resulting
file may be rounded down slightly to match the size of the internal blocks used
in dump files.
The minimum size
for a file is ten times the default Data Pump block size, which is 4 kilobytes.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3M
If three megabytes had not been sufficient to hold all the exported data,
then the following error would have been displayed and the job would have
stopped:
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
The actual number of bytes that could not be allocated may vary. Also, this
number does not represent the amount of space needed to complete the entire
export operation. It indicates only the size of the current object that was
being exported when the job ran out of dump file space. This situation can be
corrected by first attaching to the stopped job, adding one or more files using
the
ADD_FILE
command, and then restarting the operation.
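A sketch of that recovery sequence follows; the job name SYS_EXPORT_SCHEMA_01 and the file name hr_3m_extra.dmp are assumptions (the actual job name can be found in the USER_DATAPUMP_JOBS view):
> expdp hr/hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> ADD_FILE=dpump_dir1:hr_3m_extra.dmp
Export> CONTINUE_CLIENT
CONTINUE_CLIENT resumes the stopped job and returns the session to logging mode.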
FLASHBACK_SCN: Default: none. Specifies the system change number (SCN) that Export will use to
enable the Flashback Query utility.
FLASHBACK_SCN=scn_value
The export operation is performed with data that is consistent as of the
specified SCN. If the
NETWORK_LINK
parameter is specified, the SCN refers to
the SCN of the source database. FLASHBACK_SCN
and FLASHBACK_TIME
are mutually exclusive.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
FLASHBACK_TIME: Default: none. The SCN that
most closely matches the specified time is found, and this SCN is used to
enable the Flashback utility. The export operation is performed with data that
is consistent as of this SCN.
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
Because the
TO_TIMESTAMP
value is enclosed in quotation marks, it
would be best to put this parameter in a parameter file. Otherwise, you might
need to use escape characters on the command line in front of the quotation
marks. FLASHBACK_TIME
and FLASHBACK_SCN
are
mutually exclusive.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"
FULL: Default: n. Specifies that you want to perform a full database export.
FULL={y | n}
FULL=y indicates that all data and metadata are to be exported.
- The following system schemas are not exported as part of a Full export
because the metadata they contain is exported as part of other objects in the
dump file set: SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS,
XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS.
- Grants on objects owned by the SYS schema are never exported.
> expdp hr/hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=y NOLOGFILE=y
HELP: Default: n. Displays online help for the Export utility.
HELP={y | n}
> expdp HELP = y
INCLUDE: Default: none. Enables you to filter the metadata that is
exported by specifying objects and object types for the current export mode.
The specified objects and all their dependent objects are exported. Grants on
these objects are also exported.
INCLUDE = object_type[:name_clause] [, ...]
Only object types explicitly specified in
INCLUDE
statements, and their dependent objects, are exported. No other object
types, including the schema definition information that is normally part of a
schema-mode export when you have the EXP_FULL_DATABASE
role, are exported.
To see a list of
valid object type path names for use with the INCLUDE parameter, you can query the following views:
DATABASE_EXPORT_OBJECTS
, SCHEMA_EXPORT_OBJECTS
, and TABLE_EXPORT_OBJECTS
. For example, suppose you define a parameter file named hr.par with the
following content:
SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"
You could then
use the
hr.par
file to start an export operation, without having
to enter any other parameters on the command line:
> expdp hr/hr parfile=hr.par
> expdp hr/hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=y
JOB_NAME: Default: system-generated name of the form SYS_EXPORT_<mode>_NN.
Used to identify the export job in subsequent actions, such as when the
ATTACH
parameter is used to attach to a job, or to
identify the job using the DBA_DATAPUMP_JOBS
or USER_DATAPUMP_JOBS
views. The job name becomes the name of the master
table in the current user's schema. The master table is used to control the
export job.
JOB_NAME=jobname_string
The jobname_string specifies a name of up to 30 bytes for this export job.
The bytes must represent printable characters and spaces. If spaces are
included, the name must be enclosed in single quotation marks (for example,
'Thursday Export'). The job name is implicitly qualified by the schema of the
user performing the export operation.
The default job
name is system-generated in the form
SYS_EXPORT_<mode>_NN
, where NN
expands to a 2-digit incrementing integer starting at 01. An example of a
default name is 'SYS_EXPORT_TABLESPACE_02'
.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job NOLOGFILE=y
LOGFILE: Default: export.log.
Specifies the name, and
Specifies the name, and
optionally, a directory, for the log file of the export job.
LOGFILE=[directory_object:]file_name
All messages
regarding work in progress, work completed, and errors encountered are written
to the log file. (For a real-time status of the job, use the
STATUS
command in interactive mode.) A log file is
always created for an export job unless the NOLOGFILE
parameter is specified. As with the dump file set, the log file is
relative to the server and not the client. An existing file matching the
filename will be overwritten.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log
Note: Data Pump Export writes the log file using the database character set. If
your client
NLS_LANG
environment setting sets up a different client character set from the database
character set, then it is possible that table names may be different in the log
file than they are when displayed on the client output screen.
NETWORK_LINK: Default:
none. Enables an export from a (source)
database identified by a valid database link. The data from the source database
instance is written to a dump file set on the connected database instance.
NETWORK_LINK=source_database_link
The
NETWORK_LINK
parameter initiates an export using a database
link. This means that the system to which the expdp
client is connected contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a
dump file set back on the connected system.
The source_database_link provided must be the name of a database
link to an available database. If the database on that instance does not
already have a database link, you or your DBA must create one. If the source
database is read-only, then the user on the source database must have a locally
managed tablespace assigned as the default temporary tablespace. Otherwise, the
job will fail.
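For illustration only, a database link like the one referenced above might be created on the connected (local) database as follows; the connect string inst2 and the hr credentials are assumptions:
SQL> CREATE DATABASE LINK source_database_link CONNECT TO hr IDENTIFIED BY hr USING 'inst2';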
When the
NETWORK_LINK
parameter is used in conjunction with the
TABLES
parameter, only whole tables can be exported (not
partitions of tables). The only types of database links supported by Data Pump
Export are: public, fixed-user, and connected-user. Current-user database links
are not supported.
> expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
DUMPFILE=network_export.dmp LOGFILE=network_export.log
NOLOGFILE: Default: n.
NOLOGFILE={y | n}
Specify
NOLOGFILE
=y
to suppress the default behavior of creating a log
file. Progress and error information is still written to the standard output
device of any attached clients, including the client that started the original
export operation. If there are no clients attached to a running job and you
specify NOLOGFILE=y
, you run the risk of losing important progress
and error information.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=y
PARFILE: Default: none. Specifies the name of an export parameter file.
PARFILE=[directory_path]file_name
Unlike dump and log files, which are created and written by the Oracle
database, the parameter file is opened and read by the client running the expdp
image. Therefore, a directory object name is neither required nor appropriate.
The directory path is an operating system-specific directory specification. The
default is the user's current directory. The
PARFILE
parameter cannot be specified within a parameter file. For example, suppose the
parameter file hr.par contains the following:
SCHEMAS=HR DUMPFILE=exp.dmp DIRECTORY=dpump_dir1
LOGFILE=exp.log
You could then
issue the following Export command to specify the parameter file:
> expdp hr/hr parfile=hr.par
QUERY:
Default: none. Enables you to filter
the data that is exported by specifying a clause for a SQL
SELECT
statement, which is applied to all tables in the
export job or to a specific table.
QUERY = [schema.][table_name:] query_clause
The query_clause is typically a
WHERE
clause for fine-grained row selection, but could
be any SQL clause. For example, an ORDER
BY
clause could be used to speed up a migration from
a heap-organized table to an index-organized table. If a [schema.] table_name is not supplied, the query is applied to (and
must be valid for) all tables in the export job. A table-specific query
overrides a query applied to all tables.
When the query is to be applied to a specific table, a colon must separate
the table name from the query clause. More than one table-specific query can be
specified, but only one can be specified per table. Oracle highly recommends
that you place
QUERY
specifications in a parameter file;
otherwise, you might have to use operating system-specific escape characters on
the command line before each quotation mark. The query must be enclosed in
single or double quotation marks.
The
QUERY
parameter cannot be used in conjunction with the
following parameters: CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and
TRANSPORT_TABLESPACES.
> expdp hr/hr parfile=emp_query.par
The contents of
the emp_query.par file are as follows:
QUERY=employees:'"WHERE department_id > 10 AND salary > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
SAMPLE: Default: None. Allows you to specify a percentage of data to
be sampled and unloaded from the source database.
SAMPLE=[[schema_name.]table_name:]sample_percent
This parameter allows you to export subsets of data by specifying the
percentage of data to be sampled and exported. The
sample_percent
indicates the probability that a block of rows
will be selected as part of the sample. It does not mean that the database will
retrieve exactly that amount of rows from the table. The value you supply for sample_percent
can be anywhere from .000001 up to, but not
including, 100.
The
sample_percent can be applied to specific tables. In the following example, 50%
of the
HR.EMPLOYEES
table will be exported:
SAMPLE="HR"."EMPLOYEES":50
If you specify a
schema, you must also specify a table. However, you can specify a table without
specifying a schema; the current user will be assumed. If no table is
specified, then the
sample_percent
value applies to the entire export job.
Note that you can use this parameter in conjunction with the Data Pump
Import PCTSPACE transform, so that the size of storage allocations matches the
sampled data subset.
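For example (a sketch; the file names and percentage are assumed), a dump file produced with SAMPLE=70, as in the command below, could later be imported with storage allocations scaled to roughly the same proportion:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp TRANSFORM=PCTSPACE:70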
The
SAMPLE
parameter is not valid for network exports.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS: Default: current user's schema.
Specifies that you want to perform a schema-mode export. This is the default
mode for Export.
SCHEMAS=schema_name [, ...]
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oe
STATUS: Default: 0.
Specifies the frequency at which the job status display is updated.
STATUS=[integer]
If you supply a value for integer, it specifies how frequently, in seconds,
job status should be displayed in logging mode. If no value is entered or if
the default value of 0 is used, no additional information is displayed beyond
information about the completion of each object type, table, or partition. This
status information is written only to your standard output device, not to the
log file (if one is in effect).
> expdp hr/hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300
TABLES: Default: none. Specifies that you want to perform a table-mode export.
TABLES=[schema_name.]table_name[:partition_name] [, ...]
Filtering can restrict what is exported using this mode. You can filter the
data and metadata that is exported, by specifying a comma-delimited list of
tables and partitions or subpartitions. If a partition name is specified, it
must be the name of a partition or subpartition in the associated table. Only
the specified set of tables, partitions, and their dependent objects are
unloaded.
The table name that you specify can be preceded by a qualifying schema
name. All table names specified must reside in the same schema. The schema
defaults to that of the current user. The use of wildcards is supported for one
table name per export operation. For example,
TABLES=emp%
would export all tables having names that start with 'EMP
'.
Cross-schema references are not exported. For example, a trigger defined on
a table within one of the specified schemas, but that resides in a schema not
explicitly specified, is not exported. Types used by the table are not exported in table
mode. This means that if you subsequently import the dump file and the TYPE
does not already exist in the destination database, the table creation will
fail. The use of synonyms as values for the
TABLES
parameter is not supported. For example, if the regions
table in the hr
schema had a synonym of regn
, it would not be valid to use TABLES=regn
. An error would be returned.
The export of
table partitions is not supported when the
NETWORK_LINK
parameter is used. The export of tables that include wildcards in the
table name is not supported if the table has partitions.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
The following
example shows the use of the
TABLES
parameter to export partitions:
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000
TABLESPACES: Default: none. Specifies a list of tablespace names to be exported
in tablespace mode.
TABLESPACES=tablespace_name [, ...]
In tablespace mode, only the tables contained in a specified set of
tablespaces are unloaded. If a table is unloaded, its dependent objects are
also unloaded. If any part of a table resides in the specified set, then that
table and all of its dependent objects are exported.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6
TRANSPORT_FULL_CHECK: Default:
n.
Specifies whether or not to check for dependencies between those objects
inside the transportable set and those outside the transportable set. This
parameter is applicable only to a transportable-tablespace mode export.
TRANSPORT_FULL_CHECK={y | n}
If
TRANSPORT_FULL_CHECK
=y
, then Export verifies that there are no dependencies between those objects
inside the transportable set and those outside the transportable set. The check
addresses two-way dependencies. For example, if a table is inside the
transportable set but its index is not, a failure is returned and the export
operation is terminated. Similarly, a failure is also returned if an index is
in the transportable set but the table is not.
If
TRANSPORT_FULL_CHECK
=n,
then Export verifies only that there are no objects within the
transportable set that are dependent on objects outside the transportable set.
This check addresses a one-way dependency. For example, a table is not
dependent on an index, but an index is dependent on a table, because an index without a table
has no meaning. Therefore, if the transportable set contains a table, but not
its index, then this check succeeds. However, if the transportable set contains
an index, but not the table, the export operation is terminated.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
TRANSPORT_TABLESPACES: Default: none. Specifies that
you want to perform a transportable-tablespace-mode export.
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the
TRANSPORT_TABLESPACES
parameter to specify a list of tablespace names for which object metadata will be exported from the
source database into the target database.
Note: You cannot export
transportable tablespaces and then import them into a database at a lower
release level. The target database must be at the same or higher release level
as the source database. Transportable jobs are not restartable. Transportable
jobs are restricted to a degree of parallelism of 1.
> expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
VERSION:
Default:
COMPATIBLE.
Specifies the version of
database objects to be exported. This can be used to create a dump file set
that is compatible with a previous release of Oracle Database. Note that this does
not mean that Data
Pump Export can be used with versions of Oracle Database prior to 10.1. Data
Pump Export only works with Oracle Database 10g release 1 (10.1) or later. The VERSION
parameter simply allows you to identify the
version of the objects being exported.
VERSION={COMPATIBLE | LATEST | version_string}
- COMPATIBLE - This is the default value. The version of the metadata
corresponds to the database compatibility level. Database compatibility must be
set to 9.2 or higher.
- LATEST - The version of the metadata corresponds to the database version.
- version_string - A specific database version (for example, 10.0.0). In Oracle
Database 10g, this value cannot be lower than 9.2.
> expdp hr/hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp NOLOGFILE=y