Commands Available in Export's Interactive-Command Mode
In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Export prompt (Export>) is displayed.
To start interactive-command mode, do one of the following:
- From an attached client, press Ctrl+C.
- From a terminal other than the one on which the job is running, specify the ATTACH parameter in an expdp command to attach to the job. This is a useful feature in situations in which you start a job at one location and need to check on it at a later time from a different location.
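As a sketch, attaching to a running job from a second terminal might look like the following (the user hr and the job name SYS_EXPORT_SCHEMA_01 are placeholder values; use the name of your own job):

expdp hr ATTACH=SYS_EXPORT_SCHEMA_01

After the password is supplied, job status is displayed and the Export> prompt appears, ready for interactive commands.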
ADD_FILE: Adds additional files to the export dump file set. The file_name must not contain any directory path information. However, it can include the substitution variable %U, which indicates that multiple files may be generated using the specified filename as a template. It can also specify another directory_object. The size of the file being added is determined by the setting of the FILESIZE parameter. For example, the following adds hr2.dmp to the default directory and hr3.dmp to the directory object dpump_dir2:

Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
CONTINUE_CLIENT: Returns to logging mode, in which status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.
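For example, to leave interactive-command mode and resume the running log display:

Export> CONTINUE_CLIENT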
EXIT_CLIENT: Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running. Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job, or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.
KILL_JOB: Detaches all currently attached client sessions and then kills the current job. It exits Export and returns to the terminal prompt. A job that is killed using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being killed by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down, and the master table and dump files are deleted. Log files are not deleted.
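Issued at the prompt, the command asks for confirmation before proceeding; a session might look like the following sketch (the exact prompt text can vary by release):

Export> KILL_JOB
Are you sure you wish to stop this job ([yes]/no): yes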
START_JOB: Restarts the current job to which you are attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure, or after you issued a STOP_JOB command, provided the dump file set and master table have not been altered in any way.
STATUS: Displays cumulative status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned. Also allows resetting the display interval for logging mode status.
You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, the periodic status display is turned off and status is displayed only once. This status information is written only to your standard output device, not to the log file (even if one is in effect).
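For example, to have the status display refreshed every 30 seconds in logging mode:

Export> STATUS=30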
STOP_JOB: Stops the current job, either immediately or after an orderly shutdown, and exits Export. If the master table and dump file set are not disturbed when, or after, the STOP_JOB command is issued, then the job can be attached to and restarted at a later time with the START_JOB command.

To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.

To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user, and they are then detached. After all clients are detached, the process structure of the job is immediately run down; that is, the master process does not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.
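Putting these commands together, a stop-and-restart cycle might look like the following sketch (the user hr and the job name SYS_EXPORT_SCHEMA_01 are placeholders):

Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([yes]/no): yes

$ expdp hr ATTACH=SYS_EXPORT_SCHEMA_01
Export> START_JOB
Export> CONTINUE_CLIENT

Because the dump file set and master table were left intact, the job resumes with no data loss.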
Data Pump technology fully uses all available resources to maximize throughput and minimize elapsed job time. For this to happen, a system must be well-balanced across CPU, memory, and I/O. In addition, standard performance tuning principles apply. For example, for maximum performance you should ensure that the files that are members of a dump file set reside on separate disks, because the dump files will be written and read in parallel. Also, the disks should not be the same ones on which the source or target tablespaces reside. Any performance tuning activity involves making trade-offs between performance and resource consumption.
Controlling Resource Consumption
The Data Pump Export and Import utilities enable you to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job. As you increase the degree of parallelism, CPU usage, memory consumption, and I/O bandwidth usage also increase. You must ensure that adequate amounts of these resources are available. If necessary, you can distribute files across different disk devices or channels to get the needed I/O bandwidth.
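As a sketch, a parallel export pairs PARALLEL with a %U dump file template so that each worker process can write to its own file (the schema hr, directory object dpump_dir1, and degree 4 are placeholder values):

expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=4

The degree of parallelism can also be changed while the job runs, using the PARALLEL command in interactive-command mode.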
The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try using the following settings to improve performance, although the effect may not be the same on all platforms.
The following initialization parameters must have values set high enough to allow for maximum parallelism: PROCESSES, SESSIONS, and PARALLEL_MAX_SERVERS. Additionally, the SHARED_POOL_SIZE and UNDO_TABLESPACE initialization parameters should be generously sized. The exact values will depend upon the size of your database.
If Streams functionality is used, but the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool. When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly.
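One way to avoid this implicit carve-out from the buffer cache is to size the streams pool explicitly; a sketch (64M is an arbitrary placeholder value, choose one appropriate for your system):

SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE = 64M SCOPE = SPFILE;

A change made with SCOPE=SPFILE takes effect at the next instance restart.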
The Data Pump API (Unit V)
How Does the Client Interface to the Data Pump API Work?
The main structure used in the client interface is a job handle, which appears to the caller as an integer. Handles are created using the OPEN or ATTACH function. Other sessions can attach to a job to monitor and control its progress. This allows a DBA to start a job before departing from work and then watch its progress from home. Handles are session specific. The same job can create different handles in different sessions.
A job handle, and the job underlying it, pass through the following states:
- Undefined - before a handle is created
- Defining - when the handle is first created
- Executing - when the START_JOB procedure is executed
- Completing - when the job has finished its work and the Data Pump processes are ending
- Completed - when the job is completed
- Stop Pending - when an orderly job shutdown has been requested
- Stopping - when the job is stopping
- Stopped - when STOP_JOB is performed against an executing job
- Idling - the period between the time DBMS_DATAPUMP.ATTACH is executed to attach to a stopped job and the time DBMS_DATAPUMP.START_JOB is executed to restart that job
- Not Running - when a master table exists for a job that is not running (has no Data Pump processes associated with it)
Executing START_JOB on a job in an Idling state will return it to an Executing state. If all users execute DETACH to detach from a job in the Defining state, the job will be totally removed from the database.

When a job terminates abnormally, or when an instance running the job is shut down, the job is placed in the Not Running state if it was previously executing or idling. It can then be restarted by the user.

The master control process is active in the Defining, Idling, Executing, Stopping, Stop Pending, and Completing states. It is also active briefly in the Stopped and Completed states. The master table for the job exists in all states except the Undefined state. Worker processes are only active in the Executing and Stop Pending states, and briefly in the Defining state for import jobs.
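The current state of a job can be checked from SQL, for example:

SQL> SELECT job_name, state FROM USER_DATAPUMP_JOBS;

USER_DATAPUMP_JOBS shows your own jobs; the DBA_DATAPUMP_JOBS view shows jobs for all users.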
To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API. The steps are presented in the order in which the activities would generally be performed:
- Execute the OPEN procedure to create a Data Pump job and its infrastructure.
- Define any parameters for the job.
- Start the job.
- Optionally, monitor the job until it completes.
- Optionally, detach from the job and reattach at a later time.
- Optionally, stop the job.
- Optionally, restart the job, if desired.
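The steps above can be sketched as an anonymous PL/SQL block. This is a minimal schema-mode export; the job name HR_EXPORT_JOB, the directory object DPUMP_DIR1, and the schema HR are placeholder values chosen for illustration:

DECLARE
  h  NUMBER;          -- job handle returned by OPEN
  js VARCHAR2(30);    -- final job state from WAIT_FOR_JOB
BEGIN
  -- Step 1: create the Data Pump job and its infrastructure
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'HR_EXPORT_JOB');

  -- Step 2: define parameters for the job (dump file and schema filter)
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'hr.dmp',
                         directory => 'DPUMP_DIR1');
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''HR'')');

  -- Step 3: start the job
  DBMS_DATAPUMP.START_JOB(h);

  -- Step 4: monitor the job until it completes
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => js);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || js);
  -- The handle is released when the job completes; DBMS_DATAPUMP.DETACH(h)
  -- would be used instead to detach from a still-running job.
END;
/

STOP_JOB and START_JOB procedures in the same package cover the optional stop and restart steps.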
There is a restriction on expdp: it cannot export system schemas such as SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS in any mode.