Wednesday, September 26, 2012

ORACLE 11G DATAPUMP INTERACTIVE MODE, TUNING PARAMETERS, AND RESTRICTIONS - PART IV


Commands Available in Export's Interactive-Command Mode

In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Export prompt (Export>) is displayed.

To start interactive-command mode, do one of the following:

  • From an attached client, press Ctrl+C.
  • From a terminal other than the one on which the job is running, specify the ATTACH parameter in an expdp command to attach to the job. This is useful when you start a job at one location and need to check on it later from a different location, as shown below.
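
For example, assuming the export was started by the HR user and received the default job name SYS_EXPORT_SCHEMA_01 (both names are illustrative), you could attach from a second terminal like this:

expdp hr ATTACH=sys_export_schema_01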

 

ADD_FILE:  Adds additional files or wildcard file templates to the export dump file set.

ADD_FILE=[directory_object]file_name [,...]

The file_name must not contain any directory path information. However, it can include a substitution variable, %U, which indicates that multiple files may be generated using the specified file name as a template. Each file specification can also name a different directory_object. The size of the file being added is determined by the setting of the FILESIZE parameter.

Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
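
The example above adds fixed file names; to have Data Pump generate a series of files from one specification, include the %U substitution variable described above (the directory object and file name here are illustrative):

Export> ADD_FILE=dpump_dir2:hr_extra%U.dmp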

 

CONTINUE_CLIENT:  Changes the Export mode from interactive-command mode to logging mode.

In logging mode, status is continually output to the terminal. If the job is currently stopped, then CONTINUE_CLIENT will also cause the client to attempt to start the job.

Export> CONTINUE_CLIENT

 

EXIT_CLIENT : Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running. Because EXIT_CLIENT leaves the job running, you can attach to the job at a later time. To see the status of the job, you can monitor the log file for the job or you can query the USER_DATAPUMP_JOBS view or the V$SESSION_LONGOPS view.

Export> EXIT_CLIENT
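
For example, you can check on your own jobs from SQL*Plus with a query such as the following (showing a subset of the view's columns):

SQL> SELECT job_name, operation, job_mode, state FROM user_datapump_jobs;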

 

FILESIZE: Redefines the default size to be used for any subsequent dump files.

FILESIZE=number
Export> FILESIZE=100M

 

HELP:  Provides information about Data Pump Export commands available in interactive-command mode.

Export> HELP

 

KILL_JOB: Detaches all currently attached client sessions and then kills the current job. It exits Export and returns to the terminal prompt.

A job that is killed using KILL_JOB cannot be restarted. All attached clients, including the one issuing the KILL_JOB command, receive a warning that the job is being killed by the current user and are then detached. After all clients are detached, the job's process structure is immediately run down and the master table and dump files are deleted. Log files are not deleted.

Export> KILL_JOB

 
START_JOB: Starts the current job to which you are attached.
The START_JOB command restarts the current job to which you are attached (the job cannot be currently executing). The job is restarted with no data loss or corruption after an unexpected failure or after you issued a STOP_JOB command, provided the dump file set and master table have not been altered in any way.
Export> START_JOB
 
STATUS:  Displays cumulative status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned. Also allows resetting the display interval for logging mode status.
STATUS[=integer]
You have the option of specifying how frequently, in seconds, this status should be displayed in logging mode. If no value is entered or if the default value of 0 is used, the periodic status display is turned off and status is displayed only once. This status information is written only to your standard output device, not to the log file (even if one is in effect).
Export> STATUS=300
 
STOP_JOB:  Stops the current job either immediately or after an orderly shutdown, and exits Export.
STOP_JOB[=IMMEDIATE]
If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, the job can be attached to and restarted at a later time with the START_JOB command. To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.
To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.
Export> STOP_JOB=IMMEDIATE
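
To illustrate the restart cycle described above, you could later reattach to the stopped job from the operating system prompt and resume it (the job name is illustrative):

expdp hr ATTACH=sys_export_schema_01
Export> START_JOB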
 
Tuning Performance
Data Pump technology fully uses all available resources to maximize throughput and minimize elapsed job time. For this to happen, a system must be well-balanced across CPU, memory, and I/O. In addition, standard performance tuning principles apply. For example, for maximum performance you should ensure that the files that are members of a dump file set reside on separate disks, because the dump files will be written and read in parallel. Also, the disks should not be the same ones on which the source or target tablespaces reside. Any performance tuning activity involves making trade-offs between performance and resource consumption.
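
For example, assuming DPUMP_DIR1 and DPUMP_DIR2 are directory objects that map to different physical disks (all names here are illustrative), a parallel export could spread its dump file set like this:

expdp hr DIRECTORY=dpump_dir1 LOGFILE=hr_exp.log PARALLEL=2 DUMPFILE=dpump_dir1:hr_exp1%U.dmp,dpump_dir2:hr_exp2%U.dmp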
 
Controlling Resource Consumption
The Data Pump Export and Import utilities enable you to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL parameter to specify a degree of parallelism for the job.  As you increase the degree of parallelism, CPU usage, memory consumption, and I/O bandwidth usage also increase. You must ensure that adequate amounts of these resources are available. If necessary, you can distribute files across different disk devices or channels to get the needed I/O bandwidth.
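
PARALLEL is also available as an interactive-command mode command, so the degree can be raised or lowered while the job runs; for example:

Export> PARALLEL=4
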
Initialization Parameters That Affect Data Pump Performance
The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try using the following settings to improve performance, although the effect may not be the same on all platforms.
  • DISK_ASYNCH_IO=TRUE
  • DB_BLOCK_CHECKING=FALSE
  • DB_BLOCK_CHECKSUM=FALSE
The following initialization parameters must have values set high enough to allow for maximum parallelism:
  • PROCESSES
  • SESSIONS
  • PARALLEL_MAX_SERVERS
Additionally, the SHARED_POOL_SIZE and UNDO_TABLESPACE initialization parameters should be generously sized. The exact values will depend upon the size of your database.
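
For example, you can inspect a current value from SQL*Plus and, if necessary, raise it dynamically (the value shown is purely illustrative):

SQL> SHOW PARAMETER parallel_max_servers
SQL> ALTER SYSTEM SET parallel_max_servers = 40 SCOPE = BOTH;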

Setting the Size Of the Buffer Cache In a Streams Environment

If Streams functionality is used but the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool. When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the DB_CACHE_SIZE initialization parameter. This means that if the buffer cache was configured with only the minimal required SGA, then Data Pump operations may not work properly.
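
One way to avoid this is to size the streams pool explicitly; for example (the size is illustrative, and SCOPE=BOTH assumes the instance was started with an spfile):

SQL> ALTER SYSTEM SET streams_pool_size = 100M SCOPE = BOTH;
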
The Data Pump API (Unit V)
How Does the Client Interface to the Data Pump API Work?
The main structure used in the client interface is a job handle, which appears to the caller as an integer. Handles are created using the DBMS_DATAPUMP.OPEN or DBMS_DATAPUMP.ATTACH function. Other sessions can attach to a job to monitor and control its progress, which allows a DBA to start a job before leaving work and then watch its progress from home. Handles are session specific; the same job can create different handles in different sessions.
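
As a minimal sketch, a second session could attach to a running job like this (the job name and owner are illustrative):

DECLARE
  h2 NUMBER;
BEGIN
  -- Get a session-specific handle on an existing job
  h2 := DBMS_DATAPUMP.ATTACH(job_name => 'HR_EXPORT_JOB', job_owner => 'HR');
  -- ... monitor or control the job through this handle ...
  DBMS_DATAPUMP.DETACH(h2);  -- releases the handle; the job itself keeps running
END;
/
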
Job States
There is a state associated with each phase of a job, as follows:
  • Undefined - before a handle is created
  • Defining - when the handle is first created
  • Executing - when the DBMS_DATAPUMP.START_JOB procedure is executed
  • Completing - when the job has finished its work and the Data Pump processes are ending
  • Completed - when the job is completed
  • Stop Pending - when an orderly job shutdown has been requested
  • Stopping - when the job is stopping
  • Stopped - when DBMS_DATAPUMP.STOP_JOB is performed against an executing job
  • Idling - the period between the time that a DBMS_DATAPUMP.ATTACH is executed to attach to a stopped job and the time that a DBMS_DATAPUMP.START_JOB is executed to restart that job.
  • Not Running - when a master table exists for a job that is not running (has no Data Pump processes associated with it)
Performing DBMS_DATAPUMP.START_JOB on a job in an Idling state returns it to an Executing state. If all users execute DBMS_DATAPUMP.DETACH to detach from a job in the Defining state, the job is totally removed from the database. When a job terminates abnormally, or when an instance running the job is shut down, the job is placed in the Not Running state if it was previously executing or idling; it can then be restarted by the user.
The master control process is active in the Defining, Idling, Executing, Stopping, Stop Pending, and Completing states. It is also active briefly in the Stopped and Completed states. The master table for the job exists in all states except the Undefined state. Worker processes are active only in the Executing and Stop Pending states, and briefly in the Defining state for import jobs.
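
For instance, a stopped job can be moved from the Idling state back to Executing with a short anonymous block (the job name and owner are illustrative):

DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH(job_name => 'HR_EXPORT_JOB', job_owner => 'HR');
  DBMS_DATAPUMP.START_JOB(h1);  -- equivalent to START_JOB in interactive-command mode
END;
/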
 
What Are the Basic Steps in Using the Data Pump API?
To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API, in the order in which they would generally be performed (a minimal PL/SQL sketch follows the list):
  • Execute the DBMS_DATAPUMP.OPEN procedure to create a Data Pump job and its infrastructure.
  • Define any parameters for the job.
  • Start the job.
  • Optionally, monitor the job until it completes.
  • Optionally, detach from the job and reattach at a later time.
  • Optionally, stop the job.
  • Optionally, restart the job.
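
Putting these steps together, the following is a minimal schema-mode export sketch using DBMS_DATAPUMP (the job name, dump file, directory object, and schema are all illustrative, and error handling is omitted for brevity):

DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Step 1: create the job and its infrastructure
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                           job_name  => 'HR_EXPORT_JOB');
  -- Step 2: define parameters - where to write the dump file and which schema to export
  DBMS_DATAPUMP.ADD_FILE(h1, 'hr_api.dmp', 'DPUMP_DIR1');
  DBMS_DATAPUMP.METADATA_FILTER(h1, 'SCHEMA_EXPR', 'IN (''HR'')');
  -- Step 3: start the job
  DBMS_DATAPUMP.START_JOB(h1);
  -- Step 4: monitor by waiting for completion, then report the final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/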
 
 
There is a restriction on expdp: it cannot export system schemas such as SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP, and WMSYS in any mode.
