Wednesday, February 27, 2013

Cloning an Oracle Database Home from One Server to Another


How to Clone an Oracle Database Home from One Server to Another

Step 1: Check the patch inventory on the source server:
[oracle@lnx1 dbhome_1]$ opatch lsinv
Invoking OPatch 11.2.0.1.3
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation. All rights reserved.
Oracle Home : /oracle/app/product/11.2.0/dbhome_1
Central Inventory : /home/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.3
OUI version : 11.2.0.1.0
OUI location : /oracle/app/product/11.2.0/dbhome_1/oui
Log file location : /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2010-11-05_08-52-30AM.log
Patch history file: /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2010-
11-05_08-52-30AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.1.0
There are 1 products installed in this Oracle Home.
Interim patches (1) :
Patch 9952216 : applied on Fri Nov 05 07:13:34 PDT 2010
Unique Patch ID: 12825033
Created on 13 Sep 2010, 04:55:09 hrs PST8PDT
Bugs fixed:
9068088, 9363384, 8865718, 8898852, 8801119, 9054253, 8725286, 8974548
9093300, 8909984, 8755082, 8780372, 9952216, 8664189, 8769569, 7519406
9471411, 9302343, 8822531, 7705591, 8650719, 9637033, 8883722, 8639114
8723477, 8729793, 8919682, 8856478, 9001453, 8733749, 8565708, 8735201
8684517, 8870559, 8773383, 8981059, 8812705, 9488887, 8813366, 9242411
8822832, 8897784, 8760714, 8775569, 8671349, 8898589, 9714832, 8642202
9011088, 9369797, 9170608, 9165206, 8834636, 8891037, 8431487, 8570322
8685253, 8872096, 8718952, 8799099, 9032717, 9399090, 9713537, 9546223
8588519, 8783738, 8834425, 9454385, 8856497, 8890026, 8721315, 8818175
8674263, 9145541, 8720447, 9272086, 9467635, 9010222, 9102860, 9197917
8991997, 8661168, 8803762, 8769239, 9654983, 8546356, 8706590, 8778277
9058865, 8815639, 9971778, 9971779, 9027691, 9454036, 9454037, 9454038
9255542, 8761974, 9275072, 8496830, 8702892, 8818983, 8475069, 8875671
9328668, 8891929, 8798317, 8782959, 9971780, 8774868, 8820324, 8544696
8702535, 9406607, 9952260, 8268775, 9036013, 9363145, 8933870, 8405205
9467727, 8822365, 9676419, 8761260, 8790767, 8795418, 8913269, 8717461
8861700, 9531984, 8607693, 8330783, 8780281, 8784929, 8780711, 9341448
9015983, 8828328, 9119194, 8832205, 8665189, 8717031, 9482399, 9676420
9399991, 8821286, 8633358, 9321701, 9231605, 9655013, 8796511, 9167285
8782971, 8756598, 9390484, 8703064, 9066116, 9007102, 9461782, 9382101

8505803, 9352237, 8753903, 9216806, 8918433, 9057443, 8790561, 8795792
8733225, 9067282, 8928276, 8837736, 9210925
--------------------------------------------------------------------------------
OPatch succeeded.

Step 2: Check the size of the Oracle home:

[oracle@lnx1 ~]$ cd /oracle/app/product/11.2.0/dbhome_1
[oracle@lnx1 dbhome_1]$
[oracle@lnx1 dbhome_1]$ du -chs
4.5G


Step 3: Create the directory tree on the target server and change its ownership to oracle:
[oracle@lnx2 ~]$ su -
Password:
[root@lnx2 ~]# mkdir -p /oracle/app/product/11.2.0/
[root@lnx2 ~]# chown -R oracle:oinstall  /oracle

Step 4: On the source server, tar up the Oracle home and copy the archive over the network to the target server:
[oracle@lnx1 ~]$ cd /oracle/app/product/11.2.0
[oracle@lnx1 11.2.0]$ tar -cvf dbhome_1.tar dbhome_1
[oracle@lnx1 11.2.0]$ scp dbhome_1.tar oracle@lnx2:/oracle/app/product/11.2.0/

On the target server, extract the archive:
[oracle@lnx2 ~]$ cd /oracle/app/product/11.2.0
[oracle@lnx2 11.2.0]$ tar -xvf dbhome_1.tar

[oracle@lnx2 dbhome_1]$ du -chs
4.5G

Step 5: Detach the home from the inventory first. This was necessary here because the target server was itself a clone of the source, so the home was already registered in the inventory:

[oracle@lnx2 ~]$ cd /oracle/app/product/11.2.0/dbhome_1/oui/bin
[oracle@lnx2 bin]$ ./runInstaller -detachHome ORACLE_HOME="/oracle/app/product/11.2.0/dbhome_1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 1992 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /home/oracle/oraInventory
'DetachHome' was successful.

Step 6: Run the clone procedure:
[oracle@lnx2 bin]$ ./runInstaller -clone -silent -ignorePreReq ORACLE_HOME="/oracle/app/product/11.2.0/dbhome_1" ORACLE_HOME_NAME="dbhome_11gR2" ORACLE_BASE="/oracle/app" OSDBA_GROUP=dba OSOPER_GROUP=dba

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1992 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-11-05_11-28-15AM. Please
wait ...[oracle@lnx2 bin]$ Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.
You can find the log of this install session at:
/home/oracle/oraInventory/logs/cloneActions2010-11-05_11-28-15AM.log
.................................................................................................... 100% Done.
Installation in progress (Friday, November 5, 2010 11:28:48 AM PDT)
.............................................................................. 78% Done.
Install successful
Linking in progress (Friday, November 5, 2010 11:29:21 AM PDT)
Link successful
Setup in progress (Friday, November 5, 2010 11:31:59 AM PDT)
Setup successful
End of install phases.(Friday, November 5, 2010 11:35:50 AM PDT)
Starting to execute configuration assistants
Configuration assistant "Oracle Configuration Manager Clone" succeeded
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/app/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
The cloning of dbhome_11gR2 was successful.
Please check '/home/oracle/oraInventory/logs/cloneActions2010-11-05_11-28-15AM.log' for more details.


Step 7: Finally, execute the root.sh script as root:
[root@lnx2 ~]# /oracle/app/product/11.2.0/dbhome_1/root.sh
Check /oracle/app/product/11.2.0/dbhome_1/install/root_lnx2_2010-11-05_11-39-06.log for the output of
root script

Post-clone check: verify that the inventory on the target server now lists the cloned home:

[oracle@lnx2 ~]$ opatch lsinv
Invoking OPatch 11.2.0.1.3
Oracle Interim Patch Installer version 11.2.0.1.3
Copyright (c) 2010, Oracle Corporation. All rights reserved.
Oracle Home : /oracle/app/product/11.2.0/dbhome_1
Central Inventory : /home/oracle/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.3
OUI version : 11.2.0.1.0
OUI location : /oracle/app/product/11.2.0/dbhome_1/oui
Log file location : /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2010-11-05_11-46-19AM.log
Patch history file: /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /oracle/app/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2010-
11-05_11-46-19AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g 11.2.0.1.0
There are 1 products installed in this Oracle Home.

........................
OPatch succeeded.




Oracle 11g AUTO SAMPLING STATISTICS


Optimizer statistics in Oracle are managed via the PL/SQL package DBMS_STATS, which provides procedures to gather statistics for a table, schema, or database. For example, gather_table_stats is used to gather statistics on a table. This procedure has an estimate_percent parameter, which specifies the sampling percentage for statistics gathering. You can specify any value between 0 and 100 for this parameter. For example, given a table BIGT, you can specify a 1% sampling percentage as follows:
exec dbms_stats.gather_table_stats(null, 'BIGT',
                       estimate_percent => 1);
It is not always easy to pick an appropriate sampling percentage. If the specified percentage is too high, statistics gathering can take a long time. On the contrary, if the data is skewed and the specified percentage is too low, the resulting statistics can be inaccurate. For this reason, Oracle introduced the AUTO value for the estimate_percent parameter. For example, you can gather statistics on BIGT as follows:
exec dbms_stats.gather_table_stats(null, 'BIGT',
                       estimate_percent => dbms_stats.auto_sample_size);
The advantage of using AUTO sample size over a fixed number is twofold. First, when AUTO sample size is specified, the system automatically determines the appropriate sampling percentage. Second, AUTO sample size is more flexible than a fixed sampling percentage: a fixed percentage that was appropriate at some point in time may no longer be appropriate after the data distribution in the table has changed, whereas with the AUTO value Oracle adjusts the sample size as the data distribution changes.
When AUTO is used, Oracle picks a sample size for which the statistics quality is good enough. However, it does not work well in cases of extreme skew in the data. Oracle 11g improved the behavior of the AUTO value. First, AUTO sampling now generates deterministic statistics. Second, and more importantly, AUTO sampling generates statistics that are almost as accurate as 100% sampling but takes much less time. To illustrate these merits, we compare the performance of a fixed sampling percentage, AUTO sample size in Oracle 10g, and AUTO sample size in Oracle 11g. We used the standard TPC-D data generator to generate a Lineitem table. The Lineitem table is about 230GB and contains 1.8 billion rows with 16 columns. The schema of the Lineitem table is as follows:
column name        column type
l_shipdate         date
l_orderkey         number
l_discount         number
l_extendedprice    number
l_suppkey          number
l_quantity         number
l_returnflag       varchar2
l_partkey          number
l_linestatus       varchar2
l_tax              number
l_commitdate       date
l_receiptdate      date
l_shipmode         varchar2
l_linenumber       number
l_shipinstruct     varchar2
l_comment          varchar2

Table 1 gives the elapsed time of gathering statistics on the Lineitem table using different sampling percentages.

Sampling Percentage            Elapsed Time (sec)
1% sampling                    797
100% sampling (Compute)        18772
Auto sampling in Oracle 10g    2935
Auto sampling in Oracle 11g    1908

Table 1: Statistics gathering time on the 230GB TPC-D Lineitem table using different estimate percentages

We also compared the quality of the statistics gathered using different estimate percentages. Among all the statistics of a column, the number of distinct values (NDV) is the one whose accuracy used to be an issue. We define the accuracy rate of the NDV of a column as follows:
accuracy rate = 1 - (|estimated NDV - actual NDV|)/actual NDV.
The accuracy rate ranges from 0% to 100%. The higher the accuracy rate, the more accurate the gathered statistics. Since 100% sampling always leads to an accuracy rate of 100%, we do not report it. We focus on the columns that have an accuracy rate below 99.9% for at least one of the estimate percentages. Table 2 shows the accuracy rates of these columns.
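As a worked example of this formula: the actual NDV of 450,000,000 is the orderkey value from Table 2, while the estimated NDV of 441,000,000 is a hypothetical figure consistent with a 98% accuracy rate.

```sql
-- Hypothetical worked example:
-- actual NDV = 450,000,000, estimated NDV = 441,000,000
SELECT 1 - ABS(441000000 - 450000000) / 450000000 AS accuracy_rate
  FROM dual;
-- accuracy_rate = 0.98, i.e. an accuracy rate of 98%
```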
Column Name      Actual NDV      Auto Sampling in Oracle 11g    1% Sampling
orderkey         450,000,000     98.0%                          50%
comment          181,122,127     98.60%                         4.60%
partkey          60,000,000      99.20%                         98.20%
suppkey          3,000,000       99.60%                         99.90%
extendedprice    3,791,320       99.60%                         94.30%

Table 2: Accuracy rate of NDV gathering on the Lineitem table using different estimate percentages

In short, AUTO sampling in Oracle 11g is roughly 10 times faster than 100% sampling (1908 vs. 18772 seconds) yet yields statistics of similar quality (accuracy rate close to 100%).

DBMS_STATS PREFS


In previous database releases you had to use the DBMS_STATS.SET_PARAM procedure to change the default value for the parameters used by the DBMS_STATS.GATHER_*_STATS procedures, and the scope of any change was all subsequent operations. In Oracle Database 11g, the DBMS_STATS.SET_PARAM procedure has been deprecated and replaced with a set of procedures that allow you to set a preference for each parameter at the table, schema, database, and global level. These new procedures are called DBMS_STATS.SET_*_PREFS and offer a much finer granularity of control. However, there has been some confusion around which procedure to use when, and what the hierarchy among these procedures is. In this post we hope to clear up the confusion. Let's start by looking at the list of parameters you can change using the DBMS_STATS.SET_*_PREFS procedures.
  • AUTOSTATS_TARGET (SET_GLOBAL_PREFS only)
  • CASCADE
  • DEGREE
  • ESTIMATE_PERCENT
  • METHOD_OPT
  • NO_INVALIDATE
  • GRANULARITY
  • PUBLISH
  • INCREMENTAL
  • STALE_PERCENT
As mentioned above there are four DBMS_STATS.SET_*_PREFS procedures.
  1. SET_TABLE_PREFS
  2. SET_SCHEMA_PREFS
  3. SET_DATABASE_PREFS
  4. SET_GLOBAL_PREFS
The DBMS_STATS.SET_TABLE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for the specified table only.
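For example, to stop a single table from being flagged stale until 5% of its rows have changed, you can set a table preference like this (the schema, table, and value here are purely illustrative):

```sql
BEGIN
  -- Table-level preference: overrides the global default
  -- for this one table only.
  DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'STALE_PERCENT', '5');
END;
/
```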

The DBMS_STATS.SET_SCHEMA_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the existing objects in the specified schema. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in the specified schema. Because it uses DBMS_STATS.SET_TABLE_PREFS, calling this procedure will not affect any objects created after it has been run; new objects will pick up the global preference values for all parameters.

The DBMS_STATS.SET_DATABASE_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for all of the user-defined schemas in the database. This procedure actually calls DBMS_STATS.SET_TABLE_PREFS for each of the tables in each of the user-defined schemas. Because it uses DBMS_STATS.SET_TABLE_PREFS, this procedure will not affect any objects created after it has been run; new objects will pick up the global preference values for all parameters. It is also possible to include the Oracle-owned schemas (SYS, SYSTEM, etc.) by setting the ADD_SYS parameter to TRUE.

The DBMS_STATS.SET_GLOBAL_PREFS procedure allows you to change the default values of the parameters used by the DBMS_STATS.GATHER_*_STATS procedures for any object in the database that does not have an existing table preference. All parameters default to the global setting unless a table preference is set or the parameter is explicitly set in the DBMS_STATS.GATHER_*_STATS command. Changes made by this procedure will affect any objects created after it has been run, since new objects pick up the global preference values for all parameters.

With GLOBAL_PREFS it is also possible to set a default value for one additional parameter, AUTOSTATS_TARGET. This parameter controls which objects the automatic statistics gathering job (which runs in the nightly maintenance window) looks after. The possible values for this parameter are ALL, ORACLE, and AUTO. ALL means the automatic statistics gathering job will gather statistics on all objects in the database. ORACLE means it will only gather statistics for Oracle-owned schemas (SYS, SYSTEM, etc.). Finally, AUTO (the default) means Oracle will decide which objects to gather statistics on. Currently AUTO and ALL behave the same.

In summary, DBMS_STATS obeys the following hierarchy for parameter values: a parameter value set explicitly in the DBMS_STATS.GATHER_*_STATS command overrules everything else; if the parameter has not been set in the command, a table-level preference is checked; and if there is no table preference, the global preference is used.
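To see which value will actually be used for a given table, the DBMS_STATS.GET_PREFS function walks this same hierarchy (schema and table names here are illustrative):

```sql
-- Returns the table preference if one is set,
-- otherwise the global preference value.
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SH', 'SALES') AS stale_pct
  FROM dual;
```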

Concurrent Statistics


Gathering optimizer statistics is one of life's necessary evils, even though it can take an extremely long time to complete. In this blog post, we discuss one remedy to improve the efficiency of statistics gathering.

Introduction

Oracle Database 11g Release 2 (11.2.0.2) introduces a new statistics gathering mode, 'concurrent statistics gathering'. The goal of this new mode is to enable a user to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. Gathering statistics on multiple tables and (sub)partitions concurrently can reduce the overall time it takes to gather statistics by allowing Oracle to fully utilize a multi-processor environment.

Concurrent statistics gathering is controlled by a global preference, CONCURRENT, in the DBMS_STATS package. The CONCURRENT preference is of type boolean, so it can be set to TRUE or FALSE; by default it is FALSE. When CONCURRENT is set to TRUE, Oracle employs the Oracle Job Scheduler and Advanced Queuing components to create and manage multiple statistics gathering jobs concurrently.

If you call dbms_stats.gather_table_stats on a partitioned table when CONCURRENT is set to true, Oracle will create a separate statistics gathering job for each (sub)partition in the table. The Job Scheduler will decide how many of these jobs will execute concurrently, and how many will be queued based on available system resources. As the currently running jobs complete, more jobs will be dequeued and executed until all (sub)partitions have had their statistics gathered on them.

If you gather statistics using dbms_stats.gather_database_stats, dbms_stats.gather_schema_stats, or dbms_stats.gather_dictionary_stats, then Oracle will create a separate statistics gathering job for each non-partitioned table, and for each (sub)partition of the partitioned tables. Each partitioned table also gets a coordinator job that manages its (sub)partition jobs. The database will then run as many concurrent jobs as possible, and queue the remaining jobs until the executing jobs complete. However, multiple partitioned tables are not processed simultaneously, to prevent possible deadlocks; hence, if jobs are running for one partitioned table, other partitioned tables in the schema (or database or dictionary) are queued until the current one completes. There is no such restriction for non-partitioned tables. The maximum number of concurrent statistics gathering jobs is bounded by the job_queue_processes initialization parameter (per node in a RAC environment) and the available system resources.
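You can check the current value of this limit with a quick query:

```sql
-- Maximum number of scheduler job slaves (per instance)
SELECT value
  FROM v$parameter
 WHERE name = 'job_queue_processes';
```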

The following figure illustrates the creation of jobs at different levels, where Table 3 is a partitioned table and the other tables are non-partitioned. Job 3 acts as a coordinator job for Table 3, and creates a job for each partition in that table, as well as a separate job for the global statistics of Table 3.

[Figure 1: Statistics gathering jobs created at different levels]
As another example, assume that the parameter job_queue_processes is set to 32 and you issue dbms_stats.gather_schema_stats on the SH schema. Oracle would create a statistics gathering job (Level 1 in Figure 1) for each of the non-partitioned tables:

  • SUPPLEMENTARY_DEMOGRAPHICS
  • COUNTRIES
  • CUSTOMERS
  • PROMOTIONS
  • CHANNELS
  • PRODUCTS
  • TIMES
And a coordinator job for each partitioned table, i.e., SALES and COSTS, which in turn creates a statistics gathering job for each partition of the SALES and COSTS tables, respectively (Level 2 in Figure 1). The Oracle Job Scheduler would then allow 32 statistics gathering jobs to start and would queue the rest (assuming there are sufficient resources for 32 concurrent jobs). Suppose the 29 jobs for the SALES table (one for each partition plus a coordinator job) get started; then three non-partitioned table statistics gathering jobs would also be started. The statistics gathering jobs for the COSTS table would be queued automatically, because only one partitioned table can be processed at any one time. As each job finishes, another job is dequeued and started, until all 63 jobs have completed.
Configuration and Settings
In Oracle Database 11.2.0.2, the concurrency setting for statistics gathering is turned off by default. It can be turned on using the following command.
BEGIN
  DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');
END;
/
You will also need some additional privileges above and beyond the regular privileges required to gather statistics. The user must have the following Job Scheduler and AQ privileges:
  • CREATE JOB
  • MANAGE SCHEDULER
  • MANAGE ANY QUEUE
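These can be granted in a single statement (the grantee name here is illustrative):

```sql
-- Scheduler and AQ privileges needed for concurrent statistics gathering
GRANT CREATE JOB, MANAGE SCHEDULER, MANAGE ANY QUEUE TO stats_admin;
```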
The SYSAUX tablespace should be online, as the Job Scheduler stores its internal tables and views in the SYSAUX tablespace.
Finally, the job_queue_processes parameter should be set to at least 4. If you want to fully utilize all of the system resources during statistics gathering but do not plan to use parallel execution, you should set job_queue_processes to 2 x the total number of CPU cores (this is a per-node parameter in a RAC environment). Please make sure you set this parameter system-wide (i.e., with ALTER SYSTEM or in the init.ora file) rather than at the session level (i.e., with ALTER SESSION).
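For example, on a server with 32 CPU cores (the core count here is illustrative):

```sql
-- 2 x 32 cores = 64 concurrent job slaves
ALTER SYSTEM SET job_queue_processes = 64;
```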

Using Concurrent Statistics Gathering with Parallel Execution

When using concurrent statistics gathering, it is still possible to have each individual statistics gathering job execute in parallel. This combination is normally used when the objects (tables or (sub)partitions) being analyzed are large. If you plan to execute the concurrent statistics gathering jobs in parallel, you should disable the parallel adaptive multi-user initialization parameter. That is:
ALTER SYSTEM SET parallel_adaptive_multi_user = FALSE;

It is also recommended that you enable parallel statement queuing. This requires Resource Manager to be activated (if not already), and the creation of a temporary resource plan where the consumer group "OTHER_GROUPS" should have queuing enabled. By default, Resource Manager is activated only during the maintenance windows. The following script illustrates one way of creating a temporary resource plan (pqq_test), and enabling the Resource Manager with this plan.
-- connect as a user with DBA privileges
begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_plan('pqq_test', 'pqq_test');
  dbms_resource_manager.create_plan_directive(
    'pqq_test',
    'OTHER_GROUPS',
    'OTHER_GROUPS directive for pqq',
    parallel_target_percentage => 90);
  dbms_resource_manager.submit_pending_area();
end;
/
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'pqq_test' SID='*';
Monitoring Concurrent Statistics Gathering Jobs

You can use the standard database Scheduler views to monitor the concurrent statistics gathering jobs. The comments field of a job in dba_scheduler_jobs shows the target object for that statistics gathering job in the form owner.table.(sub)partition. All statistics collection job names start with 'ST$' for easy identification. Jobs whose names start with ST$T###_### are created by a coordinator job for a partitioned table, and work on a partition or subpartition of that table.

Jobs whose names begin with ST$SD###_### are created for a table in a schema or database, and either act as a coordinator for partition-level jobs (if the table is partitioned) or directly perform the statistics collection for the table (if it is not).

Finally, jobs with ST$D###_### in their names are created for dictionary-level tasks (when gather_dictionary_stats is used), and do similar work to the ST$SD jobs.

Using the following query you can see all of the concurrent statistics gathering jobs that have been created:

select job_name, state, comments
from dba_scheduler_jobs
where job_class like 'CONC%';

To only see the currently running jobs, filter by the job state:
select job_name, state, comments
from dba_scheduler_jobs
where job_class like 'CONC%'
and state = 'RUNNING';

Similarly, to see the scheduled (i.e., waiting to run) jobs you just need to change the state:
select job_name, state, comments
from dba_scheduler_jobs
where job_class like 'CONC%'
and state = 'SCHEDULED';

It is also possible to see the elapsed time of the currently running statistics gathering jobs:
select job_name, elapsed_time
from dba_scheduler_running_jobs
where job_name like 'ST$%';

Known Limitations

Concurrent statistics gathering does not use any automated way to determine the maximum number of jobs that can run concurrently based on system load and resources; it relies solely on the Job Scheduler and the value of the job_queue_processes parameter. Session-level parameter settings and events are not transferred to the newly created jobs (a Scheduler limitation). Index statistics are not gathered concurrently.