Modified 20-MAY-2011 Type HOWTO Status REVIEW_READY(INTERNAL) ***Oracle Confidential - Internal Use Only*** Priority 3
@ Oracle Confidential (INTERNAL). Do not distribute to customers
@ Reason: For RAC team internal
Applies to:
Oracle Universal Installer - Version: 11.2.0.2 to 11.2.0.2 - Release: 11.2 to 11.2
Information in this document applies to any platform.
Goal
The purpose of this bulletin is to provide step-by-step instructions for cloning an 11.2.0.3 installation to create another 11.2.0.3 installation.
[This section is not visible to customers.]
Solution
When is cloning useful: cloning enables you to create a new installation (a copy of a production, test, or development
installation) with all patches already applied, in a single step.
An Example of How to Use Cloning to Clone Oracle Clusterware
1.1 Preparing the Oracle Grid Infrastructure Home for Cloning
1.1.1 Install Oracle Clusterware on strdv05/strdv06
The basic configuration is as follows:
HW: 2 nodes(strdv05/strdv06)
OS: OEL5(X86_64) with kernel 2.6.18-92.0.0.0.1.el5
Shiphome: 11.2.0.3.0(Development)
OCR and VD: on ASM
GNS: Disabled(NO GNS)
IPMI: Disabled(NO IPMI)
CRS OWNER: crsusr
CRS BASE: /u01/app/crsusr
CRS HOME(CH): /u01/11.2.0/grid
ORACLE OWNER: racusr
ORACLE BASE: /u01/app/racusr
ORACLE_HOME: /u01/app/racusr/product/11.2.0/db_home1
ORACLE INVENTORY: /u01/app/oraInventory
1.1.2 Shut Down Running Software
Issued 'crsctl stop crs' on strdv05 & strdv06
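The shutdown can also be scripted across both nodes. The sketch below is illustrative only: it assumes passwordless root ssh between the nodes (an assumption, not part of the setup above), and by default it only prints the commands so they can be reviewed before running.

```shell
#!/bin/sh
# Sketch: stop Clusterware on every node before copying the Grid home.
# Assumes passwordless root ssh to each node (an assumption, not part
# of the setup above). DRYRUN=1 (the default) only prints the commands.
GRID_HOME=/u01/11.2.0/grid
NODES="strdv05 strdv06"
DRYRUN=${DRYRUN:-1}

# Build the per-node stop command as a single string.
stop_cmd() {
    echo "ssh root@$1 $GRID_HOME/bin/crsctl stop crs"
}

for n in $NODES; do
    if [ "$DRYRUN" = "1" ]; then
        stop_cmd "$n"              # review first
    else
        eval "$(stop_cmd "$n")"    # actually stop CRS
    fi
done
```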
1.1.3 Create a Copy of the Oracle Grid Infrastructure Home
Invoke the following script as root to create a copy of the Grid home in /ocfs2/cloned_crs, remove unnecessary files, and create a compressed archive
named gridHome.tgz.
-----------------<createClonecopy.sh>-------------------
#!/bin/sh
#Specify a temporary location which has sufficient space for a copy of the Grid home
GI_COPY=/ocfs2/cloned_crs
#Specify current Grid Home location
GRID_HOME=/u01/11.2.0/grid
#Specify the file name and path where you want to create the compressed Grid home copy
GI_COPY_TAR=/ocfs2/gridHome.tgz
if [ -d "$GI_COPY" ]
then
rm -rf $GI_COPY/*
else
mkdir -p $GI_COPY
fi
cp -prf $GRID_HOME/* $GI_COPY/
rm -rf $GI_COPY/log/`hostname -s`
rm -rf $GI_COPY/gpnp/`hostname -s`
find $GI_COPY/gpnp -type f -exec rm -f {} \;
find $GI_COPY/cfgtoollogs -type f -exec rm -f {} \;
rm -rf $GI_COPY/crs/init/*
rm -rf $GI_COPY/cdata/*
rm -rf $GI_COPY/crf/*
rm -rf $GI_COPY/network/admin/*.ora
rm -rf $GI_COPY/root.sh*
find $GI_COPY/ -name '*.ouibak' -exec rm {} \;
find $GI_COPY/ -name '*.ouibak.1' -exec rm {} \;
TMP_PWD=$PWD
cd $GI_COPY
tar -zcvf $GI_COPY_TAR .
cd $TMP_PWD
------------------------------------------
1.2 Creating a Cluster by Cloning Oracle Clusterware
1.2.1 Prepare the New Cluster Nodes
Prepare two new cluster nodes: strdv07/strdv08
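Before deploying the copy, it is worth verifying that the new nodes match an existing node's configuration. A minimal sketch using CVU's peer comparison, run as crsusr from strdv05 (the -refnode/-n/-orainv flags are standard cluvfy options; the command is only printed here for review):

```shell
#!/bin/sh
# Sketch: compare the new nodes against an existing node with CVU.
# Run as the Grid owner (crsusr) from an existing node such as strdv05.
GRID_HOME=/u01/11.2.0/grid
REFNODE=strdv05
NEW_NODES=strdv07,strdv08

# Build the cluvfy peer-comparison command as a string.
cluvfy_cmd() {
    echo "$GRID_HOME/bin/cluvfy comp peer -refnode $REFNODE -n $NEW_NODES -orainv oinstall -verbose"
}

# Print the command for review; replace the echo with direct
# execution to actually run the check.
cluvfy_cmd
```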
1.2.2 Deploy the Oracle Grid Infrastructure Home on the Destination Nodes, and run the clone.pl Script on Each Destination Node
[root@strdv05:~]# scp -p /ocfs2/gridHome.tgz crsusr@strdv07:/ocfs2
[root@strdv05:~]# scp -p /ocfs2/gridHome.tgz crsusr@strdv08:/ocfs2
Invoke the following script as root on strdv07 and strdv08 (it uncompresses /ocfs2/gridHome.tgz to /u01/11.2.0/grid and runs clone.pl on each node).
------------ <deployClone.sh>-----------
#!/bin/sh
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/crsusr
GRID_HOME=/u01/11.2.0/grid
GI_COPY_TAR=/ocfs2/gridHome.tgz
GRID_HOME_NAME=Ora11g_gridinfrahome1
CLONE_USR=crsusr
CLONE_GROUP=oinstall
NODE1=strdv07
NODE2=strdv08
if [ -d "$GRID_HOME" ]
then
rm -rf $GRID_HOME/*
else
mkdir -p $GRID_HOME
fi
tar -zxvf $GI_COPY_TAR -C $GRID_HOME
chown -R $CLONE_USR:$CLONE_GROUP $GRID_HOME
chmod u+s $GRID_HOME/bin/oracle
chmod g+s $GRID_HOME/bin/oracle
chmod u+s $GRID_HOME/bin/extjob
chmod u+s $GRID_HOME/bin/jssu
chmod u+s $GRID_HOME/bin/oradism
if [ -d "$INVENTORY_LOCATION" ]
then
rm -rf $INVENTORY_LOCATION
fi
mkdir -p $INVENTORY_LOCATION
chown $CLONE_USR:$CLONE_GROUP $INVENTORY_LOCATION
if [ -d "$ORACLE_BASE" ]
then
rm -rf $ORACLE_BASE
fi
mkdir -p $ORACLE_BASE
chown $CLONE_USR:$CLONE_GROUP $ORACLE_BASE
THIS_NODE=`hostname -s`
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=${GRID_HOME_NAME}
E04=INVENTORY_LOCATION=${INVENTORY_LOCATION}
#C00="-O'-debug'"
C01="'-O\"CLUSTER_NODES={$NODE1,$NODE2}\"'"
C02="'-O\"LOCAL_NODE=${THIS_NODE}\"'"
su $CLONE_USR -c "perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02"
---------------------------------------------
The above script prompts you to execute two scripts to finish the configuration. You will see output similar to the following:
End of install phases.(Monday, March 21, 2011 11:28:27 PM PDT)
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'strdv07'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/oraInventory/orainstRoot.sh #On nodes strdv07
/u01/11.2.0/grid/root.sh #On nodes strdv07
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
Run the script on the local node first. After successful completion, you can run the script in parallel on all the other nodes.
The cloning of Ora11g_gridinfrahome1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2011-03-21_11-24-01PM.log' for more details.
In a user environment, these are orainstRoot.sh under the central inventory and root.sh under the Grid home; both must be executed as the root user.
Run the following commands on the relevant nodes:
[root@strdv07:~]# /u01/app/oraInventory/orainstRoot.sh <== Run on strdv07
[root@strdv08:~]# /u01/app/oraInventory/orainstRoot.sh <== Run on strdv08
[root@strdv07:~]# /u01/11.2.0/grid/root.sh <== Run on strdv07
[root@strdv08:~]# /u01/11.2.0/grid/root.sh <== Run on strdv08
Check /u01/11.2.0/grid/install/root_strdv07_2011-03-21_23-35-16.log for the output of the root script
1.2.3 Launch the Configuration Wizard
As shown in the output of root.sh, it may ask you to execute $GRID_HOME/crs/install/roothas.pl or config.sh under $GRID_HOME/crs/config. You will see prompts similar to the following:
# cat /u01/11.2.0/grid/install/root_strdv07_2011-03-21_23-35-16.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= crsusr
ORACLE_HOME= /u01/11.2.0/grid
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/11.2.0/grid/perl/bin/perl -I/u01/11.2.0/grid/perl/lib -I/u01/11.2.0/grid/crs/install /u01/11.2.0/grid/crs/install/roothas.pl
To configure Grid Infrastructure for a Cluster execute the following command:
/u01/11.2.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
Since we want to configure GI for a cluster, execute $GRID_HOME/crs/config/config.sh as the clone user. This script provides a GUI for the user to accomplish the configuration. The configuration wizard leads you through setting the corresponding parameters (cluster name, SCAN, GNS, cluster node list and VIPs, NICs, disks for the ASM disk group). After a prerequisite check by CVU, the wizard asks you to execute $GRID_HOME/root.sh as the root user. Finish running that root script, then continue to the end of OUI.
[crsusr@strdv07:~]# /u01/11.2.0/grid/crs/config/config.sh
<==Run as crsusr in x-win environment of strdv07
<==Select "Configure Oracle Grid Infrastructure for a Cluster"
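As mentioned above, the configuration wizard also supports silent operation through a response file. A minimal sketch, assuming a response file built from the template shipped on the installation media (the /tmp/config.rsp path is hypothetical; the command is only printed here for review):

```shell
#!/bin/sh
# Sketch: run the configuration wizard silently with a response file.
# /tmp/config.rsp is a hypothetical path; build the file from the
# response file template shipped on the installation media.
GRID_HOME=/u01/11.2.0/grid
RSP=/tmp/config.rsp

# Build the silent config.sh invocation as a string.
config_cmd() {
    echo "$GRID_HOME/crs/config/config.sh -silent -responseFile $RSP"
}

# Print the command for review; run it as crsusr once the response
# file is ready.
config_cmd
```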
1.3 Clone Oracle RAC
1.3.1 Create a copy of the Oracle RAC home
[root@strdv05:~]# cd /u01/app/racusr/product/11.2.0/db_home1
[root@strdv05:~]# tar -zcvf /ocfs2/db1120.tgz .
1.3.2 Deploy the Oracle RAC database software
Copy the clone of the Oracle home to all nodes.
[root@strdv05:~]# scp -p /ocfs2/db1120.tgz racusr@strdv07:/ocfs2
[root@strdv05:~]# scp -p /ocfs2/db1120.tgz racusr@strdv08:/ocfs2
Run the following commands on all nodes to create the RAC home, uncompress the .tgz package, and set file permissions:
mkdir -p /u01/app/racusr/product/11.2.0/db_home1
cd /u01/app/racusr/product/11.2.0/db_home1
tar -zxvf /ocfs2/db1120.tgz
chown -R racusr:oinstall /u01/app/racusr/product/11.2.0/db_home1
cd bin
chmod u+s oradism nmo nmhs emtgtctl2 jssu extjob nmb oracle
chmod g+s emtgtctl2 oracle
1.3.3 Run the clone.pl script as the RAC owner on each node (strdv07 and strdv08)
Method 1:
Run the following command on strdv07:
perl /u01/app/racusr/product/11.2.0/db_home1/clone/bin/clone.pl -silent ORACLE_BASE=/u01/app/racusr ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1 ORACLE_HOME_NAME=OraDb11g_home1 INVENTORY_LOCATION=/u01/app/oraInventory '-O"CLUSTER_NODES={strdv07,strdv08}"' '-O"LOCAL_NODE=strdv07"'
Run the following command on strdv08:
perl /u01/app/racusr/product/11.2.0/db_home1/clone/bin/clone.pl -silent ORACLE_BASE=/u01/app/racusr ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1 ORACLE_HOME_NAME=OraDb11g_home1 INVENTORY_LOCATION=/u01/app/oraInventory '-O"CLUSTER_NODES={strdv07,strdv08}"' '-O"LOCAL_NODE=strdv08"'
Method 2: Run the following script on strdv07 and strdv08
# cat rac_clone_start.sh
ORACLE_BASE=/u01/app/racusr
ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1
cd $ORACLE_HOME/clone
THISNODE=`hostname -s`
E01=ORACLE_HOME=${ORACLE_HOME}
E02=ORACLE_HOME_NAME=OraDb11g_home1
E03=ORACLE_BASE=${ORACLE_BASE}
C01="-O'CLUSTER_NODES={strdv07,strdv08}'"
C02="-O'LOCAL_NODE=$THISNODE'"
perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02
1.3.4 Run the $ORACLE_HOME/root.sh script on each node
The above clone.pl script prompts you to execute one script to finish the configuration. You will see output similar to the following:
End of install phases.(Tuesday, March 22, 2011 2:24:07 AM PDT)
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/u01/app/racusr/product/11.2.0/db_home1/root.sh #On nodes strdv07
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The cloning of OraDb11g_home1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2011-03-22_02-18-15AM.log' for more details.
The output of root.sh is like following:
# /u01/app/racusr/product/11.2.0/db_home1/root.sh
Check /u01/app/racusr/product/11.2.0/db_home1/install/root_strdv07_2011-03-22_02-26-35.log for the output of root script
1.3.5 Run DBCA to create the Oracle RAC instances on each node
This step shows how to run the DBCA in silent mode and provide response file input to create the Oracle RAC instances.
The following example creates an Oracle RAC database named ORCL on each node, creates database instances on each node, registers the instances in OCR, creates the database files in the Oracle ASM disk group called STRDV0708OV, and creates sample schemas. It also sets the SYS, SYSTEM, SYSMAN and DBSNMP passwords to password, which is the password for each account:
export ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1
cd $ORACLE_HOME/bin/
./dbca -silent -createDatabase -templateName General_Purpose.dbc \
-gdbName ORCL -sid ORCL \
-sysPassword password -systemPassword password \
-sysmanPassword password -dbsnmpPassword password \
-emConfiguration LOCAL \
-storageType ASM -diskGroupName STRDV0708OV \
-datafileJarLocation $ORACLE_HOME/assistants/dbca/templates \
-nodelist strdv07,strdv08 -characterset WE8ISO8859P1 \
-obfuscatedPasswords false -sampleSchema true
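After DBCA completes, the new database and its instances can be checked with srvctl. A small sketch that builds the verification commands for the ORCL database created above (the commands are only printed here; run them as racusr on either node):

```shell
#!/bin/sh
# Sketch: verify the new RAC database and its instances after DBCA.
ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1
DB_NAME=ORCL

# Build the srvctl verification commands as strings.
status_cmd() { echo "$ORACLE_HOME/bin/srvctl status database -d $DB_NAME"; }
config_cmd() { echo "$ORACLE_HOME/bin/srvctl config database -d $DB_NAME"; }

# Print the commands for review; run them as racusr on either node.
status_cmd
config_cmd
```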
1.4 Using Cloning to Add Nodes to a Cluster
1.4.1 Prepare two new nodes (strdv07/strdv08) and create a copy of the CRS home as described in section 1.1 (1.1.1-1.1.3)
1.4.2 Deploy the Oracle Grid Infrastructure home on two new nodes
[root@strdv05:~]# scp -p /ocfs2/gridHome.tgz crsusr@strdv07:/ocfs2
[root@strdv05:~]# scp -p /ocfs2/gridHome.tgz crsusr@strdv08:/ocfs2
Invoke the following script as root on strdv07 and strdv08 to finish Step 2 and Step 3 (it uncompresses /ocfs2/gridHome.tgz to /u01/11.2.0/grid and prepares the inventory and Oracle base directories).
------------ <deployCloneToAddNode.sh>-----------
#!/bin/sh
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_BASE=/u01/app/crsusr
GRID_HOME=/u01/11.2.0/grid
GI_COPY_TAR=/ocfs2/gridHome.tgz
GRID_HOME_NAME=Ora11g_gridinfrahome1
CLONE_USR=crsusr
CLONE_GROUP=oinstall
NODE1=strdv07
NODE2=strdv08
if [ -d "$GRID_HOME" ]
then
rm -rf $GRID_HOME/*
else
mkdir -p $GRID_HOME
fi
tar -zxvf $GI_COPY_TAR -C $GRID_HOME
chown -R $CLONE_USR:$CLONE_GROUP $GRID_HOME
chmod u+s $GRID_HOME/bin/oracle
chmod g+s $GRID_HOME/bin/oracle
chmod u+s $GRID_HOME/bin/extjob
chmod u+s $GRID_HOME/bin/jssu
chmod u+s $GRID_HOME/bin/oradism
if [ -d "$INVENTORY_LOCATION" ]
then
rm -rf $INVENTORY_LOCATION
fi
mkdir -p $INVENTORY_LOCATION
chown $CLONE_USR:$CLONE_GROUP $INVENTORY_LOCATION
if [ -d "$ORACLE_BASE" ]
then
rm -rf $ORACLE_BASE
fi
mkdir -p $ORACLE_BASE
chown $CLONE_USR:$CLONE_GROUP $ORACLE_BASE
1.4.3 Run the clone.pl script located in the Grid_home/clone/bin directory on Node strdv07/strdv08.
perl clone.pl ORACLE_HOME=Grid_home ORACLE_HOME_NAME=Grid_home_name ORACLE_BASE=ORACLE_BASE SHOW_ROOTSH_CONFIRMATION=false
eg:
su - crsusr
cd /u01/11.2.0/grid/clone/bin
perl clone.pl ORACLE_HOME=/u01/11.2.0/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 ORACLE_BASE=/u01/app/crsusr SHOW_ROOTSH_CONFIRMATION=false
The above clone.pl script prompts you to execute one script to finish the configuration. You will see output similar to the following:
End of install phases.(Tuesday, March 22, 2011 6:58:20 AM PDT)
WARNING:A new inventory has been created in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script '/u01/app/oraInventory/orainstRoot.sh' with root privileges.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user.
/u01/app/oraInventory/orainstRoot.sh
/u01/11.2.0/grid/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts
Run the script on the local node.
The cloning of Ora11g_gridinfrahome1 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2011-03-22_06-56-14AM.log' for more details.
If you are prompted to run root.sh, then ignore the prompt and proceed to the next step.
(There's a bug: Bug 11726018: CLONE.PL MESSAGE TO RUN ROOTSCRIPT
OUI_11.2.0.3.0_LINUX.X64_110313 contains changes from hrangasw_bug-11726018_11.2.0.3)
1.4.4 Run orainstRoot.sh on the new added node as prompted
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
1.4.5. Run the following CVU command from the Grid_home/bin directory on the source node to verify that the new nodes are ready to be added:
$ cluvfy stage -pre nodeadd -n strdv07,strdv08 -vip strdv07-vip,strdv08-vip
<==Please start the CRS stack before running this command on the source node
1.4.6. Run the addNode.sh (addNode.bat on Windows) script, located in the Grid_home/oui/bin directory, on the source node(Node 1), as follows:
$ addNode.sh -silent -noCopy ORACLE_HOME=Grid_home "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}" "CLUSTER_NEW_VIPS={node2-vip}" CRS_ADDNODE=true CRS_DHCP_ENABLED=false
eg: run the following command on strdv05 (the source node)
cd /u01/11.2.0/grid/oui/bin
addNode.sh -silent -noCopy ORACLE_HOME=/u01/11.2.0/grid "CLUSTER_NEW_NODES={strdv07,strdv08}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={strdv07-vip,strdv08-vip}" "CLUSTER_NEW_VIPS={strdv07-vip,strdv08-vip}" CRS_ADDNODE=true CRS_DHCP_ENABLED=false
Notes:
* Because you already ran the clone.pl script on the destination nodes(Node 2), this step only updates the inventories on the node and instantiates scripts on the local node.
* If you use the -noCopy option with the addNode.sh script, then a copy of the password file may not exist on Node 2, in which case you must copy a correct password file to Node 2.
You will be prompted to run root.sh; proceed to the next step first, and then run root.sh.
1.4.7. Copy the following files from Node 1, on which you ran addnode.sh, to Node 2:
<==Please pay attention to permissions; the files should be copied as crsusr
Grid_home/crs/install/crsconfig_addparams
Grid_home/crs/install/crsconfig_params
Grid_home/gpnp
eg:
su - crsusr
cd /u01/11.2.0/grid/crs/install
scp crsconfig_addparams strdv07:/u01/11.2.0/grid/crs/install/
scp crsconfig_params strdv07:/u01/11.2.0/grid/crs/install/
scp -r /u01/11.2.0/grid/gpnp strdv07:/u01/11.2.0/grid
cd /u01/11.2.0/grid/crs/install
scp crsconfig_addparams strdv08:/u01/11.2.0/grid/crs/install/
scp crsconfig_params strdv08:/u01/11.2.0/grid/crs/install/
scp -r /u01/11.2.0/grid/gpnp strdv08:/u01/11.2.0/grid
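The per-node copies above can be collapsed into a loop. A sketch (run as crsusr on strdv05; DRYRUN=1, the default, only prints the scp commands for review):

```shell
#!/bin/sh
# Sketch: copy the add-node configuration files to each new node.
# Run as crsusr on the node where addNode.sh was executed (strdv05).
GRID_HOME=/u01/11.2.0/grid
DRYRUN=${DRYRUN:-1}

# Emit the scp commands for one destination node.
copy_cmds() {
    node=$1
    echo "scp $GRID_HOME/crs/install/crsconfig_addparams $node:$GRID_HOME/crs/install/"
    echo "scp $GRID_HOME/crs/install/crsconfig_params $node:$GRID_HOME/crs/install/"
    echo "scp -r $GRID_HOME/gpnp $node:$GRID_HOME"
}

for n in strdv07 strdv08; do
    if [ "$DRYRUN" = "1" ]; then
        copy_cmds "$n"          # review first
    else
        copy_cmds "$n" | sh     # actually copy the files
    fi
done
```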
1.4.8. On the destination nodes (Node 2), run the Grid_home/root.sh script. Run this script on Node 2 first and then, after it completes, you can run it in parallel on the rest of the nodes in the cluster. The following example is for a Linux or UNIX system. On Node 2, run the following command:
[root@node2 root]# /u01/11.2.0/grid/root.sh
Ensure that the root.sh script has completed on Node 2 before running it on subsequent nodes.
The root.sh script automatically configures the following node applications:
* Global Services Daemon (GSD)
* Oracle Notification Service (ONS)
* Virtual IP (VIP) resources in the Oracle Cluster Registry (OCR)
* Single Client Access Name (SCAN) VIPs and SCAN listeners
* Oracle ASM
On Windows, run the following command on Node 2:
C:\>Grid_home\crs\config\gridconfig.bat
1.4.9. Navigate to the Oracle_home/oui/bin directory on the source node(Node 1) and run the addNode.sh script using the following syntax:
$ ./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={node2}"
eg:
cd /u01/app/racusr/product/11.2.0/db_home1/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={strdv07,strdv08}"
<==run as racusr on strdv05 if you need to copy the software to the destination
./addNode.sh -silent -noCopy "CLUSTER_NEW_NODES={strdv07,strdv08}"
<==run as racusr on strdv05 if the Oracle home is fully populated with software
Note:
Use the -noCopy option only if the Oracle home on the destination node is fully populated with software. Otherwise, omit the -noCopy option so that the software is copied to the destination node when you run the addNode.sh script.
1.4.10. Run the Oracle_home/root.sh script on Node 2(strdv07/strdv08) as root, where Oracle_home is the Oracle RAC home.
/u01/app/racusr/product/11.2.0/db_home1/root.sh
1.4.11. Run the Grid_home/crs/install/rootcrs.pl script on Node 2 as root, following the instructions generated when you ran root.sh in the previous step.
1.4.12. Run the following cluster verification utility (CVU) command on Node 2:
$ cluvfy stage -post nodeadd -n destination_node_name [-verbose]
eg:
$ cluvfy stage -post nodeadd -n strdv07,strdv08 -verbose
For more information, please refer to:
Clone Oracle Clusterware:
Creating a Cluster by Cloning Oracle Clusterware
Using Cloning to Add Nodes to a Cluster
http://st-doc.us.oracle.com/review/rsb/html/E16794_09/clonecluster.htm
Using cloning to extend Oracle RAC to nodes in a New Cluster
http://st-doc.us.oracle.com/review/rsb/html/E16795_09/clonerac.htm
Using cloning to extend Oracle RAC to nodes in an existing cluster
http://st-doc.us.oracle.com/review/rsb/html/E16795_09/cloneracwithoui.htm
Notes: after finishing the above steps, the CRS and RAC node addition is complete, but you still need to add a database instance to the existing RAC database.
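Adding an instance of the existing database on the new nodes can be done with DBCA in silent mode. A hedged sketch (the database name ORCL and instance names ORCL3/ORCL4 are illustrative, not from the setup above; the commands are only printed here, run them as racusr):

```shell
#!/bin/sh
# Sketch: add an instance of an existing RAC database on a new node.
# ORCL and ORCL3/ORCL4 are illustrative names; adjust to your database.
ORACLE_HOME=/u01/app/racusr/product/11.2.0/db_home1

# Build the dbca -addInstance command for one node/instance pair.
add_instance_cmd() {
    echo "$ORACLE_HOME/bin/dbca -silent -addInstance -nodeList $1 -gdbName ORCL -instanceName $2 -sysDBAUserName sys -sysDBAPassword password"
}

# Print the commands for review; run them as racusr.
add_instance_cmd strdv07 ORCL3
add_instance_cmd strdv08 ORCL4
```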
[This section is not visible to customers.]
References
NOTE:300062.1 - How To Clone An Existing RDBMS Installation Using OUI
NOTE:444617.1 - HOWTO input variables to CRS home cloning procedure clone.pl
NOTE:458450.1 - Steps to Manually Clone a Database
NOTE:549268.1 - How To Clone An Existing RDBMS Installation Using EMGC
NOTE:559863.1 - An Example Of How To Clone An Existing Oracle9i Release 2 (9.2.0.x) RDBMS Installation
NOTE:562556.1 - How to Manually Clone a Database to Another Node