Notes on Uninstalling Oracle RAC

 

Today I had planned to run an RMAN backup experiment on RAC, but a problem left over from the original RAC installation was still unresolved, and it kept getting in the way, so I decided to fix it first.

      

During the earlier installation I did not pay attention to the order, so the ASM2 instance ended up on node RAC1 and ASM1 on node RAC2, which makes starting the RAC environment a hassle every time. The only way to fix this is to delete the database instances first, then remove ASM, and then reinstall. I searched Google, organized what I found below, and ran an experiment to verify it.

 

 

 

一. Deleting the RAC Database

This section explains how to delete a RAC database with the DBCA. This process deletes a database and removes a database's initialization parameter files, instances, OFA structure, and Oracle network configuration. However, this process does not remove datafiles if you placed the files on raw devices or on raw partitions.

To delete a database with the DBCA:

1.      Start the DBCA on one of the nodes:

The DBCA Welcome page appears.

2.      Select Oracle Real Application Clusters and click Next.

After you click Next, the DBCA displays the Operations page.

3.      Select Delete a database, click Next, and the DBCA displays the List of Cluster Databases page.

4.      If your user ID and password are not operating-system authenticated, then the List of Cluster Databases page displays the user name and password fields. If these fields appear, then enter a user ID and password that has SYSDBA privileges.

5.      Select the database to delete and click Finish.

After you click Finish, the DBCA displays a dialog to confirm the database and instances that the DBCA is going to delete.

6.      Click OK to begin the deletion of the database and its associated files, services, and environment settings, or click Cancel to stop the operation.

When you click OK, the DBCA continues the operation and deletes all of the associated instances for this database. The DBCA also removes the parameter files, password files, and oratab entries.

At this point, you have accomplished the following:

·         Deleted the selected database from the cluster

·         Deleted high availability services that were assigned to the database

·         Deleted the Oracle Net configuration for the database

·         Deleted the OFA directory structure from the cluster

·         Deleted the datafiles if the datafiles were not on raw devices
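The GUI walkthrough above can also be scripted with DBCA's silent mode. A minimal preview-only sketch; the database name RACDB and the password placeholder are hypothetical, substitute your own:

```shell
# Build the dbca silent-delete command line. The function only prints
# the command so it can be reviewed before being run for real.
build_delete_cmd() {
  db=$1   # placeholder database name
  printf 'dbca -silent -deleteDatabase -sourceDB %s -sysDBAUserName sys -sysDBAPassword <password>\n' "$db"
}

# Preview for a hypothetical database named RACDB:
build_delete_cmd RACDB
```

Running the printed command on one node performs the same deletion as the DBCA screens above.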

 

二. Uninstalling the ASM Instance

 

How to drop an ASM instance installed in a separate Oracle home, for both RAC and non-RAC installations.

 

Solution

 

The steps involved, in outline:
a) Back up all the ASM client database files stored on the diskgroups.
b) Drop all the diskgroups.
c) Remove the ASM resource from CRS (RAC-specific).
d) Remove the ASM disk signatures (if using ASMLib).
e) Remove the ASM pfile/spfile.
f) Remove the ASM entry in the oratab file.
g) Wipe the disk headers using dd.

 

The detailed steps are as follows:
1) Log into the ASM instance and run 'select * from v$asm_client;'

      

2) For each instance listed above, stop the respective database.

 

3) Backup all the datafiles, logfiles, controlfiles, archive logs, etc. that are currently using ASM storage, to tape or to filesystem (using RMAN). This needs to be done for every database (ASM client) using ASM.

** NOTE: Please make sure you have the data secure before continuing to the next step.

 

4) Find all the diskgroups: 'select * from v$asm_diskgroup'

5) For each diskgroup listed above:

' drop diskgroup <name> including contents'

 

Note: you must dismount the diskgroup on one node first, and then drop it from the other node. Otherwise the drop fails with:

       ORA-15073: diskgroup DATA is mounted by another ASM instance

 

alter diskgroup <name> dismount;
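The two-node sequence can be sketched as a helper that emits the SQL in the required order. Preview only; the diskgroup name DATA and the SIDs +ASM1/+ASM2 in the comments are placeholders for this sketch:

```shell
# Emit the SQL for the dismount-then-drop order described above.
# Feed each statement to sqlplus on the indicated node, e.g.:
#   ORACLE_SID=+ASM2 sqlplus / as sysdba   (dismount)
#   ORACLE_SID=+ASM1 sqlplus / as sysdba   (drop)
drop_diskgroup_sql() {
  dg=$1
  echo "-- on the other node (e.g. +ASM2):"
  echo "alter diskgroup $dg dismount;"
  echo "-- on this node (e.g. +ASM1):"
  echo "drop diskgroup $dg including contents;"
}

drop_diskgroup_sql DATA
```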

 

6) Shut down the ASM instances on all RAC nodes.

 

7) On a RAC install, verify that all ASM instances are stopped:

$ORA_CRS_HOME/bin/crs_stat | more   <- look for the ASM resources and make sure TARGET=OFFLINE

 

8) For a single-instance install, run the following script:

$ORACLE_HOME/bin/localconfig delete

* This cleans up the CSSD configuration.

 

9) Invoke OUI, and now de-install the ASM Oracle home.

 

10) For a RAC install, remove the ASM-related resources:

srvctl remove asm -n <nodename>   <- perform for all nodes of the RAC cluster

crs_stat | more   <- make sure no ASM resources remain

 

       For example: srvctl remove asm -n rac1. If the resource cannot be removed, add the -f flag.

 

11) If using ASMLib (Linux only), then:

a. oracleasm listdisks
b. oracleasm deletedisks (do this for every disk listed above)
c. oracleasm listdisks (to verify they have been deleted)
d. on other RAC nodes: oracleasm listdisks (to verify they have been deleted too)
e. On all RAC nodes, as root, run:
# /etc/init.d/oracleasm stop
# /etc/init.d/oracleasm disable
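Steps a through c above can be combined into one loop. A preview-only sketch; the disk labels VOL1 and VOL2 are hypothetical, and in practice the list would come from `oracleasm listdisks`:

```shell
# Emit one deletedisks command per ASM disk label. Nothing is deleted
# here; pipe the output to sh (as root) only after reviewing it.
delete_asm_disks() {
  for disk in "$@"; do
    echo "oracleasm deletedisks $disk"
  done
}

# Preview with hypothetical labels; real ones come from `oracleasm listdisks`:
delete_asm_disks VOL1 VOL2
```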

 

12) Delete the ASM pfile or spfile.

 

13) In the file /etc/oratab, remove the line for the ASM instance.

 

14) Clean out the disk headers using the dd command:

For example: dd if=/dev/zero of=/dev/<asm_disk_name> bs=1024k count=50

 

[root@rac2 ~]# dd if=/dev/zero of=/dev/sdd1 bs=1024k count=1

1+0 records in

1+0 records out

1048576 bytes (1.0 MB) copied, 0.026078 seconds, 40.2 MB/s

 

 

三. Cleaning Up After a Failed Clusterware Installation

 

How to Clean Up After a Failed 10g or 11.1 Oracle Clusterware Installation

Not cleaning up a failed CRS install can cause problems like node reboots.
Follow these steps to clean up a failed CRS install:


1. Run the rootdelete.sh script then the rootdeinstall.sh script from the $ORA_CRS_HOME/install directory on any nodes you are removing CRS from. 

       Running these scripts should be sufficient to clean up your CRS install. rootdelete.sh accepts options such as nosharedvar/sharedvar and nosharedhome/sharedhome; familiarize yourself with these options by reading the Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide. If you have any problems with these scripts, please open a service request.

If for some reason you have to manually remove the install due to problems with the scripts, continue to step 2:

   

2. Stop the Nodeapps on all nodes:

          srvctl stop nodeapps -n <node_name>
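This step can be run for every node in one loop; the node names rac1 and rac2 are taken from this post's own environment and are placeholders for yours:

```shell
# Emit the srvctl command for each node. Preview only: pipe the
# output to sh only after reviewing the node list.
stop_nodeapps() {
  for node in "$@"; do
    echo "srvctl stop nodeapps -n $node"
  done
}

stop_nodeapps rac1 rac2
```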

 

3. Prevent CRS from starting when the node boots.  To do this issue the following as root:

 

Sun:

       rm /etc/init.d/init.cssd

       rm /etc/init.d/init.crs

       rm /etc/init.d/init.crsd

       rm /etc/init.d/init.evmd

       rm /etc/rc3.d/K96init.crs

       rm /etc/rc3.d/S96init.crs

        rm -Rf /var/opt/oracle/scls_scr

        rm -Rf /var/opt/oracle/oprocd

       rm /etc/inittab.crs

       cp /etc/inittab.orig /etc/inittab

 

Linux:

        rm /etc/oracle/*

       rm -f /etc/init.d/init.cssd

       rm -f /etc/init.d/init.crs

       rm -f /etc/init.d/init.crsd

       rm -f /etc/init.d/init.evmd

       rm -f /etc/rc2.d/K96init.crs

       rm -f /etc/rc2.d/S96init.crs

       rm -f /etc/rc3.d/K96init.crs

       rm -f /etc/rc3.d/S96init.crs

       rm -f /etc/rc5.d/K96init.crs

       rm -f /etc/rc5.d/S96init.crs

        rm -Rf /etc/oracle/scls_scr

       rm -f /etc/inittab.crs

       cp /etc/inittab.orig /etc/inittab

 

HP-UX:

       rm /sbin/init.d/init.cssd

       rm /sbin/init.d/init.crs

       rm /sbin/init.d/init.crsd

       rm /sbin/init.d/init.evmd

        rm /sbin/rc2.d/K960init.crs

        rm /sbin/rc2.d/K001init.crs

       rm /sbin/rc3.d/K960init.crs

       rm /sbin/rc3.d/S960init.crs

        rm -Rf /var/opt/oracle/scls_scr

        rm -Rf /var/opt/oracle/oprocd

       rm /etc/inittab.crs

       cp /etc/inittab.orig /etc/inittab

 

HP Tru64:

       rm /sbin/init.d/init.cssd

       rm /sbin/init.d/init.crs

       rm /sbin/init.d/init.crsd

       rm /sbin/init.d/init.evmd

       rm /sbin/rc3.d/K96init.crs

       rm /sbin/rc3.d/S96init.crs

        rm -Rf /var/opt/oracle/scls_scr

        rm -Rf /var/opt/oracle/oprocd

       rm /etc/inittab.crs

       cp /etc/inittab.orig /etc/inittab

 

IBM AIX:

       rm /etc/init.cssd

       rm /etc/init.crs

       rm /etc/init.crsd

       rm /etc/init.evmd

       rm /etc/rc.d/rc2.d/K96init.crs

       rm /etc/rc.d/rc2.d/S96init.crs

        rm -Rf /etc/oracle/scls_scr

        rm -Rf /etc/oracle/oprocd

       rm /etc/inittab.crs

       cp /etc/inittab.orig /etc/inittab

 

4. If they are not already down, kill off EVM, CRS, and CSS processes or reboot the node:

       ps -ef | grep crs

       kill <pid>

       ps -ef | grep evm

       kill <pid>

       ps -ef | grep css

       kill <pid>

 

   Do not kill any OS processes, for example the icssvr_daemon process!
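A slightly safer variant of the ps/kill step matches the daemon executables exactly with pgrep, so unrelated processes such as icssvr_daemon are never caught. Preview only; the binary names crsd.bin, evmd.bin, and ocssd.bin are the usual 10g names, but verify them on your own system:

```shell
# Print a kill command for each clusterware daemon still running.
# Nothing is signalled; the commands are only emitted for review.
emit_kill_cmds() {
  for daemon in "$@"; do
    pids=$(pgrep -x "$daemon" || true)   # exact-name match only
    [ -n "$pids" ] && echo "kill -9 $pids"
  done
  return 0
}

emit_kill_cmds crsd.bin evmd.bin ocssd.bin
```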


 

5. If there is no other Oracle software running (listeners, databases, etc.), you can remove the files in /var/tmp/.oracle or /tmp/.oracle. Example:

        rm -f /var/tmp/.oracle/*

        or

        rm -f /tmp/.oracle/*

 

6. Remove the ocr.loc file; it can usually be found in /etc/oracle.

 

7. De-install the CRS home with the Oracle Universal Installer.

 

8. Remove the CRS install location.

 

9. Clean out the OCR and Voting Files with dd commands.  Example:

        dd if=/dev/zero of=/dev/rdsk/V1064_vote_01_20m.dbf bs=1M count=256
        dd if=/dev/zero of=/dev/rdsk/ocrV1064_100m.ora bs=1M count=256

   See the Clusterware Installation Guide for sizing requirements... 

   If you placed the OCR and voting disk on a shared filesystem, remove them.

   If you are removing the RDBMS installation, also clean out any ASM disks if they have already been used.

10. The /tmp/CVU* directories should also be cleaned, to avoid cluvfy misreporting.

11. It is good practice to reboot the node before starting the next install.

12. If you would like to re-install CRS, follow the steps in the RAC Installation manual.



For Oracle 11g, perform the following operations:

Remove the home directories:

rm -rf /u01/app/grid_home

rm -rf /home/oracle

Remove the related files:

rm -rf /tmp/.oracle

rm -rf /var/tmp/.oracle

rm -rf /etc/init/oracle-ohasd.conf

rm -rf /etc/init.d/ohasd

rm -rf /etc/init.d/init.ohasd

rm -rf /etc/oraInst.loc
rm -rf /etc/oratab

rm -rf /etc/oracle

Clear the ASM disk headers:

dd if=/dev/zero of=/dev/sdxx bs=8192 count=128000
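If several disks were used by ASM, the dd wipe can be looped. Preview only; the device names below are placeholders, and since dd destroys the header irrecoverably, double-check every device before running the printed commands:

```shell
# Emit one wipe command per device; nothing is written to disk here.
wipe_disks() {
  for dev in "$@"; do
    echo "dd if=/dev/zero of=$dev bs=8192 count=128000"
  done
}

# Hypothetical device names -- verify against your own ASM disks:
wipe_disks /dev/sdb1 /dev/sdc1
```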

 

 

四. RAC Uninstallation Procedure

 

The three sections above covered how to uninstall each component separately. Now let's look at how to uninstall a successfully installed RAC environment as a whole.

 

 

Uninstallation steps:

       1. Delete the database

       2. Remove the listener configuration with the NETCA tool

       3. Remove the Oracle software

       4. Run the shell scripts provided under the Clusterware directory to undo all the changes Clusterware made to the system

       5. Clear the information in the OCR

       6. Uninstall Clusterware

 

 

4.1 Deleting the Database

      

To uninstall the database cleanly, it is best to keep it open so that DBCA can read the Oracle datafile information and delete the files. Start the DBCA GUI, choose the Oracle Real Application Clusters database option, select Delete a Database, and then click Finish to delete all the database files.

 

The procedure is the same as in section 一 above.

 

4.2 Removing the Listener Configuration with NETCA

      

 

4.3 Removing the Oracle Software

 

Go to the $ORACLE_HOME/oui/bin/ directory and run the runInstaller command. On the Welcome screen, click Installed Products; a dialog pops up showing two entries under Oracle Homes: OraCrs10g_home and OraDb10g_home.

 

Be sure to uninstall OraDb10g_home first. The shell scripts under the Clusterware directory still need to be run to undo the changes Clusterware made to the operating system, so the removal of OraCrs10g_home is deferred to a later step.

 

Select OraDb10g_home and click Remove to uninstall it.

 

4.4 Undoing Clusterware's System Changes with the Provided Shell Scripts

 

As root, run the following scripts:

On the local node:
$ORA_CRS_HOME/install/rootdelete.sh local nosharedvar nosharedhome

On the remote nodes:
$ORA_CRS_HOME/install/rootdelete.sh remote nosharedvar nosharedhome

 

When running them, you can add the -force flag.

 

Note: these scripts must be run one node at a time, never in parallel, just like running root.sh during installation!

 

On node rac1:

 

[root@rac1 ~]# cd /u01/app/oracle/product/crs/install

[root@rac1 install]# ./rootdelete.sh local nosharedvar nosharedhome

CRS-0210: Could not find resource 'ora.rac1.LISTENER_RAC1.lsnr'.

Shutting down Oracle Cluster Ready Services (CRS):

Sep 17 13:27:28.917 | INF | daemon shutting down

Stopping resources. This could take several minutes.

Successfully stopped CRS resources.

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

Shutdown has begun. The daemons should exit soon.

Checking to see if Oracle CRS stack is down...

Oracle CRS stack is not running.

Oracle CRS stack is down now.

Removing script for Oracle Cluster Ready services

Updating ocr file for downgrade

Cleaning up SCR settings in '/etc/oracle/scls_scr'

 

On node rac2:

 

[root@rac2 ~]#  cd /u01/app/oracle/product/crs/install

[root@rac2 install]# ./rootdelete.sh remote nosharedvar nosharedhome

CRS-0210: Could not find resource 'ora.rac2.LISTENER_RAC2.lsnr'.

Shutting down Oracle Cluster Ready Services (CRS):

Sep 17 13:29:48.144 | INF | daemon shutting down

Stopping resources. This could take several minutes.

Successfully stopped CRS resources.

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

Shutdown has begun. The daemons should exit soon.

Checking to see if Oracle CRS stack is down...

Oracle CRS stack is not running.

Oracle CRS stack is down now.

Removing script for Oracle Cluster Ready services

Updating ocr file for downgrade

Cleaning up SCR settings in '/etc/oracle/scls_scr'

 

4.5 Clearing the OCR

Simply run the following command as root on the local node:

$ORA_CRS_HOME/install/rootdeinstall.sh

 

Running it on one node is sufficient:

[root@rac1 install]# ./rootdeinstall.sh

Removing contents from OCR mirror device

2560+0 records in

2560+0 records out

10485760 bytes (10 MB) copied, 0.774432 seconds, 13.5 MB/s

Removing contents from OCR device

2560+0 records in

2560+0 records out

10485760 bytes (10 MB) copied, 1.36228 seconds, 7.7 MB/s

 

4.6 Uninstalling Clusterware

 

Go to the $ORA_CRS_HOME/oui/bin/ directory and run the runInstaller command. On the Welcome screen, click Installed Products, select OraCrs10g_home in the dialog, and then click Remove.

 

Remove the Oracle information under the /var/opt directory and the ORACLE_BASE directory:

 

# rm -rf /data/oracle

# rm -rf /var/opt/oracle

 

Remove the settings under the /usr/local/bin directory:

 

# rm /usr/local/bin/dbhome

# rm /usr/local/bin/oraenv

# rm /usr/local/bin/coraenv

 

Use operating-system commands to check whether any Oracle settings remain on the system:

 

 

# find / -name oracle

 

 

This completes the cleanup of the entire Oracle database and RAC environment; Clusterware and RAC can now be reinstalled. The raw devices were not wiped here, because the Clusterware installer formats them automatically during installation.

 

 

 

 

------------------------------------------------------------------------------

Blog http://blog.csdn.net/tianlesoftware

Online resources: http://tianlesoftware.download.csdn.net

Related videos: http://blog.csdn.net/tianlesoftware/archive/2009/11/27/4886500.aspx


 

posted @ 2010-09-18 02:58 davedba