【Environment Setup and Configuration】03 - Building an Oracle 11gR2 Single Instance on ASM
0 Overview
Compared with a plain single-instance installation, an ASM-based single-instance installation of Oracle adds an ASM disk-group management layer.
Specifically, this involves:
adding and configuring the grid user
adding shared disks
installing the clusterware under the grid user to manage the shared disk groups
Then, when installing the single instance as the oracle user, choose ASM as the storage type and point it at the disk groups you created.
In fact, add mutual SSH trust between two nodes and you have a RAC; install a Data Guard standby, do a little configuration between the RAC primary and the DG database, and a basic RAC + DG high-availability Oracle setup falls out. We may try that later.
For a single instance on ASM, the ASM instance that manages the disk groups under the grid user must be running before the database instance owned by the oracle user can start. The listener also runs under the grid user.
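As a quick sanity check of that start-up dependency, you can confirm from the grid account that the ASM instance is up before starting the database. This is only a sketch; it assumes the default ASM SID (+ASM), whose pmon background process is named asm_pmon_+ASM:

```shell
# Sketch: confirm the ASM instance is running before starting the database.
# Assumes the default ASM SID (+ASM); its pmon process is named asm_pmon_+ASM.
is_asm_up() {
  # reads a `ps -ef`-style listing on stdin and looks for the ASM pmon process
  grep -q 'asm_pmon_+ASM'
}

if ps -ef | is_asm_up; then
  echo "ASM instance is running - safe to start the database instance"
else
  echo "ASM instance is not running - start the grid stack first"
fi
```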
1 Add the groups and users and set environment variables
1.1 Create the groups and users
groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba
groupadd -g 1301 oper
useradd -m -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -s /bin/bash grid
useradd -m -u 1101 -g oinstall -G dba,oper,asmdba -d /home/oracle -s /bin/bash oracle
1.2 Verify the user information (note: the numeric IDs in the capture below come from a host where the oracle account already existed; on a fresh install they match the useradd commands above)
[root@rac1 ~]# id oracle
uid=502(oracle) gid=507(oinstall) groups=507(oinstall),502(dba),503(oper),506(asmdba)
[root@rac1 ~]# id grid
uid=1100(grid) gid=507(oinstall) groups=507(oinstall),504(asmadmin),506(asmdba),505(asmoper)
1.3 Set the passwords
passwd oracle
passwd grid
2 Create the directory structure
mkdir -p /u01/app/grid/11.2.0
mkdir -p /u01/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
3 Configure environment variables
3.1 The grid user
Edit the grid user's .bash_profile. Note the parts that differ on each node:
export ORACLE_SID=+ASM
export ORACLE_BASE=/u01/grid
export ORACLE_HOME=/u01/app/grid/11.2.0
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
Pay attention to this directory layout; it will come up again as a permissions issue.
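A quick way to confirm that the ownership actually matches what the chown commands in section 2 set up. This is a sketch; owner_of is a throwaway helper name of mine, not standard tooling:

```shell
# Sketch: verify the ownership that the chown commands above should have produced.
owner_of() {
  # take the owner field of `ls -ld` output
  ls -ld "$1" | awk '{print $3}'
}

# /u01, /u01/grid and the grid home should belong to grid;
# /u01/app/oracle should belong to oracle.
for d in /u01 /u01/grid /u01/app/grid/11.2.0 /u01/app/oracle; do
  if [ -d "$d" ]; then
    echo "$d owner: $(owner_of "$d")"
  fi
done
```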
3.2 The oracle user
Edit the oracle user's .bash_profile; note the parts that differ on each node:
ORACLE_SID=dave; export ORACLE_SID
ORACLE_UNQNAME=dave; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/11.2.0/db_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=american_america.ZHS16GBK; export NLS_LANG
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
4 Create fifteen 3 GB shared devices and bind them as ASM devices with UDEV
The disks are created directly with VBoxManage commands. Shut the virtual machine down before creating them.
4.1 Create the shared-disk directory sharedisk
I:\VBox\sharedisk
4.2 Create the virtual media
C:\Users\Dave>cd I:\VBox\sharedisk
C:\Users\Dave>I:
I:\VBox\sharedisk>
VBoxManage createhd --filename asm01.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm02.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm03.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm04.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm05.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm06.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm07.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm08.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm09.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm10.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm11.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm12.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm13.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm14.vdi --size 3072 --format VDI --variant Fixed
VBoxManage createhd --filename asm15.vdi --size 3072 --format VDI --variant Fixed
I:\VBox\sharedisk>VBoxManage createhd --filename asm01.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: e9df898c-d527-4859-82fc-9c7240405b73
I:\VBox\sharedisk>VBoxManage createhd --filename asm02.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 903aa7fa-7ca1-4063-9bc1-26b2f18692a1
I:\VBox\sharedisk>VBoxManage createhd --filename asm03.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: a869eb45-792c-41dd-9650-4fdfb98b9cb2
I:\VBox\sharedisk>VBoxManage createhd --filename asm04.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 73166656-1dae-441c-9be1-636ad265c47c
I:\VBox\sharedisk>VBoxManage createhd --filename asm05.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 49af6ff8-7560-4f36-bd8f-a8161b85fb6e
I:\VBox\sharedisk>VBoxManage createhd --filename asm06.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: d09a3069-b793-4704-ac21-0e61b92bd8a8
I:\VBox\sharedisk>VBoxManage createhd --filename asm07.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: ea49e0a7-5f31-4d3f-a334-d2fdf3b17893
I:\VBox\sharedisk>VBoxManage createhd --filename asm08.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 16eeb940-3356-4fd9-9400-1f0d3e7687e5
I:\VBox\sharedisk>VBoxManage createhd --filename asm09.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: 1e7df866-554d-42c6-b9e9-8df26179bfd4
I:\VBox\sharedisk>VBoxManage createhd --filename asm10.vdi --size 3072 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Disk image created. UUID: d341c861-aa1e-4654-a5f9-a5cb5a555766
I:\VBox\sharedisk>
Note the UUIDs here; we will run into these IDs again later.
4.3 Attach the virtual media to the virtual machine
Shut the VM down, then run the following.
C:\Users\Dave>cd I:\vbox\sharedisk
C:\Users\Dave>I:
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 1 --device 0 --type hdd --medium asm01.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 2 --device 0 --type hdd --medium asm02.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 3 --device 0 --type hdd --medium asm03.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium asm04.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 5 --device 0 --type hdd --medium asm05.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 6 --device 0 --type hdd --medium asm06.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 7 --device 0 --type hdd --medium asm07.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 8 --device 0 --type hdd --medium asm08.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 9 --device 0 --type hdd --medium asm09.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 10 --device 0 --type hdd --medium asm10.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 11 --device 0 --type hdd --medium asm11.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 12 --device 0 --type hdd --medium asm12.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 13 --device 0 --type hdd --medium asm13.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 14 --device 0 --type hdd --medium asm14.vdi --mtype shareable
VBoxManage storageattach ora11gAsm --storagectl "SATA 控制器" --port 15 --device 0 --type hdd --medium asm15.vdi --mtype shareable
Notes:
(1) Controller name
I wrote "SATA 控制器" here because my VirtualBox UI is displayed in Chinese.
If the UI is in English, the name must be written as "SATA Controller", e.g.:
F:\VBox\sharedisk>VBoxManage storageattach OraLinuxRAC1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium asm04.vdi --mtype shareable
(2) The name after storageattach is the VM name as shown in the VirtualBox manager; mine here is OraLinuxRAC1.
(3) To attach the disks to another node, just change the VM name, e.g.:
F:\VBox\sharedisk>VBoxManage storageattach OraLinuxRAC2 --storagectl "SATA 控制器" --port 4 --device 0 --type hdd --medium asm04.vdi --mtype shareable
4.4 Mark the virtual media shareable
This can also be done in the VirtualBox GUI; here we use the command line.
The setting is recorded in each virtual disk file's header, so other VMs that attach the disk later do not need to mark it shareable again.
Strictly speaking, this step is not needed for an ASM-based single instance; it is included here to demonstrate the procedure for RAC.
C:\Users\Dave>cd I:\vbox\sharedisk
C:\Users\Dave>I:
VBoxManage modifyhd asm01.vdi --type shareable
VBoxManage modifyhd asm02.vdi --type shareable
VBoxManage modifyhd asm03.vdi --type shareable
VBoxManage modifyhd asm04.vdi --type shareable
VBoxManage modifyhd asm05.vdi --type shareable
VBoxManage modifyhd asm06.vdi --type shareable
VBoxManage modifyhd asm07.vdi --type shareable
VBoxManage modifyhd asm08.vdi --type shareable
VBoxManage modifyhd asm09.vdi --type shareable
VBoxManage modifyhd asm10.vdi --type shareable
VBoxManage modifyhd asm11.vdi --type shareable
VBoxManage modifyhd asm12.vdi --type shareable
VBoxManage modifyhd asm13.vdi --type shareable
VBoxManage modifyhd asm14.vdi --type shareable
VBoxManage modifyhd asm15.vdi --type shareable
=============================================================================
When I built these shared disks, the command line would not work on my machine (or I could not find the right place to run it), so I used the GUI instead.
step 1: add the disks to the VM
Edit the VM in the VirtualBox manager: Settings --> Storage --> Add hard disk --> Create new disk
choose the disk options
step 2: mark the disks shareable
File --> Virtual Media Manager
change each disk's attribute to Shareable
Sometimes this fails with an error and the attribute cannot be changed; in that case, delete the disk and create it again.
=============================================================================
4.5 Get the SCSI IDs for the UDEV binding
If you use a VMware virtual machine, running scsi_id directly may return no ID. You then have to find the VM's configuration file and append the line disk.EnableUUID="TRUE" at the end; be sure to edit this file only while the VM is powered off.
[root@asm ~]# /sbin/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VBe9df898c-735b4040
Generate the binding rules with the following script:
for i in b c d e f g h i j k l m n o p;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u /dev/\$name\", RESULT==\"`/sbin/scsi_id -g -u /dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
[root@asm rules.d]# for i in b c d e f g h i j k;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u /dev/\$name\", RESULT==\"`/sbin/scsi_id -g -u /dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe9df898c-735b4040", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB903aa7fa-a19286f1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa869eb45-b29c8bb9", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB73166656-7cc465d2", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB49af6ff8-6efb851b", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBd09a3069-a8d82bb9", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBea49e0a7-9378b1f3", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB16eeb940-e587763e", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1e7df866-d4bf7961", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBd341c861-6657555a", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@asm rules.d]#
4.6 Create and populate the UDEV rules file
Paste the output above into the udev rules file. Note that each udev binding statement is one single line; it must not be wrapped.
[root@asm ~]# touch /etc/udev/rules.d/99-oracle-asmdevices.rules
Add the following content:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBe9df898c-735b4040", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB903aa7fa-a19286f1", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBa869eb45-b29c8bb9", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB73166656-7cc465d2", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB49af6ff8-6efb851b", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBd09a3069-a8d82bb9", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBea49e0a7-9378b1f3", NAME="asm-diskh", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB16eeb940-e587763e", NAME="asm-diski", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB1e7df866-d4bf7961", NAME="asm-diskj", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBd341c861-6657555a", NAME="asm-diskk", OWNER="grid", GROUP="asmadmin", MODE="0660"
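Since each rule must stay on one unwrapped line, it may be safer to generate the file directly instead of pasting terminal output. A sketch; make_rule is my own helper name, not part of any Oracle or udev tooling, and it emits exactly the same KERNEL==…/RESULT==… lines as above:

```shell
# Sketch: generate the one-line udev rules directly, so no manual pasting
# (and no accidental line wrapping) is involved.
RULES=/etc/udev/rules.d/99-oracle-asmdevices.rules

make_rule() {   # make_rule <disk letter> <scsi id>
  printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u /dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$2" "$1"
}

# On the real host (as root), regenerate the whole file in one go:
# for i in b c d e f g h i j k l m n o p; do
#   make_rule "$i" "$(/sbin/scsi_id -g -u /dev/sd$i)"
# done > "$RULES"
```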
4.7 Restart UDEV
[root@asm rules.d]# start_udev
Starting udev: [ OK ]
4.8 Check the ownership and permissions of the shared devices
[root@asm ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 17 May 13 19:36 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 May 13 19:36 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 May 13 19:36 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 May 13 19:36 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 May 13 19:36 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 May 13 19:36 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 112 May 13 19:36 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 128 May 13 19:36 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 144 May 13 19:36 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 160 May 13 19:36 /dev/asm-diskk
brw-rw---- 1 grid asmadmin 8, 176 May 13 19:36 /dev/asm-diskl
brw-rw---- 1 grid asmadmin 8, 192 May 13 19:36 /dev/asm-diskm
brw-rw---- 1 grid asmadmin 8, 208 May 13 19:36 /dev/asm-diskn
brw-rw---- 1 grid asmadmin 8, 224 May 13 19:36 /dev/asm-disko
brw-rw---- 1 grid asmadmin 8, 240 May 13 19:36 /dev/asm-diskp
[root@asm ~]#
That completes the device setup. In the ASM configuration, simply point the ASM_DISKSTRING parameter at '/dev/asm-disk*'.
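Before pointing ASM at the devices, you can count what the discovery string would match. A sketch; count_matches is a throwaway helper of mine, and the ALTER SYSTEM line assumes you are connected to the ASM instance with SYSASM privileges:

```shell
# Sketch: count the devices the '/dev/asm-disk*' discovery string would match.
# Pass the glob quoted so the shell expands it inside the function.
count_matches() {
  n=0
  for f in $1; do
    [ -e "$f" ] && n=$((n + 1))
  done
  echo "$n"
}

echo "devices matched by /dev/asm-disk*: $(count_matches '/dev/asm-disk*')"
# on the host built above this should report 15; then, in the ASM instance:
#   ALTER SYSTEM SET asm_diskstring = '/dev/asm-disk*';
```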
5 Disable the firewall and SELinux
Stop the firewall:
service iptables status
service iptables stop
chkconfig iptables off
chkconfig iptables --list
Edit the /etc/selinux/config file and set SELINUX to disabled.
=============================================================================
Check the SELinux status:
1. /usr/sbin/sestatus -v    ## SELinux is on if "SELinux status" reports enabled
SELinux status: enabled
2. getenforce    ## this command works as a check too
Disable SELinux:
1. Temporarily (no reboot needed):
setenforce 0    ## puts SELinux into permissive mode
## setenforce 1 puts SELinux back into enforcing mode
2. Permanently, by editing the configuration file (requires a reboot):
edit /etc/selinux/config
change SELINUX=enforcing to SELINUX=disabled
then reboot the machine
=============================================================================
6 Set resource limits for the installation users
6.1 Edit /etc/security/limits.conf
As root, add the following content to the /etc/security/limits.conf file on the node:
[root@rac1 ~]# vim /etc/security/limits.conf
Append this section:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
6.2 Edit /etc/pam.d/login
Add or edit the following line in the /etc/pam.d/login file:
[root@rac1 ~]# vim /etc/pam.d/login
session required pam_limits.so
6.3 Shell limits
Make the following change to the default shell startup file so the ulimit settings apply to all Oracle installation owners:
[root@rac1 ~]# vim /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
6.4 Edit /etc/sysctl.conf
#vim /etc/sysctl.conf
The first two parameters are already present in the file; just confirm their values are at least as large as those listed below.
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
Make the changed parameters take effect:
[root@rac1 ~]# sysctl -p
These values can be checked against the official installation documentation.
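To confirm the live values are at least the minimums listed above, a small comparison sketch; meets_minimum and check are helper names of mine, and `sysctl -n` is the standard way to read a single parameter:

```shell
# Sketch: compare current kernel parameters with Oracle's minimums.
meets_minimum() {   # meets_minimum <current> <required>
  [ "$1" -ge "$2" ]
}

check() {
  param=$1; required=$2
  # kernel.sem is multi-valued, so only single-valued parameters are checked here
  current=$(sysctl -n "$param" 2>/dev/null | awk '{print $1; exit}')
  if [ -n "$current" ] && meets_minimum "$current" "$required"; then
    echo "OK    $param = $current (>= $required)"
  else
    echo "CHECK $param (want >= $required, have ${current:-unknown})"
  fi
}

check kernel.shmmni  4096
check fs.file-max    6815744
check fs.aio-max-nr  1048576
```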
7 Install the Clusterware
Before installing, confirm the /etc/hosts configuration:
[root@asm]# cat /etc/hosts
127.0.0.1 localhost asm
=============================================================================
I used this form:
192.168.56.101 xkanasm
=============================================================================
Otherwise the installer reports an error.
Run the installation program as the grid user: runInstaller
Everything else is click-through; there should be no problems.
At the end, the installer asks us to run a root.sh script.
[root@asm /]# sh /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@asm /]# sh /u01/app/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
-- if this is a stand-alone server, run this script:
(the command below is one single line)
/u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/roothas.pl
To configure Grid Infrastructure for a Cluster execute the following command:
-- if this is a cluster, run this one instead:
/u01/app/grid/11.2.0/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the
parameters can be passed through the response file that is available in the installation media.
[root@asm /]#
Note: the output above offers both scripts; since this is a single node, we run the stand-alone one:
[root@asm /]# /u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/roothas.pl
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm successfully pinned.
Adding Clusterware entries to upstart
asm     2013/05/06 19:40:58     /u01/app/grid/11.2.0/cdata/asm/backup_20130506_194058.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
[root@asm /]#
Oracle Restart can only be used in a non-cluster environment; in an Oracle RAC environment, Oracle Clusterware itself provides the automatic-restart capability.
For a non-cluster environment, install only the Oracle Grid Infrastructure, choose "Install Grid Infrastructure Software Only" during the installation, and then run the following script to set up Oracle Restart:
$GRID_HOME/crs/install/roothas.pl
How to recover when root.sh fails:
"libcap.so.1: cannot open shared object file" occurs while running root.sh during an 11g GI installation
=============================================================================
I hit this problem in the single-node script that root.sh asks you to run; the fix is the same.
【Environment for server and database】
【Problem description】
At the last step of installing the 11g GI software, running root.sh fails because a shared library cannot be loaded:
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /home/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /home/grid/product/11.2.0/grid/crs/install/crsconfig_params
/home/grid/product/11.2.0/grid/bin/crsctl query crs activeversion ... failed rc=127 with message:
/home/grid/product/11.2.0/grid/bin/crsctl.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory
Improper Oracle Grid Infrastructure configuration found on this host
Deconfigure the existing cluster configuration before starting to configure a new Grid Infrastructure
run '/home/grid/product/11.2.0/grid/crs/install/roothas.pl -deconfig' to configure existing failed configuration and then rerun root.sh
/home/grid/product/11.2.0/grid/perl/bin/perl -I/home/grid/product/11.2.0/grid/perl/lib -I /home/grid/product/11.2.0/grid/crs/install /home/grid/product/11.2.0/grid/crs/install/roothas.pl execution failed
【Troubleshooting】
1. Check whether the package is installed and the library files exist:
shadcdlora08:~ # rpm -qa | grep libcap
libcap2-32bit-2.11-2.17.1
libcap2-2.11-2.17.1
2. The libcap package is clearly installed, so why does Oracle not find it? Keep looking:
shadcdlora08:/ # find / -name libcap*
/lib/libcap.so.2
/lib/libcap.so.2.11
/lib64/libcap.so.2
/lib64/libcap.so.2.11
/lib64/libcap.so.1.10
[Note] libcap.so.1 clearly does not exist.
3. The error says libcap.so.1 is missing, so create a symbolic link for it:
shadcdlora08:~ # cd /lib64
shadcdlora08:~ # ln -s libcap.so.1.10 libcap.so.1
shadcdlora08# ls -al libcap*
lrwxrwxrwx 1 root root 14 Aug 1 17:22 libcap.so.1 -> libcap.so.1.10
-rwxr-xr-x 1 root root 14792 Feb 21 2009 libcap.so.1.10
lrwxrwxrwx 1 root root 14 Jul 17 23:53 libcap.so.2 -> libcap.so.2.11
-rwxr-xr-x 1 root root 19016 Nov 5 2011 libcap.so.2.11
4. Force a deconfigure of the 11g GI OHAS stack:
(for roothas.pl -deconfig, see http://blog.csdn.NET/wengtf/article/details/11484447)
# /home/grid/product/11.2.0/grid/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /home/grid/product/11.2.0/grid/crs/install/crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Failure in execution (rc=-1, 0, No such file or directory) for command /etc/init.d/ohasd deinstall
Successfully deconfigured Oracle Restart stack
5. Rerun root.sh:
[root@stb CVU_11.2.0.3.0_grid]# /home/grid/product/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /home/grid/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /home/grid/product/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node stb successfully pinned.
Adding Clusterware entries to upstart
stb     2013/04/24 16:36:29     /home/grid/product/11.2.0/grid/cdata/stb/backup_20130424_163629.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Finished!
=============================================================================
-- Check the CRS status:
[grid@asm /]$ crs_stat -t
8 Install the Oracle software
Run the installation program as the oracle user.
Note: choose to install the database software only.
Everything else is click-through.
Run the script:
[root@asm software]# /u01/app/oracle/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@asm software]#
=============================================================================
For this part, which adds the grid user to the dba group, I checked and mine was already in it, so I did not run it:
[root@asm software]# usermod -a -G dba grid
[root@asm software]# id grid
uid=1100(grid) gid=507(oinstall) groups=507(oinstall),502(dba),504(asmadmin),506(asmdba),505(asmoper)
-a means append; be sure to include it.
=============================================================================
=============================================================================
Two errors came up during the final installation checks.
One was an NTP clock error; we are not using NTP here, so it can be ignored.
The other asked for the ksh package; from what I found online, it is a spurious Oracle check and can also be ignored.
After ignoring both, the installation succeeded.
=============================================================================
9 Create the ASM disk groups
Run asmca as the grid user. The Oracle documentation recommends creating two disk groups: one DATA and one FRA.
We create them in the ASMCA GUI.
Add the FRA disk group the same way.
Both disk groups use Normal redundancy, which keeps a 2-way mirror; in other words, only half of the raw disk space is usable.
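The halving is easy to sanity-check with a little arithmetic. A sketch; usable_normal_mb is my helper name, and the figure ignores ASM metadata overhead, so treat it as an upper bound:

```shell
# Sketch of the capacity arithmetic: NORMAL redundancy writes every extent
# twice, so usable space is roughly total/2 (ASM metadata overhead ignored).
usable_normal_mb() {   # usable_normal_mb <total raw MB>
  echo $(( $1 / 2 ))
}

# e.g. five of the 3 GB disks in DATA: 5 * 3072 = 15360 MB raw
echo "DATA usable (approx): $(usable_normal_mb 15360) MB"   # prints 7680
```

On a live system, V$ASM_DISKGROUP reports the actual TOTAL_MB and FREE_MB per disk group.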
10 Create the ASM-based database instance
First run netmgr to create the database listener, then run dbca as the oracle user to create the instance.
After that it is click-through all the way, and you are done.
-- Verify:
[oracle@asm ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Mon May 6 20:42:00 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
11 Summary
The ASM-based single instance we created here is managed by the clusterware, so operating this single instance is exactly the same as operating a RAC.