[Troubleshooting] A RAC Failure Recovery, Start to Finish
Item                    Source DB
DB type                 2-node RAC
DB version              11.2.0.1.0
DB storage              ASM
OS version / kernel     RHEL 6.6
A little after 10 p.m., a friend online pinged me for help with a RAC cluster that had gone down and would not start, warning up front that multipathing and the storage array were involved. 小麦苗 has never been strong on storage, and multipathing is something I had barely touched, let alone studied. But since he had come to me, I could hardly say no, so I gritted my teeth and logged on. It turned into an ordeal: I worked at it for hours, asked several people for help, and only got everything sorted around noon the next day. Luckily the next day was a weekend, so no work. I am writing the whole process down here in the hope that it helps more people.
When I first logged on, CSS on node 1 would not start and was throwing a pile of errors, and the HA stack on node 2 would not come up properly either. I forgot to save the error output; suffice it to say I went through every log, searched MOS, Baidu, and Google, and even tried restoring the OCR, all to no avail. In the end there was nothing left but my usual trump card: re-running the root.sh script.
I have covered re-running this script several times on my blog, but it still deserves practice, because there are many caveats. First, if you want the disk groups to survive, the deconfig command ($ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose) can take a -keepdg option, but 11.2.0.1 does not have it. Second, when deconfiguring the second node you can leave off -lastnode, which preserves as much information as possible.
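For reference, a minimal sketch of the deconfig calls (run as root; the -keepdg variant is shown commented out since it only exists in later 11.2 releases, not in this post's 11.2.0.1):
export ORACLE_HOME=/u01/app/11.2.0/grid
# Standard deconfig, as used in this recovery:
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
# Later 11.2 releases also accept -keepdg to preserve the disk groups:
# $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg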
Luckily, after my first re-run the cluster started normally and all seemed well; it was about 1 a.m. by then, three hours in. But when I went to import the OCR backup, which requires starting CRS in exclusive mode (crsctl start crs -excl), disaster struck again and the cluster broke. Nothing for it but a reboot, and the reboot made things worse: the OCR disks could no longer be found. I wanted to give up. With the disks gone there was nothing more I could do, so the only option was to find someone who understood storage. It was nearly 2 a.m. Fine, time to rest.
A little after 8 the next morning I logged straight into TeamViewer and carried on. First I spent a long while wrestling with multipathing: it turned out the multipath software on node 2 was broken, so I reinstalled it myself. I hoped the disks would be visible after the reinstall, but still nothing. With no better option, I went looking for a storage expert in leshami's group. 肖总 (Mr. Xiao) kindly logged on, fixed the storage, and the disks came back. Many, many thanks.
Then the recovery continued: deconfig again, then root.sh. After root.sh completed the cluster was healthy, and a test reboot of the host went cleanly, so the storage had indeed been the culprit. Next came restoring the database, which was the real point of the exercise. Throughout all of this I had been extremely careful never to touch any non-OCR disk, for fear of losing data: the 10 TB of data had no backup of any kind, which left me speechless. A look with kfod showed all the disks were fine, so the next step was simply to mount the disk groups. After re-running root.sh, as long as the files on a disk group's disks are undamaged, the group can be mounted directly. This is also one way to recover the OCR when no backup exists.
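As a sketch of that last step, remounting the data disk groups might look like this (the names DATA and FLASH come from the kfod listing below; assumes the grid user's environment points at the ASM instance):
# Mount the disk groups left untouched by the reinstall, then confirm their state.
sqlplus / as sysasm <<'EOF'
alter diskgroup data mount;
alter diskgroup flash mount;
select name, state from v$asm_diskgroup;
EOF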
Everything after that went smoothly: configuring the listener, adding the database back into srvctl management, and so on. Buddha be praised. Much of the session output was never saved, so all I can give below are the scripts.
The thing that demands the most attention before re-running root.sh is whether database data sits in the OCR disk group. If it does, do not run the script lightly.
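As a hedged pre-check, run while the database can still be mounted (the +OCR prefix matches the disk group name in this environment; adapt it to yours):
# Any row returned means datafiles live in the OCR disk group: do not re-run root.sh.
sqlplus -s / as sysdba <<'EOF'
select name from v$datafile where name like '+OCR/%';
EOF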
1. Run the deconfig on both nodes:
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
2. Once that completes, wipe the OCR disks with dd, again on both nodes (a quick verification sketch follows the commands):
dd if=/dev/zero of=/dev/oracleasm/disks/OCR_VOL2 bs=1024k count=1024
dd if=/dev/zero of=/dev/oracleasm/disks/OCR_VOL1 bs=1024k count=1024
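To verify the wipe, kfod should now report both disks as CANDIDATE rather than MEMBER; a minimal check:
export ORACLE_HOME=/u01/app/11.2.0/grid
# OCR_VOL1 and OCR_VOL2 should show a CANDIDATE header after the dd.
$ORACLE_HOME/bin/kfod disk=all s=true ds=true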
3. Run root.sh on node 1 first, and only after it finishes, on node 2:
$ORACLE_HOME/root.sh
Note also that running root.sh on 11.2.0.1 commonly hits a known bug:
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start: Inappropriate ioctl for device at /u01/app/11.2.0/grid/crs/install/roothas.pl line 296.
The workaround is to run the following command from a second session while root.sh is executing:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
If it fails with
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
the pipe simply has not been created yet; keep retrying until the dd goes through. It usually succeeds right after root.sh prints "Adding daemon to inittab".
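Since the window is easy to miss by hand, a small loop can win the race; a sketch to run as root in that second session:
# Retry until the npohasd pipe exists and the read succeeds, then stop.
until /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 2>/dev/null; do
    sleep 1
done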
Some of root.sh's configuration, including the name of the OCR disk group and the disk paths, lives in the following script:
$ORACLE_HOME/crs/config/config.sh
The following command displays all of the disk information:
data01->export ORACLE_HOME=/u01/app/11.2.0/grid
data01->$ORACLE_HOME/bin/kfod disk=all s=true ds=true c=true
--------------------------------------------------------------------------------
 Disk          Size Header    Path                              Disk Group   User      Group
================================================================================
   1:     476837 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL1    DATA         grid      asmadmin
   2:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL10   DATA         grid      asmadmin
   3:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL11   DATA         grid      asmadmin
   4:     953675 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL12   DATA         grid      asmadmin
   5:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL13   DATA         grid      asmadmin
   6:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL14   DATA         grid      asmadmin
   7:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL15   DATA         grid      asmadmin
   8:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL16   DATA         grid      asmadmin
   9:     953675 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL18   DATA         grid      asmadmin
  10:     953675 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL2    DATA         grid      asmadmin
  11:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL3    DATA         grid      asmadmin
  12:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL4    DATA         grid      asmadmin
  13:     953675 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL5    DATA         grid      asmadmin
  14:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL6    DATA         grid      asmadmin
  15:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL7    DATA         grid      asmadmin
  16:     953674 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL8    DATA         grid      asmadmin
  17:     953675 Mb MEMBER    /dev/oracleasm/disks/DATA_VOL9    DATA         grid      asmadmin
  18:     476837 Mb MEMBER    /dev/oracleasm/disks/FLASH_VOL1   FLASH        grid      asmadmin
  19:     286103 Mb MEMBER    /dev/oracleasm/disks/FLASH_VOL2   FLASH        grid      asmadmin
  20:     286057 Mb MEMBER    /dev/oracleasm/disks/OCR_VOL1     OCR          grid      asmadmin
  21:     286102 Mb CANDIDATE /dev/oracleasm/disks/OCR_VOL2     #            grid      asmadmin
  22:     476837 Mb MEMBER    ORCL:DATA_VOL1                    DATA         <unknown> <unknown>
  23:     953674 Mb MEMBER    ORCL:DATA_VOL10                   DATA         <unknown> <unknown>
  24:     953674 Mb MEMBER    ORCL:DATA_VOL11                   DATA         <unknown> <unknown>
  25:     953675 Mb MEMBER    ORCL:DATA_VOL12                   DATA         <unknown> <unknown>
  26:     953674 Mb MEMBER    ORCL:DATA_VOL13                   DATA         <unknown> <unknown>
  27:     953674 Mb MEMBER    ORCL:DATA_VOL14                   DATA         <unknown> <unknown>
  28:     953674 Mb MEMBER    ORCL:DATA_VOL15                   DATA         <unknown> <unknown>
  29:     953674 Mb MEMBER    ORCL:DATA_VOL16                   DATA         <unknown> <unknown>
  30:     953675 Mb MEMBER    ORCL:DATA_VOL18                   DATA         <unknown> <unknown>
  31:     953675 Mb MEMBER    ORCL:DATA_VOL2                    DATA         <unknown> <unknown>
  32:     953674 Mb MEMBER    ORCL:DATA_VOL3                    DATA         <unknown> <unknown>
  33:     953674 Mb MEMBER    ORCL:DATA_VOL4                    DATA         <unknown> <unknown>
  34:     953675 Mb MEMBER    ORCL:DATA_VOL5                    DATA         <unknown> <unknown>
  35:     953674 Mb MEMBER    ORCL:DATA_VOL6                    DATA         <unknown> <unknown>
  36:     953674 Mb MEMBER    ORCL:DATA_VOL7                    DATA         <unknown> <unknown>
  37:     953674 Mb MEMBER    ORCL:DATA_VOL8                    DATA         <unknown> <unknown>
  38:     953675 Mb MEMBER    ORCL:DATA_VOL9                    DATA         <unknown> <unknown>
  39:     476837 Mb MEMBER    ORCL:FLASH_VOL1                   FLASH        <unknown> <unknown>
  40:     286103 Mb MEMBER    ORCL:FLASH_VOL2                   FLASH        <unknown> <unknown>
  41:     286057 Mb MEMBER    ORCL:OCR_VOL1                     OCR          <unknown> <unknown>
  42:     286102 Mb CANDIDATE ORCL:OCR_VOL2                     #            <unknown> <unknown>
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME          HOST_NAME
================================================================================
     +ASM1 /u01/app/11.2.0/grid data01
     +ASM2 /u01/app/11.2.0/grid data02
data01->
data01->sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Sat Dec 10 12:27:25 2016

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>
SQL> alter diskgroup ocr add disk '/dev/oracleasm/disks/OCR_VOL2';

Diskgroup altered.
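Adding the disk kicks off a rebalance; a quick follow-up check (no rows returned means the rebalance has finished):
# Run as the grid user against the ASM instance.
sqlplus -s / as sysasm <<'EOF'
select group_number, operation, state, est_minutes from v$asm_operation;
EOF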
11.2.0.1 does not have the -c parameter, so drop it; use -h to see the exact usage:
srvctl add database -d dgphy -c rac -o /oracle/app/oracle/product/11.2.0/db -p '+data/testdgphy/parameterfile/spfiledgphy.ora' -r primary -n testdg
srvctl add instance -d dgphy -i dgphy1 -n zfzhlhrdb1
srvctl add instance -d dgphy -i dgphy2 -n zfzhlhrdb2
srvctl status database -d dgphy
srvctl start database -d dgphy
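Finally, a hedged sanity check that the re-registration took, reusing the dgphy names above:
# Show the stored configuration and the state of both instances.
srvctl config database -d dgphy
srvctl status instance -d dgphy -i dgphy1,dgphy2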