【Troubleshooting】A RAC Failure Recovery Process
Item                           source db
DB type                        2-node RAC
DB version                     11.2.0.1.0
DB storage                     ASM
OS version / kernel            RHEL 6.6
A little after 10 p.m., a fellow DBA asked me to help with a RAC cluster that had crashed and would not start, warning me that multipath and storage were involved. I (小麥苗) have never been strong on storage and had barely touched multipath, but since he had come to me I could hardly refuse, so I gritted my teeth and logged in. It turned into an ordeal: many hours of work, help begged from several people, and the problem was not solved until noon the next day. Luckily it was a weekend, so no office work. I am writing the process down in the hope that it helps others.
When I first logged in, CSS on node 1 would not start and threw a pile of errors, and HA on node 2 would not start properly either. I forgot to record the errors; in any case it was round after round of digging through logs, searching MOS, Baidu, and Google, including an attempted OCR restore. In the end nothing worked, so I fell back on my usual last resort: re-running the root.sh script.
I have written about running this script several times on my blog, but it still takes practice, because there are many caveats. First, to keep the disk groups from being deleted, the deconfig command ($ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose) can take the -keepdg option, but 11.2.0.1 does not have that option. When running the deconfig on the second node, you can leave off -lastnode so that as much information as possible is preserved.
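For comparison, the two variants look roughly like this (a sketch; per the note above, -keepdg does not exist in 11.2.0.1, so it must be omitted here):
# On releases that support it, -keepdg preserves the disk groups during deconfig:
#   $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose -keepdg
# A full teardown on the final node would normally add -lastnode; omitting it,
# as done in this recovery, keeps more of the cluster metadata intact:
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose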
Luckily, after the first run the cluster came up normally and all was well; by then it was about 1 a.m., having started around 10 p.m. But when I went to restore the OCR backup, which requires starting CRS in exclusive (-excl) mode, disaster struck again: the cluster broke. There was nothing for it but a reboot, and the reboot made things worse still: the OCR disks could no longer be found. I was ready to give up; with the disks gone there was nothing more I could do, so I decided to find someone who knew storage. It was nearly 2 a.m. Time to rest.
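For reference, the OCR restore that was being attempted follows roughly the flow below (a sketch, run as root on one node; the cluster name "rac-cluster" and the backup file name are hypothetical placeholders):
# start the stack in exclusive mode so the OCR can be written
crsctl start crs -excl
# restore from one of the automatic backups kept under $GRID_HOME/cdata/<cluster_name>/
ocrconfig -restore /u01/app/11.2.0/grid/cdata/rac-cluster/backup00.ocr
# then bounce the stack normally
crsctl stop crs -f
crsctl start crs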
A little after 8 the next morning, I logged straight back in via TeamViewer and picked up where I had left off. First I spent a long while fiddling with multipath: it turned out the multipath software on the second node was broken, so I reinstalled it. I hoped the disks would show up afterwards, but still no luck. Out of options, I looked for a storage expert in leshami's group. Mr. Xiao (肖總) logged in, fixed the storage, and the disks were found again. Many thanks to him.
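For anyone retracing the multipath step, the usual RHEL 6 checks look like this (a sketch assuming device-mapper-multipath plus ASMLib, which the /dev/oracleasm/disks paths imply):
multipath -ll                    # list the multipath maps and the state of every path
service multipathd status        # confirm the multipath daemon is running
/usr/sbin/oracleasm scandisks    # rescan for ASMLib disk stamps once the paths are back
/usr/sbin/oracleasm listdisks    # the OCR/DATA/FLASH volumes should show up again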
Then the recovery continued: deconfig again, then root.sh. After root.sh finished, the cluster was healthy, and a test reboot of the hosts came back clean, so the storage had indeed been the culprit. Next came the real point: recovering the database. Throughout the whole operation I had been extremely careful not to touch any non-OCR disk for fear of losing data, because there was no backup whatsoever of the 10 TB of data (words fail me). A check with kfod showed the disks were all fine, so the next step was simply to MOUNT the disk groups. After re-running root.sh, as long as the disk files in a disk group are undamaged, the group can be mounted directly. This is also one way to recover the OCR when there is no backup.
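The mount itself is one statement per disk group from the ASM instance (a sketch as the grid user; the disk group names come from the kfod listing further down):
data01->sqlplus / as sysasm
SQL> alter diskgroup DATA mount;
SQL> alter diskgroup FLASH mount;
SQL> select name, state from v$asm_diskgroup;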
Everything after that went smoothly: configuring the listener, adding the database to srvctl management, and so on. Thank heavens. Much of the processing log was never recorded, so only some scripts can be given here.
When re-running the root.sh script, the critical question is whether the database's data lives on the OCR disk group. If it does, you must never casually run this script.
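A quick way to verify is to count the files each disk group holds from the ASM instance (a sketch against the standard v$asm views; if the OCR group holds anything besides the cluster files, do not proceed):
SQL> select g.name, count(f.file_number) file_count
     from v$asm_diskgroup g, v$asm_file f
     where f.group_number(+) = g.group_number
     group by g.name;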
1. Run the deconfig on each of the two nodes:
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
$ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force -verbose
2. After the deconfig completes, dd the OCR disks; run on both nodes:
dd if=/dev/zero of=/dev/oracleasm/disks/OCR_VOL2 bs=1024k count=1024
dd if=/dev/zero of=/dev/oracleasm/disks/OCR_VOL1 bs=1024k count=1024
3. Run root.sh on node 1 first; after it completes, run it on node 2:
$ORACLE_HOME/root.sh
In addition, there is a well-known bug when running root.sh on 11.2.0.1:
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start: Inappropriate ioctl for device at /u01/app/11.2.0/grid/crs/install/roothas.pl line 296.
The workaround for this error is to run the following command while root.sh is executing:
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
If the following appears:
/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory
it means the relevant files have not been generated yet; just keep re-running the command until it succeeds (a simple retry loop is sketched below). In general, the right moment to run the dd command is when the "Adding daemon to inittab" message appears.
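A small retry loop saves the manual repetition (a sketch, run as root in a second session while root.sh is executing):
until /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1; do
    sleep 1    # the pipe does not exist yet; try again in a second
done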
Some of root.sh's configuration, including the names of the OCR disks to be created, the disk paths, and so on, lives in the following script:
$ORACLE_HOME/crs/config/config.sh
The following command displays information about all the disks:
data01->export ORACLE_HOME=/u01/app/11.2.0/grid
data01->$ORACLE_HOME/bin/kfod disk=all s=true ds=true c=true
--------------------------------------------------------------------------------
Disk Size Header Path Disk Group User Group
================================================================================
1: 476837 Mb MEMBER /dev/oracleasm/disks/DATA_VOL1 DATA grid asmadmin
2: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL10 DATA grid asmadmin
3: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL11 DATA grid asmadmin
4: 953675 Mb MEMBER /dev/oracleasm/disks/DATA_VOL12 DATA grid asmadmin
5: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL13 DATA grid asmadmin
6: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL14 DATA grid asmadmin
7: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL15 DATA grid asmadmin
8: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL16 DATA grid asmadmin
9: 953675 Mb MEMBER /dev/oracleasm/disks/DATA_VOL18 DATA grid asmadmin
10: 953675 Mb MEMBER /dev/oracleasm/disks/DATA_VOL2 DATA grid asmadmin
11: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL3 DATA grid asmadmin
12: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL4 DATA grid asmadmin
13: 953675 Mb MEMBER /dev/oracleasm/disks/DATA_VOL5 DATA grid asmadmin
14: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL6 DATA grid asmadmin
15: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL7 DATA grid asmadmin
16: 953674 Mb MEMBER /dev/oracleasm/disks/DATA_VOL8 DATA grid asmadmin
17: 953675 Mb MEMBER /dev/oracleasm/disks/DATA_VOL9 DATA grid asmadmin
18: 476837 Mb MEMBER /dev/oracleasm/disks/FLASH_VOL1 FLASH grid asmadmin
19: 286103 Mb MEMBER /dev/oracleasm/disks/FLASH_VOL2 FLASH grid asmadmin
20: 286057 Mb MEMBER /dev/oracleasm/disks/OCR_VOL1 OCR grid asmadmin
21: 286102 Mb CANDIDATE /dev/oracleasm/disks/OCR_VOL2 # grid asmadmin
22: 476837 Mb MEMBER ORCL:DATA_VOL1 DATA <unknown> <unknown>
23: 953674 Mb MEMBER ORCL:DATA_VOL10 DATA <unknown> <unknown>
24: 953674 Mb MEMBER ORCL:DATA_VOL11 DATA <unknown> <unknown>
25: 953675 Mb MEMBER ORCL:DATA_VOL12 DATA <unknown> <unknown>
26: 953674 Mb MEMBER ORCL:DATA_VOL13 DATA <unknown> <unknown>
27: 953674 Mb MEMBER ORCL:DATA_VOL14 DATA <unknown> <unknown>
28: 953674 Mb MEMBER ORCL:DATA_VOL15 DATA <unknown> <unknown>
29: 953674 Mb MEMBER ORCL:DATA_VOL16 DATA <unknown> <unknown>
30: 953675 Mb MEMBER ORCL:DATA_VOL18 DATA <unknown> <unknown>
31: 953675 Mb MEMBER ORCL:DATA_VOL2 DATA <unknown> <unknown>
32: 953674 Mb MEMBER ORCL:DATA_VOL3 DATA <unknown> <unknown>
33: 953674 Mb MEMBER ORCL:DATA_VOL4 DATA <unknown> <unknown>
34: 953675 Mb MEMBER ORCL:DATA_VOL5 DATA <unknown> <unknown>
35: 953674 Mb MEMBER ORCL:DATA_VOL6 DATA <unknown> <unknown>
36: 953674 Mb MEMBER ORCL:DATA_VOL7 DATA <unknown> <unknown>
37: 953674 Mb MEMBER ORCL:DATA_VOL8 DATA <unknown> <unknown>
38: 953675 Mb MEMBER ORCL:DATA_VOL9 DATA <unknown> <unknown>
39: 476837 Mb MEMBER ORCL:FLASH_VOL1 FLASH <unknown> <unknown>
40: 286103 Mb MEMBER ORCL:FLASH_VOL2 FLASH <unknown> <unknown>
41: 286057 Mb MEMBER ORCL:OCR_VOL1 OCR <unknown> <unknown>
42: 286102 Mb CANDIDATE ORCL:OCR_VOL2 # <unknown> <unknown>
ORACLE_SID ORACLE_HOME HOST_NAME
+ASM1 /u01/app/11.2.0/grid data01
+ASM2 /u01/app/11.2.0/grid data02
data01->
data01->sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Sat Dec 10 12:27:25 2016
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL>
SQL> alter diskgroup OCR ADD DISK '/dev/oracleasm/disks/OCR_VOL2';
Diskgroup altered.
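Adding a disk kicks off a background rebalance; its progress can be watched from the same session (standard ASM views, shown here as a quick sanity check):
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;
SQL> select name, state, total_mb, free_mb from v$asm_disk where name like 'OCR%';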
11.2.0.1 does not have the -c parameter, so just drop it; the exact usage can be checked with -h:
srvctl add database -d DGPHY -c RAC -o /oracle/app/oracle/product/11.2.0/db -p '+DATA/TESTDGPHY/PARAMETERFILE/spfiledgphy.ora' -r primary -n TESTDG
srvctl add instance -d DGPHY -i DGPHY1 -n ZFZHLHRDB1
srvctl add instance -d DGPHY -i DGPHY2 -n ZFZHLHRDB2
srvctl status database -d DGPHY
srvctl start database -d TESTDG
This post is reproduced from the lhrbest 51CTO blog. Original link: http://blog.51cto.com/lhrbest/1881547. For reprints, please contact the original author.