
RedHat EL5: Installing Oracle 10g RAC -- CRS Installation


System environment:

OS: RedHat EL5

Cluster: Oracle CRS 10.2.0.1.0

Oracle: Oracle Database 10.2.0.1.0

(Figure: RAC system architecture)

2. CRS Installation

Cluster Ready Services (CRS) is the Oracle software that manages cluster resources for RAC; it must be installed before anything else when building a RAC system.

The installation is done in graphical mode, as the oracle user (on node1).
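Since OUI is a GUI tool, an X display must be reachable before launching it. A minimal sketch, assuming the installer is started from node1's local console (the xhost/DISPLAY settings are illustrative, not from the original install):

[root@node1 ~]# xhost +SI:localuser:oracle    # allow the oracle user to open X windows
[root@node1 ~]# su - oracle
[oracle@node1 ~]$ export DISPLAY=:0.0         # point at the local X display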

Note: edit the installer configuration file to add redhat-5 support.

[oracle@node1 install]$ pwd
/home/oracle/cluster/install
[oracle@node1 install]$ ls
addLangs.sh  images   oneclick.properties  oraparamsilent.ini  response
addNode.sh   lsnodes  oraparam.ini         resource            unzip
[oracle@node1 install]$ vi oraparam.ini

[Certified Versions]
Linux=redhat-3,SuSE-9,redhat-4,redhat-5,UnitedLinux-1.0,asianux-1,asianux-2
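OUI matches the [Certified Versions] entries against the installed distribution; a quick way to confirm what the host reports (output is illustrative for EL5):

[oracle@node1 install]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5 (Tikanga)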

[oracle@node1 cluster]$ ./runInstaller

Welcome screen.

Note the CRS home directory: it must not be the same as the Oracle software home, and needs its own separate directory:

[oracle@node1 ~]$ ls -l /u01
total 24
drwxr-xr-x  3 oracle oinstall  4096 May  5 17:04 app
drwxr-xr-x 36 oracle oinstall  4096 May  7 11:08 crs_1
drwx------  2 oracle oinstall 16384 May  4 15:59 lost+found
[oracle@node1 ~]$
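This split usually shows up in the oracle user's environment as two separate home variables. A minimal sketch of the relevant ~/.bash_profile lines, assuming the paths above (the RDBMS home path is an assumption, not shown in the listing):

# ~/.bash_profile of the oracle user
export ORACLE_BASE=/u01/app/oracle
export ORA_CRS_HOME=/u01/crs_1                         # Clusterware home, outside ORACLE_BASE
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1    # assumed RDBMS home
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:$PATH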

Add the nodes (if the trust relationship between the hosts is misconfigured, node2 cannot be discovered here).
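A quick sanity check of user equivalence before this step, assuming SSH keys were exchanged during host preparation (hostnames as used in this cluster):

[oracle@node1 ~]$ ssh node2 date         # must return a date with no password prompt
[oracle@node1 ~]$ ssh node2-priv date    # check the private hostname as well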

Modify the public NIC attributes (the public NIC is used for communication with clients).

The OCR must be on a raw device (external redundancy needs only one raw device; a mirror can be added after installation).

The voting disk must also be on a raw device (external redundancy needs only one raw device; additional raw devices can be added after installation for redundancy).
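For reference, on RHEL5 the raw bindings are typically declared in /etc/sysconfig/rawdevices. A minimal sketch, assuming /dev/raw/raw1 holds the OCR and /dev/raw/raw2 the voting disk (the underlying partitions are illustrative; raw2 as the voting device matches the root.sh output below):

# /etc/sysconfig/rawdevices -- partition names are illustrative
/dev/raw/raw1 /dev/sdb1    # OCR
/dev/raw/raw2 /dev/sdb2    # voting disk

[root@node1 ~]# service rawdevices restart
[root@node1 ~]# chown root:oinstall /dev/raw/raw1      # OCR is owned by root
[root@node1 ~]# chown oracle:oinstall /dev/raw/raw2    # voting disk is owned by oracle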

Start the installation (the software is also pushed to node2).

The installer then prompts you to run the scripts on the two nodes, in order.

node1:

[root@node1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete

node2:

[root@node2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh

[root@node1 ~]# /u01/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
assigning default hostname node1 for node 1.
assigning default hostname node2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: node1 node1-priv node1
node 2: node2 node2-priv node2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        node1
CSS is inactive on these nodes.
        node2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

root.sh succeeded on node1!

[root@node2 ~]# /u01/crs_1/root.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

The error above appears; the fix is to edit the vipca and srvctl scripts:

[root@node2 bin]# vi vipca

Linux) LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
       export LD_LIBRARY_PATH
       #Remove this workaround when the bug 3937317 is fixed
       arch=`uname -m`
       if [ "$arch" = "i686" -o "$arch" = "ia64" ]
       then
            LD_ASSUME_KERNEL=2.4.19
            export LD_ASSUME_KERNEL
       fi
       unset LD_ASSUME_KERNEL    # (add this line)
       #End workaround

[root@node2 bin]# vi srvctl

LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # (add this line)
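To confirm the edits took effect in both scripts, a quick check (assuming the Clusterware home used above):

[root@node2 bin]# grep -n LD_ASSUME_KERNEL /u01/crs_1/bin/vipca /u01/crs_1/bin/srvctl
# each workaround block should now end with "unset LD_ASSUME_KERNEL"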

Re-run root.sh on node 2.

Note: root.sh may only be executed once; to run it again, you must first run rootdelete.sh.

[root@node2 bin]# /u01/crs_1/root.sh
Oracle CRS stack is already configured and will be running under init(1M)

[root@node2 bin]# cd ../install
[root@node2 install]# ls
cluster.ini         install.incl   rootaddnode.sbs    rootdelete.sh  templocal
cmdllroot.sh        make.log       rootconfig         rootinstall
envVars.properties  paramfile.crs  rootdeinstall.sh   rootlocaladd
install.excl        preupdate.sh   rootdeletenode.sh  rootupgrade

[root@node2 install]# ./rootdelete.sh
CRS-0210: Could not find resource 'ora.node2.LISTENER_NODE2.lsnr'.
CRS-0210: Could not find resource 'ora.node2.ons'.
CRS-0210: Could not find resource 'ora.node2.vip'.
CRS-0210: Could not find resource 'ora.node2.gsd'.
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr'
[root@node2 install]#

root.sh fails again on node 2:

[root@node2 install]# /u01/crs_1/root.sh
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]

Solution: configure the cluster network interfaces with oifcfg (note that getif initially returns nothing):

[root@node2 bin]# ./oifcfg iflist
eth0  192.168.8.0
eth1  10.10.10.0
[root@node2 bin]# ./oifcfg getif
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.8.0:public
[root@node2 bin]# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
[root@node2 bin]# ./oifcfg getif
eth0  192.168.8.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
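The setif argument format is interface/subnet:type, where type is either public or cluster_interconnect. If an interface is registered incorrectly, it can be removed and re-added (a sketch using the same interfaces as above):

[root@node2 bin]# ./oifcfg delif -global eth0
[root@node2 bin]# ./oifcfg setif -global eth0/192.168.8.0:public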

Then run vipca on node2:

Run vipca as root (from /u01/crs_1/bin).

The configuration entered must match the /etc/hosts file.
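For reference, a typical /etc/hosts layout consistent with the subnets used above (the concrete IP addresses, including the VIPs, are illustrative assumptions):

# public network (eth0, 192.168.8.0)
192.168.8.11   node1
192.168.8.12   node2
# private interconnect (eth1, 10.10.10.0)
10.10.10.11    node1-priv
10.10.10.12    node2-priv
# virtual IPs configured by vipca
192.168.8.21   node1-vip
192.168.8.22   node2-vip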

Start the configuration.

Once vipca completes successfully, the CRS services work normally.

Installation complete!

Verify CRS:

[root@node2 bin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.node1.gsd  application    ONLINE    ONLINE    node1
ora.node1.ons  application    ONLINE    ONLINE    node1
ora.node1.vip  application    ONLINE    ONLINE    node1
ora.node2.gsd  application    ONLINE    ONLINE    node2
ora.node2.ons  application    ONLINE    ONLINE    node2
ora.node2.vip  application    ONLINE    ONLINE    node2

The same check can be run on node1:

[root@node1 ~]# crs_stat -t
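Beyond crs_stat, two more quick health checks can be run from the Clusterware bin directory (output omitted):

[root@node1 bin]# ./crsctl check crs    # checks the CSS, CRS and EVM daemons
[root@node1 bin]# ./olsnodes -n         # lists the cluster members with node numbers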

Appendix: an error case

If the following error appears while running root.sh:

Run vipca (as root) on the node where the error occurred to resolve it!

At this point, the CRS installation is complete!
