
Rebuilding CRS on Oracle 10g RAC

1. The CRS files are corrupted; remove the existing CRS configuration
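The rootdelete.sh and rootdeinstall.sh scripts used below live in the install directory of the Clusterware home (here /u01/oracle/product/10.2.0.1/crs_1, as shown by the root.sh path further down). A minimal sketch of locating them, assuming that home:

# run as root on each node; the path is taken from the root.sh invocation in step 2
cd /u01/oracle/product/10.2.0.1/crs_1/install
ls rootdelete.sh rootdeinstall.sh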

On node rac10g01:

[root@rac10g01 install]# ./rootdelete.sh

Shutting down Oracle Cluster Ready Services (CRS):

Stopping resources.

Error while stopping resources. Possible cause: CRSD is down.

Stopping CSSD.

Shutting down CSS daemon.

Shutdown request successfully issued.

Shutdown has begun. The daemons should exit soon.

Checking to see if Oracle CRS stack is down...

Oracle CRS stack is not running.

Oracle CRS stack is down now.

Removing script for Oracle Cluster Ready services

Updating ocr file for downgrade

Cleaning up SCR settings in '/etc/oracle/scls_scr'

[root@rac10g01 install]# ./rootdeinstall.sh

Removing contents from OCR mirror device

2560+0 records in

2560+0 records out

Removing contents from OCR device

[root@rac10g01 install]#

On node rac10g02:

[root@rac10g02 install]# ./rootdelete.sh
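Before reinstalling, it is worth confirming that the old stack is really gone on both nodes and that the raw devices for the OCR and voting disk are still bound. A quick check, as a sketch (only /dev/raw/raw3 is confirmed as the voting disk by the root.sh output below; the other bindings are site-specific):

# run as root on both nodes
ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep   # should return nothing
ls -l /etc/init.d/init.crs* 2>/dev/null            # init scripts were removed by rootdelete.sh
ls -l /dev/raw/                                    # OCR and voting disk raw bindings should still exist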

2. Reinstall the CRS configuration

[root@rac10g01 install]# sh /u01/oracle/oraInventory/orainstRoot.sh

Changing permissions of /u01/oracle/oraInventory to 770.

Changing groupname of /u01/oracle/oraInventory to oinstall.

The execution of the script is complete

[root@rac10g01 install]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh

WARNING: directory '/u01/oracle/product/10.2.0.1' is not owned by root

WARNING: directory '/u01/oracle/product' is not owned by root

WARNING: directory '/u01/oracle' is not owned by root

WARNING: directory '/u01' is not owned by root

Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory

Setting up NS directories

Oracle Cluster Registry configuration upgraded successfully

Successfully accumulated necessary OCR keys.

Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

node <nodenumber>: <nodename> <private interconnect name> <hostname>

node 1: rac10g01 rac10g01-priv rac10g01

node 2: rac10g02 rac10g02-priv rac10g02

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Now formatting voting device: /dev/raw/raw3

Format of 1 voting devices complete.

Startup will be queued to init within 90 seconds.

Adding daemons to inittab

Expecting the CRS daemons to be up within 600 seconds.

CSS is active on these nodes.

        rac10g01

CSS is inactive on these nodes.

        rac10g02

Local node checking complete.

Run root.sh on remaining nodes to start CRS daemons.

[root@rac10g01 install]# 
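At this point CSS is active only on the first node. Before moving on, the local stack and the newly formatted voting disk can be checked; a sketch using the standard 10gR2 clusterware tools under the CRS home:

# run on rac10g01
/u01/oracle/product/10.2.0.1/crs_1/bin/crsctl check crs           # CSS, CRS and EVM daemon status
/u01/oracle/product/10.2.0.1/crs_1/bin/crsctl query css votedisk  # should list /dev/raw/raw3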

[root@rac10g02 ~]# sh /u01/oracle/oraInventory/orainstRoot.sh

[root@rac10g02 ~]# sh /u01/oracle/product/10.2.0.1/crs_1/root.sh

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.

-force is destructive and will destroy any previous cluster

configuration.

Oracle Cluster Registry for cluster has already been initialized

/etc/profile: line 61: ulimit: open files: cannot modify limit: Operation not permitted

CSS is active on all nodes.

Waiting for the Oracle CRSD and EVMD to start

Oracle CRS stack installed and running under init(1M)

Running vipca(silent) for configuring nodeapps

Error 0(Native: listNetInterfaces:[3])

  [Error 0(Native: listNetInterfaces:[3])]

[root@rac10g02 ~]#
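The "Error 0(Native: listNetInterfaces:[3])" from the silent vipca run means the rebuilt OCR contains no public/private network interface definitions yet, so the nodeapps could not be created automatically. A common manual follow-up, shown here as a sketch with assumed interface names (eth0 public, eth1 interconnect) and assumed subnets, is to register the interfaces with oifcfg and then run vipca as root:

# run as root on one node; interface names and subnets below are assumptions for this sketch
/u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg getif                                        # empty after the OCR rebuild
/u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth0/192.168.56.0:public
/u01/oracle/product/10.2.0.1/crs_1/bin/oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
export DISPLAY=:0   # vipca is a GUI tool in 10g and needs an X display
/u01/oracle/product/10.2.0.1/crs_1/bin/vipca

On some Linux releases the vipca and srvctl scripts also need the well-known LD_ASSUME_KERNEL workaround before they will start; that is a separate issue and not shown here.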

3. Verify the result

[oracle@rac10g02 ~]$ crs_stat -t

Name           Type           Target    State     Host

------------------------------------------------------------

ora....g01.gsd application    ONLINE    ONLINE    rac10g01

ora....g01.ons application    ONLINE    ONLINE    rac10g01

ora....g01.vip application    ONLINE    ONLINE    rac10g01

ora....g02.gsd application    ONLINE    ONLINE    rac10g02

ora....g02.ons application    ONLINE    ONLINE    rac10g02

ora....g02.vip application    ONLINE    ONLINE    rac10g02

[oracle@rac10g02 ~]$
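Note that the rebuilt OCR now holds only the nodeapps (GSD, ONS, VIP); the ASM instances, the database and its instances, and the listeners that existed before the rebuild are no longer registered and have to be added back with srvctl (and netca for the listeners). A sketch, with the database name, instance names and Oracle home path assumed purely for illustration:

# run as oracle; names and paths below are assumptions, adjust to the real environment
srvctl add asm -n rac10g01 -i +ASM1 -o /u01/oracle/product/10.2.0.1/db_1
srvctl add asm -n rac10g02 -i +ASM2 -o /u01/oracle/product/10.2.0.1/db_1
srvctl add database -d racdb -o /u01/oracle/product/10.2.0.1/db_1
srvctl add instance -d racdb -i racdb1 -n rac10g01
srvctl add instance -d racdb -i racdb2 -n rac10g02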

This article was reposted from the 51CTO blog of z597011036. Original link: http://blog.51cto.com/tongcheng/1872338. Please contact the original author before reprinting.