
Problems encountered with cmquerycl on an HP-UX two-node cluster, and how to fix them

When configuring the two-node cluster, first run the cluster environment check:

# cmquerycl -n hlrdb1 -n hlrdb2 -v -C /etc/cmcluster/clhlrdb.ascii

Begin checking the nodes...

Warning: Unable to determine local domain name for hlrdb1

Looking for other clusters ... Done

Gathering configuration information ..

Gathering storage information ..

Found 10 devices on node hlrdb1

Found 16 devices on node hlrdb2

Analysis of 26 devices should take approximately 4 seconds

0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%

Found 3 volume groups on node hlrdb1

Found 3 volume groups on node hlrdb2

Analysis of 6 volume groups should take approximately 1 seconds

0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%

.....

Gathering Network Configuration ....... Done

Note: Disks were discovered which are not in use by either LVM or VxVM.

      Use pvcreate(1M) to initialize a disk for LVM or,

      use vxdiskadm(1M) to initialize a disk for VxVM.

Warning: Volume group /dev/vg00 is configured differently on node hlrdb1 than on node hlrdb2

Error: Volume group /dev/vg00 on node hlrdb2 does not appear to have a physical volume corresponding to /dev/dsk/c2t1d0 on node hlrdb1 (24142709971117077841).

Warning: Volume group /dev/vg00 is configured differently on node hlrdb2 than on node hlrdb1

Error: Volume group /dev/vg00 on node hlrdb1 does not appear to have a physical volume corresponding to /dev/dsk/c2t1d0 on node hlrdb2 (24142709991117033934).

Warning: The volume group /dev/vg00 is activated on more than one node:

hlrdb1

hlrdb2

Warning: Volume groups should not be activated on more than one node.

Use vgchange to de-activate a volume group on a node.

Failed to gather configuration information.

At this point the cluster check has failed and configuration cannot proceed.

Googling turned up the following forum thread:

http://forums1.itrc.hp.com/service/foru...threadId=657218

An excerpt explaining the cause:

When a volume group is created, it is given a unique VGID - a merger of the server's machine ID (uname -i) and the timestamp of the VG creation date. To save time, administrators may have used dd or copyutil to clone vg00 onto another server's disks. Unfortunately, this also copies the same VGID to the new server.

MC/ServiceGuard uses LVM structures such as the Volume Group ID (VGID) and Physical Volume ID (PVID) to determine which VGs are shared (common to both servers). If the PVID and VGID of vg00 are the same on both servers, cmquerycl (ServiceGuard) considers them to be the same VG. Subsequent alterations of the vg00 LVM structures (such as adding a disk) are then interpreted by cmquerycl as unresolvable LVM differences between the servers, terminating the command.

In short, the cluster software uses the VGID and PVID to decide whether a VG is a shared disk; because the disks holding vg00 on the two nodes were cloned from each other, their PVIDs and VGIDs are identical, which triggers this problem.
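Before applying the fix, it is worth confirming that the two nodes really do carry identical IDs. A minimal check, assuming the usual LVM header layout in which the 8-byte PVID and 8-byte VGID sit at byte offset 8200 of the physical volume (the offset is an assumption, not something stated in this article), run on each node against its boot disk (c2t0d0 in this example):

   # dd if=/dev/rdsk/c2t0d0 bs=1 skip=8200 count=16 2>/dev/null | od -An -tx4

If the 16 bytes printed on hlrdb1 match those printed on hlrdb2, the vg00 disks were indeed cloned and the procedure below applies.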

The solution is as follows:

0. Run lvlnboot -v /dev/vg00 and save the output for later comparison

1. Boot into LVM maintenance mode (see the console sketch after the commands below)

   #shutdown -ry 0

   ISL>hpux -lm
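   Reaching the ISL prompt depends on the hardware; on a typical PA-RISC server the console interaction looks roughly like the sketch below (an assumption about the hardware, not part of the original article): interrupt autoboot, boot the primary path, and answer yes to interacting with the IPL.

   Main Menu: Enter command or menu > bo pri
   Interact with IPL (Y, N, or Cancel)?> y
   ISL> hpux -lm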

2. Export vg00 and save its configuration to a map file

   vgexport -v -m vg00.map /dev/vg00

3. Change the VGID; if the root disk is mirrored, both disks must be given in the same command

   vgchgid -f /dev/rdsk/c2t0d0 /dev/rdsk/c2t1d0

   Answer 'y' when prompted

4. Re-import the VG

   mkdir /dev/vg00

   mknod /dev/vg00/group c 64 0x000000

   vgimport -v -m vg00.map /dev/vg00 /dev/rdsk/c2t0d0 /dev/rdsk/c2t1d0

5. Rebuild the LVM boot area

   vgchange -a y vg00

   lvlnboot -b /dev/vg00/lvol1

   lvlnboot -r /dev/vg00/lvol3

   lvlnboot -s /dev/vg00/lvol2

   lvlnboot -d /dev/vg00/lvol2

   lvlnboot -R /dev/vg00

   Run lvlnboot -v /dev/vg00 again and compare with the output saved in step 0; the two should be identical (an optional extra check follows).
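   As an extra sanity check (not part of the original procedure), a vgdisplay run once vg00 is active again should list both mirror disks as physical volumes:

   vgdisplay -v /dev/vg00 | grep "PV Name"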

6. Check and fix the default LVM boot command

   mount /usr

   lifcp /dev/rdsk/c2t0d0:AUTO -

   lifcp /dev/rdsk/c2t1d0:AUTO -

   If either of the two commands above prints something other than "hpux", fix it with the command below, substituting the affected disk for c2tYd0 (a concrete example for this article's mirrored disks follows):

   mkboot -a "hpux" /dev/rdsk/c2tYd0
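   For the disk layout used in this article (the mirrored pair c2t0d0 and c2t1d0), that would presumably mean running it against whichever disk failed the check:

   mkboot -a "hpux" /dev/rdsk/c2t0d0
   mkboot -a "hpux" /dev/rdsk/c2t1d0

   Re-running the two lifcp commands above should then show "hpux" for both disks.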

7. Reboot

   shutdown -ry 0
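After the reboot, re-run the check from the beginning of this article; with the VGIDs now unique on each node, it should complete without the errors above and write the cluster ASCII file:

# cmquerycl -n hlrdb1 -n hlrdb2 -v -C /etc/cmcluster/clhlrdb.ascii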
