HANA Cluster Configuration

1.1 Configure hostnames

Edit the /etc/hosts file on each of the four nodes (linux001, linux002, linux003, linux004); the content must be identical on every node.

[root@linux001 ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 linux001 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
172.23.176.103 linux001
172.23.176.101 linux002
172.23.176.104 linux003
172.23.176.102 linux004

The added entries are 172.23.176.103 linux001, 172.23.176.101 linux002, 172.23.176.104 linux003, and 172.23.176.102 linux004; perform the same edit on every server. When you finish editing, press Esc, then Shift+: and type wq! to save and exit.
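
As a quick sanity check of name resolution (a minimal sketch, assuming the four hostnames above), ping each node once from every server:

for h in linux001 linux002 linux003 linux004; do
    ping -c 1 "$h"    # each name should resolve to the address just added to /etc/hosts
done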

1.2 Configure SSH mutual trust

Take linux001 and linux002 as an example. First, set up passwordless login from linux001 to linux002. On linux001, run:

[root@linux001 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
[root@linux001 ~]# cd /root/.ssh
[root@linux001 .ssh]# mv id_dsa.pub linux001.pub
[root@linux001 .ssh]# scp linux001.pub root@linux002:/root/.ssh

On linux002, run:

[root@linux002 ~]# cd /root/.ssh
[root@linux002 .ssh]# cat linux001.pub >> authorized_keys
[root@linux002 .ssh]# chmod 600 authorized_keys
[root@linux002 .ssh]# cd ..
[root@linux002 ~]# chmod 700 .ssh

Next, set up passwordless login from linux002 to linux001. On linux002, run:

[root@linux002 ~]# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
[root@linux002 ~]# cd /root/.ssh
[root@linux002 .ssh]# mv id_dsa.pub linux002.pub
[root@linux002 .ssh]# scp linux002.pub root@linux001:/root/.ssh

On linux001, run:

[root@linux001 ~]# cd /root/.ssh
[root@linux001 .ssh]# cat linux002.pub >> authorized_keys
[root@linux001 .ssh]# chmod 600 authorized_keys
[root@linux001 .ssh]# cd ..
[root@linux001 ~]# chmod 700 .ssh

Configure the remaining pairs the same way: linux001 and linux003, linux001 and linux004, linux002 and linux003, linux002 and linux004, and linux003 and linux004 (12 configurations in total, one per direction).
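
Once all twelve directions are configured, a loop like the following (a sketch, run on each node in turn) should print every peer's hostname without prompting for a password:

for h in linux001 linux002 linux003 linux004; do
    ssh root@"$h" hostname    # no password prompt should appear for any pair
done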

2 Multipath configuration

2.1 Required packages

The following packages are required:

device-mapper-1.02.13-6.9.i586.rpm

This package operates at the lower layer and mainly performs device virtualization and mapping.

multipath-tools-0.4.7-34.18.i586.rpm

The multipath management and monitoring tool; it mainly detects path status and performs path management.

Check whether these packages are installed; install any that are missing (on all four machines):

[root@linux001 ~]# rpm -qa | grep device-mapper     (checks whether device-mapper is installed; if nothing is printed, install it with zypper install <package name>)
[root@linux001 ~]# rpm -qa | grep multipath-tools   (checks whether multipath-tools is installed; if nothing is printed, install it with zypper install <package name>)

Install commands:

[root@linux001 ~]# zypper install device-mapper
[root@linux001 ~]# zypper install multipath-tools

If the multipath modules did not load successfully, use the following commands to initialize DM, or reboot the system:

# Use the following commands to initialize and start DM for the first time:
[root@linux001 ~]# modprobe dm-multipath
[root@linux001 ~]# modprobe dm-round-robin
[root@linux001 ~]# service multipathd start
[root@linux001 ~]# multipath -v2
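
To confirm the modules actually loaded, a quick check (matching the module names loaded by the modprobe commands above):

lsmod | grep -E 'dm_multipath|dm_round_robin'    # both modules should be listed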

2.2 View WWIDs and path status

linux001:/ # multipath -ll    (on any one node)
mpath1 (36000d31000eea500000000000000000008) dm-11 COMPELNT,Compellent Vol
[size=8.0T][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:5:6 sdh 8:112  [active][undef]
 \_ 4:0:7:6 sdn 8:208  [active][undef]
 \_ 4:0:7:6 sdw 65:96  [active][undef]
 \_ 4:0:7:6 sdz 65:144 [active][undef]

mpath2 (36000d31000eea500000000000000000003) dm-6 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:4:1 sdc 8:32   [active][undef]
 \_ 4:0:6:1 sdi 8:128  [active][undef]
 \_ 4:0:4:1 sdo 8:224  [active][undef]
 \_ 4:0:5:1 sdr 65:16  [active][undef]

mpath3 (36000d31000eea500000000000000000007) dm-8 COMPELNT,Compellent Vol
[size=500G][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:4:5 sde 8:64   [active][undef]
 \_ 4:0:6:5 sdk 8:160  [active][undef]
 \_ 4:0:4:5 sdq 65:0   [active][undef]
 \_ 4:0:5:5 sdt 65:48  [active][undef]

To give the multipath pseudo-devices (mpath1, mpath2, mpath3) more meaningful names, we can edit the relevant configuration file. Note down the WWID of each pseudo-device, for example the 36000d31000eea500000000000000000008 in parentheses after mpath1. Also, the [active][undef] status after each device name means the paths have not been fully activated yet.
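
If you want to cross-check a WWID directly against one of the member paths instead of reading it from multipath -ll, scsi_id can query the device (a sketch; /lib/udev/scsi_id is the usual location on SLES, and /dev/sdh is one of the mpath1 paths shown above, so verify both on your system):

/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdh
# should print 36000d31000eea500000000000000000008, the WWID of mpath1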

2.3 Create and edit the configuration file

Create a multipath.conf configuration file; it is not created automatically at installation time, but a template is shipped with the package. The following command produces a new /etc/multipath.conf from the bundled multipath.conf.synthetic template:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

Then edit /etc/multipath.conf:

vi /etc/multipath.conf

blacklist {
    devnode "^sda"
    devnode "^sdb"
}                                  # blacklist the local disks sda and sdb

defaults {
    user_friendly_names yes
}                                  # set user_friendly_names to yes

multipaths {
    multipath {
        wwid 36000d31000eea500000000000000000008    # WWID of mpath1
        alias hana-8.0T
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback manual
        rr_weight priorities
        no_path_retry 5
        rr_min_io 100
    }
    multipath {
        wwid 36000d31000eea500000000000000000003    # WWID of mpath2
        alias hana-2.0T
    }
    multipath {
        wwid 36000d31000eea500000000000000000007    # WWID of mpath3
        alias hana-500G
    }
}

Copy the modified multipath.conf to the other three servers with scp:

[root@linux001 ~]# scp /etc/multipath.conf root@linux002:/etc/multipath.conf
[root@linux001 ~]# scp /etc/multipath.conf root@linux003:/etc/multipath.conf
[root@linux001 ~]# scp /etc/multipath.conf root@linux004:/etc/multipath.conf
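
Before restarting the service, it is worth confirming that all four copies really match (a sketch that relies on the SSH trust from section 1.2):

md5sum /etc/multipath.conf
for h in linux002 linux003 linux004; do
    ssh root@"$h" md5sum /etc/multipath.conf    # checksums should equal the local one
done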

2.4 Restart the multipathd service

Restart the service (on every node):

[root@linux001 ~]# /etc/init.d/multipathd stop
Stopping multipathd daemon:                                    [ OK ]
[root@linux001 ~]# /etc/init.d/multipathd start
Starting multipathd daemon:                                    [ OK ]    ([ OK ] means the service started normally)

Enable the service at boot (runlevels 2, 3 and 5, on every node):

[root@linux001 ~]# chkconfig --level 235 multipathd on
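
You can verify that the runlevel setting took effect (a quick check):

chkconfig --list multipathd    # runlevels 2, 3 and 5 should show "on"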

2.5 檢視multipath狀态

linux001:/ # multipath -ll    (run on every node)
mpath1 (36000d31000eea500000000000000000008) dm-11 COMPELNT,Compellent Vol
[size=8.0T][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:5:6 sdh 8:112  active ready running
 \_ 4:0:7:6 sdn 8:208  active ready running
 \_ 4:0:7:6 sdw 65:96  active ready running
 \_ 4:0:7:6 sdz 65:144 active ready running

mpath2 (36000d31000eea500000000000000000003) dm-6 COMPELNT,Compellent Vol
[size=2.0T][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:4:1 sdc 8:32   active ready running
 \_ 4:0:6:1 sdi 8:128  active ready running
 \_ 4:0:4:1 sdo 8:224  active ready running
 \_ 4:0:5:1 sdr 65:16  active ready running

mpath3 (36000d31000eea500000000000000000007) dm-8 COMPELNT,Compellent Vol
[size=500G][features=1 queue_if_no_path][hwhandler=0] wp=rw
\_ round-robin 0 [prio=1] status=active
 \_ 3:0:4:5 sde 8:64   active ready running
 \_ 4:0:6:5 sdk 8:160  active ready running
 \_ 4:0:4:5 sdq 65:0   active ready running
 \_ 4:0:5:5 sdt 65:48  active ready running

When the output shows active ready running as above, the multipath aliases have been configured successfully.

3 OCFS2 configuration

3.1 Required packages

ocfs2-tools-1.6.4-0.3.5

ocfs2-dmp-default-1.6_3.0.13_0.27-0.5.84

ocfs2-tools-o2cb-1.6.4-0.3.5

ocfs2console-1.6.4-0.3.5

Check whether these packages are installed; install any that are missing (on all four machines):

[root@linux001 ~]# rpm -qa | grep ocfs2-tools
[root@linux001 ~]# rpm -qa | grep ocfs2-dmp-default
[root@linux001 ~]# rpm -qa | grep ocfs2-tools-o2cb
[root@linux001 ~]# rpm -qa | grep ocfs2console

Install command:

[root@linux001 ~]# zypper install ocfs2-tools ocfs2-tools-o2cb ocfs2console ocfs2-dmp-default

3.2 Configure cluster nodes

Add the nodes (on any one node):

ocfs2console --> Cluster --> Node Configuration --> add every node; Name is the hostname, and IP is the node's private network IP.

Synchronize the nodes (on the same node):

ocfs2console --> Cluster --> Propagate Cluster Configuration pushes the configuration from linux001 to linux002, linux003 and linux004.

[root@linux001 ~]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 172.23.176.103
        number = 0
        name = linux001
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 172.23.176.101
        number = 1
        name = linux002
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 172.23.176.104
        number = 2
        name = linux003
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 172.23.176.102
        number = 3
        name = linux004
        cluster = ocfs2

cluster:
        node_count = 4
        name = ocfs2
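
A quick way to confirm the propagation worked (a sketch that relies on the SSH trust from section 1.2) is to diff each remote copy against the local file; empty output means the copies are identical:

for h in linux002 linux003 linux004; do
    ssh root@"$h" cat /etc/ocfs2/cluster.conf | diff /etc/ocfs2/cluster.conf -
done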

3.3 Configure o2cb

Run on every node:

[root@linux001 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Checking O2CB cluster configuration : Failed

3.4 Partition the disks

[root@linux001 ~]# fdisk /dev/mapper/hana-500G
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

Partition the other devices in the same way. (In practice you can also format an entire device without partitioning it; that is what was done in this deployment, so the partitioning step was actually skipped.)
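
If you do partition a multipath device, the new partition may not appear under /dev/mapper until the partition mappings are re-read (a sketch; the _part1 suffix is the usual device-mapper naming, but verify it on your system):

kpartx -a /dev/mapper/hana-500G    # create mappings for the new partitions
ls /dev/mapper/                    # e.g. hana-500G_part1 should now be listed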

3.5 Format the file systems

Format the file systems (on any one node):

[root@linux001 ~]# mkfs.ocfs2 /dev/mapper/hana-8.0T
[root@linux001 ~]# mkfs.ocfs2 /dev/mapper/hana-2.0T
[root@linux001 ~]# mkfs.ocfs2 /dev/mapper/hana-500G

Create the data directories:

[root@linux001 ~]# mkdir /saphana
[root@linux001 ~]# mkdir -p /saphana/data
[root@linux001 ~]# mkdir -p /saphana/log
[root@linux001 ~]# mkdir -p /saphana/shared

Mount the directories (these mounts are lost after a reboot):

[root@linux001 ~]# mount -t ocfs2 -o nointr /dev/mapper/hana-8.0T /saphana/data
[root@linux001 ~]# mount -t ocfs2 -o nointr /dev/mapper/hana-2.0T /saphana/log
[root@linux001 ~]# mount -t ocfs2 -o nointr /dev/mapper/hana-500G /saphana/shared

Check the mounts (on every node):

[root@linux001 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root   30G  4.8G   24G  17% /
devtmpfs                 253G  320K  253G   1% /dev
tmpfs                    380G   88K  380G   1% /dev/shm
/dev/sda1                152M   36M  108M  25% /boot
/dev/dm-11               8.0T   13G  8.0T   1% /saphana/data
/dev/dm-6                2.0T   13G  2.0T   1% /saphana/log
/dev/dm-8                500G  2.9G  498G   1% /saphana/shared

Write the mounts into the configuration file so they survive a reboot (on every node). Edit the file with vi /etc/fstab and append the following lines at the end:

/dev/mapper/hana-8.0T /saphana/data   ocfs2 _netdev 0 0
/dev/mapper/hana-2.0T /saphana/log    ocfs2 _netdev 0 0
/dev/mapper/hana-500G /saphana/shared ocfs2 _netdev 0 0

After saving, reboot the server; if df -h then shows the same three mounts as above, the configuration is persistent.
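
Before rebooting, the new fstab entries can also be validated in place (a sketch): unmount the three file systems and let mount -a remount everything listed in /etc/fstab.

umount /saphana/data /saphana/log /saphana/shared
mount -a                     # remounts every fstab entry that is not currently mounted
df -h | grep saphana         # all three OCFS2 mounts should reappear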

3.6 Restart the related services

Restart the services (on every node):

[root@linux001 ~]# service ocfs2 restart
[root@linux001 ~]# service o2cb restart
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Enable both services at boot (on every node):

[root@linux001 ~]# chkconfig --level 235 o2cb on
[root@linux001 ~]# chkconfig --level 235 ocfs2 on

3.7 檢視服務狀态

Check the o2cb service (on every node):

[root@linux001 ~]# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: active

Check the ocfs2 service (on every node):

[root@linux001 ~]# service ocfs2 status
Configured OCFS2 mountpoints: /saphana/data /saphana/log /saphana/shared
Active OCFS2 mountpoints: /saphana/data /saphana/log /saphana/shared

4 Testing

4.1 Create a file and test read/write

The test is simple: on any one server, for example linux001, create a file named linux under /saphana/data and write some content into it. If the other servers can see the linux file under /saphana/data and can read and write it, the shared file system is working.
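
A scripted version of the same test (a sketch, run from linux001, relying on the SSH trust from section 1.2):

echo "written from linux001" > /saphana/data/linux
for h in linux002 linux003 linux004; do
    # every node should see the file and be able to append to it
    ssh root@"$h" 'cat /saphana/data/linux && echo "appended from $(hostname)" >> /saphana/data/linux'
done
cat /saphana/data/linux    # should now contain one line per node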

5 Common errors

[root@linux001 ~]# mount -t ocfs2 -o nointr /dev/mapper/hana-500G /saphana/shared
mount.ocfs2: Device or resource busy while mounting /dev/mapper/hana-500G on /saphana/shared.
Check 'dmesg' for more information on this error.

The device is probably already mounted; unmount it and then mount it again:

[root@linux001 ~]# umount /dev/mapper/hana-500G
[root@linux001 ~]# umount /saphana/shared

[root@linux001 ~]# mount -t ocfs2 -o nointr /dev/mapper/hana-500G /saphana/shared
mount.ocfs2: Unable to access cluster service while trying to join the group

df showed the device had not been mounted; restarting the node's ocfs2 service and mounting again resolved it.

mount -t ocfs2 -o nointr /dev/mapper/hana-500G /saphana/shared
ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"

This error is caused by an OCFS2 partition that has not been formatted; the partition must be formatted before the ocfs2 file system can be mounted.
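
For the last error the fix is simply to format the volume before mounting it (a sketch using the device names from this guide; mkfs.ocfs2 destroys existing data, so run it only on a volume that is genuinely unformatted):

dmesg | tail                          # confirm the superblock complaint in the kernel log
mkfs.ocfs2 /dev/mapper/hana-500G      # WARNING: wipes the volume
mount -t ocfs2 -o nointr /dev/mapper/hana-500G /saphana/shared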
