
RedHat 6 Cluster Fence [8]


Lab objective:

Understand the theory behind cluster fencing and how to configure it.

The Fence concept:

In an HA cluster environment, the standby server B sends heartbeat packets to check whether the primary server A is still alive. Suppose server A is handling a huge number of client requests, its CPU load reaches 100%, and its resources are exhausted, so it cannot reply to B's heartbeat packets (or the replies are delayed). Server B then concludes that A is dead, seizes the resources, and makes itself the primary. After a while server A recovers; A still considers itself the primary, B also considers itself the primary, and the two fight over the resources. The cluster resources end up held by more than one node, both servers write to them at the same time, and data safety and consistency are destroyed. This situation is called "split-brain". With a Fence mechanism in place, when server A is overloaded and stops responding, Fence automatically fences server A off, preventing split-brain.

Fence types:

Hardware fence: a power fence, which removes the failed server by cutting its power.

Software fence: a fence card (smart card), which removes the failed server through cabling and software.

In real environments the fence card is connected over a dedicated link using a dedicated fence NIC, so it does not take up the data network; this gives better stability and reliability.

The fence card's IP network and the cluster network are interdependent.

Lab steps:

Because of practical limitations we have no real fence card, so we use the fence service to emulate one.

Install the fence service on the iSCSI server and treat that server as the fence card.

To set up the fence card you only need to install two packages: fence-virtd-libvirt.x86_64 and fence-virtd-multicast.x86_64.

[root@node1 ~]# yum list  |  grep fence

Repository ResilientStorage is listed more than once in the configuration

Repository ScalableFileSystem is listed more than once in the configuration

fence-agents.x86_64                        3.1.5-10.el6                @Cluster

fence-virt.x86_64                          0.2.3-5.el6                 @Cluster

fence-virtd.x86_64                         0.2.3-5.el6                 base    

fence-virtd-checkpoint.x86_64              0.2.3-5.el6                 Cluster 

fence-virtd-libvirt.x86_64                 0.2.3-5.el6                 base    

fence-virtd-libvirt-qpid.x86_64            0.2.3-5.el6                 optional

fence-virtd-multicast.x86_64               0.2.3-5.el6                 base    

fence-virtd-serial.x86_64                  0.2.3-5.el6                 base  

[root@node1 ~]# yum  -y  install fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64
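
A quick way to confirm that the packages actually installed (an optional check, not part of the original steps):

[root@node1 ~]# rpm -q fence-virtd fence-virtd-libvirt fence-virtd-multicast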

[root@desktop24 ~]# mkdir /etc/cluster

[root@desktop24 ~]# cd /etc/cluster

[root@desktop24 cluster]# dd if=/dev/random of=/etc/cluster/fence_xvm.key bs=4k count=1   // create the fence key file so the node servers and the fence device can recognize each other

0+1 records in

0+1 records out

73 bytes (73 B) copied, 0.000315063 s, 232 kB/s

random: the kernel random number generator; the key it produces is unique

fence_xvm.key: the fence key file. Do not change the name; fence_xvm is the type of fence agent being used
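
Note: /dev/random can return fewer bytes than requested when the kernel entropy pool is low, which is why only 73 bytes were written above. If you want a full-size key, a commonly used alternative (a suggestion, not part of the original steps) is to read from /dev/urandom instead:

[root@desktop24 cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1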

Distribute the fence key file to each node server:

[root@desktop24 cluster]# scp /etc/cluster/fence_xvm.key  172.17.0.1:/etc/cluster/fence_xvm.key

root@172.17.0.1's password:

fence_xvm.key                                                             100%   73     0.1KB/s  00:00   

[root@desktop24 cluster]# scp  /etc/cluster/fence_xvm.key   172.17.0.2:/etc/cluster/fence_xvm.key

root@172.17.0.2's password:

fence_xvm.key                                                             100%   73     0.1KB/s  00:00
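
To make sure every machine ended up with an identical key, compare checksums on the fence host and on each node (an optional sanity check added here):

[root@desktop24 cluster]# md5sum /etc/cluster/fence_xvm.key

[root@node1 ~]# md5sum /etc/cluster/fence_xvm.key   // repeat on node2; all three sums should match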

Configure the fence service through its configuration file, /etc/fence_virt.conf:

[root@desktop24 cluster]# fence_virtd   -c

Module search path [/usr/lib64/fence-virt]:

// Module search path; press Enter to accept the default.

Available backends:

   libvirt 0.1

Available listeners:

   multicast 1.1

Listener modules are responsible for accepting requests

from fencing clients.

Listener module [multicast]:

// The multicast listener module; press Enter to accept the default.

The multicast listener module is designed for use environments

where the guests and hosts may communicate over a network using

multicast.

The multicast address is the address that a client will use to

send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

// Multicast IP address; press Enter to accept the default, 225.0.0.12.

Using ipv4 as family.

Multicast IP Port [1229]:

// Multicast IP port; press Enter to accept the default, 1229.

Setting a preferred interface causes fence_virtd to listen only

on that interface.  Normally, it listens on all interfaces.

In environments where the virtual machines are using the host

machine as a gateway, this *must* be set (typically to virbr0).

Set to 'none' for no interface.

Interface [none]: private

// The interface the fence device uses to talk to the cluster network (the fence device's dedicated interface is on the same segment as the cluster network); here we enter private.

The key file is the shared key information which is used to

authenticate fencing requests.  The contents of this file must

be distributed to each physical host and virtual machine within

a cluster.

Key File [/etc/cluster/fence_xvm.key]:

// The shared key file; press Enter to accept the default, /etc/cluster/fence_xvm.key.

Backend modules are responsible for routing requests to

the appropriate hypervisor or management layer.

Backend module [checkpoint]: libvirt

// The backend module to use; enter libvirt.

The libvirt backend module is designed for single desktops or

servers.  Do not use in environments where virtual machines

may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===

backends {
	libvirt {
		uri = "qemu:///system";
	}

}

listeners {
	multicast {
		interface = "eth0";
		port = "1229";
		family = "ipv4";
		address = "225.0.0.12";
		key_file = "/etc/cluster/fence_xvm.key";
	}

}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

// Enter y to save the configuration.

Start the fence service:

[root@desktop24 cluster]# /etc/init.d/fence_virtd  restart

Stopping fence_virtd:                                      [FAILED]

Starting fence_virtd:                                      [  OK  ]

[root@desktop24 cluster]# chkconfig  fence_virtd on
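
Before wiring the device into the cluster, it is worth verifying from a cluster node that it can reach fence_virtd over multicast; fence_xvm -o list asks the fence host to list the virtual machines it knows about (this check is an addition, and assumes the fence-virt package and the key file are already present on the node):

[root@node1 ~]# fence_xvm -o list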

Add the fence device to the cluster:

Under Fence Devices, click Add to add the (virtual) fence device.

[Screenshot]

In [Select a fence device], choose [Fence virt (Multicast Mode)] and enter a name for this fence device.

[Screenshot]

虛拟的Fence裝置添加完成後,界面如下:

[Screenshot]

Add the fence device to each node:

Add the fence device to Node1:

[Screenshot]

Select [Nodes], open the Node1 node's page, and find the spot shown below to add a fence method.

[Screenshot]

Name it however you like, as long as it is recognizable.

[Screenshot]

Then click [Add Fence Instance] to add a fence instance.

[Screenshot]

Select the Fence1 fence device; the domain here refers to the node1 part of node1.private.cluster0.example.com.

[Screenshots]

Add the fence device to Node2:

Switch the interface to node 2.

[Screenshot]

Find the spot shown below and click [Add Fence Method].

[Screenshot]

Give the fence method a name.

[Screenshot]

Click [Add Fence Instance].

[Screenshot]

Select the Fence1 fence device; the domain here refers to the node2 part of node2.private.cluster0.example.com.

[Screenshots]
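
Once the fence device and the per-node fence methods have been saved, the generated cluster configuration on the nodes should contain entries roughly like the following (a sketch based on the names used above; the cluster name, config_version and method names are assumptions and will differ in your setup):

[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="2" name="cluster0">
    <clusternodes>
        <clusternode name="node1.private.cluster0.example.com" nodeid="1">
            <fence>
                <method name="Method1">
                    <device domain="node1" name="Fence1"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="node2.private.cluster0.example.com" nodeid="2">
            <fence>
                <method name="Method2">
                    <device domain="node2" name="Fence1"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="Fence1"/>
    </fencedevices>
</cluster>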

Manually fencing a node:

[root@node2 ~]# fence_node  node1
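
fence_node asks the cluster's fence daemon to fence node1 using the method configured above. The same action can also be triggered directly against fence_virtd with fence_xvm (an equivalent manual test, assuming the multicast settings and key file shown earlier):

[root@node2 ~]# fence_xvm -H node1 -o reboot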
