Preface
Because material on this topic is hard to find, I pieced this guide together from many blogs and tutorials. It records what I learned while building an HA MFS cluster, and I hope it helps others with the same need.
Introduction
This article describes how to build a highly available (HA) MooseFS (MFS) file system with Pacemaker and Corosync.
- Steps
(1) Preparation
Five virtual machines; this article uses CentOS 7 as the OS. Prepare a virtual IP (VIP), 10.119.119.148, as the address through which the cluster service is reached.
No. | hostname | IP |
1 | etnode1 | 10.119.119.141 |
2 | etnode2 | 10.119.119.142 |
3 | etchunknode1 | 10.119.119.143 |
4 | etchunknode2 | 10.119.119.144 |
5 | etclientnode1 | 10.119.119.145 |
- Install moosefs-master on etnode1 and etnode2 and configure it for high availability.
- Install moosefs-chunkserver on etchunknode1 and etchunknode2 to store the file data.
- etclientnode1 is the client that mounts the distributed file system.
In addition, set aside a separate physical disk or logical volume for DRBD to replicate the MooseFS metadata.
Link to the official MooseFS installation documentation:
The /etc/hosts file looks like this:
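The original showed the file only as a screenshot. Based on the host table above, every node's /etc/hosts would plausibly look like the sketch below; the mfsmaster line pointing at the VIP is my assumption, consistent with how the chunkservers and client address the master in steps 12 and 13:

```
10.119.119.141  etnode1
10.119.119.142  etnode2
10.119.119.143  etchunknode1
10.119.119.144  etchunknode2
10.119.119.145  etclientnode1
10.119.119.148  mfsmaster
```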
(2) Install the Pacemaker cluster software
On etnode1 and etnode2, install the Pacemaker packages:
# yum install -y pacemaker pcs psmisc policycoreutils-python
Note: this tutorial manages the cluster with pcs; tools such as crmsh also work, though the syntax differs.
(3) Cluster firewall configuration
On etnode1 and etnode2:
# firewall-cmd --permanent --add-service=high-availability
success
# firewall-cmd --reload
success
Note: if you use iptables instead, open TCP ports 2224, 3121 and 21064 and UDP port 5405; alternatively, disable the firewall entirely (not recommended).
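If you do go the iptables route, the note above translates to roughly the rules below; the exact rule placement is an assumption, so adapt it to your existing chain layout:

```shell
# pcsd (2224), pacemaker_remote (3121), dlm (21064) over TCP; corosync (5405) over UDP
iptables -A INPUT -p tcp -m multiport --dports 2224,3121,21064 -j ACCEPT
iptables -A INPUT -p udp --dport 5405 -j ACCEPT
```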
(4) Start the pcs daemon at boot
On etnode1 and etnode2:
# systemctl start pcsd //start the pcs daemon
# systemctl enable pcsd //enable it at boot
Note: the installed packages create a passwordless user named hacluster. This account must authenticate with a password when synchronizing the Corosync configuration or starting/stopping the cluster on other nodes.
Set a password for the hacluster user:
# passwd hacluster
(5) Configure Corosync
- Run pcs cluster auth on either etnode1 or etnode2:
# pcs cluster auth etnode1 etnode2
Username: hacluster
Password:
etnode1: Authorized
etnode2: Authorized
- On the same node, generate and synchronize the Corosync configuration with pcs cluster setup:
# pcs cluster setup --name etcluster etnode1 etnode2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
etnode1: Succeeded
etnode2: Succeeded
(6) Start and verify the cluster
# pcs cluster start --all
etnode1: Starting cluster...
etnode2: Starting cluster...
(7) Verify the Corosync installation
# corosync-cfgtool -s
Then check the membership and quorum APIs:
# corosync-cmapctl | grep members
Check the cluster status:
# pcs status
(8) Add the ClusterIP resource (the VIP mentioned earlier)
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.119.119.148 cidr_netmask=32 op monitor interval=30s
# pcs resource //list the resources
(9) Replicated storage with DRBD
On etnode1 and etnode2:
- Install DRBD
# yum install -y kmod-drbd84 drbd-utils
Note: drbd-utils has a bug on CentOS 7.1 that can be fixed with a script.
Allow drbd through SELinux:
# semanage permissive -a drbd_t
Open port 7789 in the firewall for the peer host. On etnode1:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.119.119.142" port port=7789 protocol="tcp" accept'
# firewall-cmd --reload
On etnode2:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.119.119.141" port port=7789 protocol="tcp" accept'
# firewall-cmd --reload
- Allocate disk space for DRBD
DRBD needs its own block device on each node; this can be a physical disk partition or a logical volume.
# lvcreate --name drbd-et-LV --size 8G etdataVG
The physical volume, volume group and logical volume layout looks like this:
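The screenshot with the PV/VG/LV details is not reproduced here. For completeness, a sketch of the LVM preparation that the lvcreate above assumes, where /dev/sdb stands in for the spare disk mentioned in step 1:

```shell
pvcreate /dev/sdb            # turn the spare disk into a physical volume
vgcreate etdataVG /dev/sdb   # create the volume group referenced above
pvs && vgs && lvs            # inspect the resulting layout
```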
- Configure DRBD
# cat <<END >/etc/drbd.d/mfsdata.res
resource mfsdata {
protocol C;
meta-disk internal;
device /dev/drbd1;
syncer {
verify-alg sha1;
}
net {
allow-two-primaries;
}
on etnode1 {
disk /dev/etdataVG/drbd-et-LV;
address 10.119.119.141:7789;
}
on etnode2 {
disk /dev/etdataVG/drbd-et-LV;
address 10.119.119.142:7789;
}
}
END
- Initialize DRBD
# drbdadm create-md mfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# modprobe drbd
# drbdadm up mfsdata
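Note that step 11 later mounts /dev/drbd1 as xfs, so the device needs an initial sync and a filesystem first; the original does not show this step. A sketch, to be run on one node only:

```shell
drbdadm primary --force mfsdata   # promote this node and kick off the initial sync
mkfs.xfs /dev/drbd1               # create the filesystem that the cluster will mount
```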
- Cluster configuration for the DRBD device
# pcs cluster cib drbd_cfg
# pcs -f drbd_cfg resource create MFSData ocf:linbit:drbd drbd_resource=mfsdata op monitor interval=60s
# pcs -f drbd_cfg resource master MFSDataClone MFSData master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
# pcs -f drbd_cfg resource show //inspect the output to confirm the configuration is correct
# pcs cluster cib-push drbd_cfg //push the changes to the CIB (i.e. to the whole cluster)
CIB updated
(10) Install moosefs-master and moosefs-cgi
On etnode1 and etnode2:
# curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS
# curl "http://ppa.moosefs.com/MooseFS-3-el7.repo" > /etc/yum.repos.d/MooseFS.repo
# yum update
# yum install moosefs-master //install mfsmaster
# systemctl disable moosefs-master.service //do not start it at boot; the cluster will manage it
Configure mfsmaster and create /mfsdata, the directory where MooseFS keeps its metadata:
# mkdir /mfsdata
# vim /etc/mfs/mfsmaster.cfg
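The original does not show the contents of the edit. The essential change is to point the master's data directory at the DRBD-backed /mfsdata; a minimal sketch of the relevant line in /etc/mfs/mfsmaster.cfg, with everything else left at its defaults:

```
# keep the metadata on the DRBD-replicated filesystem
DATA_PATH = /mfsdata
```

The /mfsdata directory generally also needs to be owned by the mfs user so the daemon can write to it.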
Install the MooseFS CGI components:
# yum install moosefs-cgi
# yum install moosefs-cgiserv
# yum install moosefs-cli
Open the firewall:
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.119.119.0/24" port port="9419-9421" protocol="tcp" accept'
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" port port="9425" protocol="tcp" accept'
Write a service script, save it as /etc/init.d/mfsmasterctl with permissions 755; the script is as follows:
#!/bin/sh
#
# mfsmasterctl - this script starts and stops the mfs metaserver daemon
#
# chkconfig: 345 91 10
# description: startup script for mfs metaserver
case "$1" in
start)
mfsmaster start && mfscgiserv start
;;
stop)
mfsmaster stop && mfscgiserv stop
;;
restart)
mfsmaster restart && mfscgiserv restart
;;
status)
mfsmaster test && mfscgiserv test
;;
*)
echo $"Usage: $0 start|stop|restart|status"
exit 1
esac
exit 0
As shown in the figure below:
Create a resource for the mfsmaster service:
# pcs resource create EtMfsMaster lsb:mfsmasterctl op monitor interval=20s timeout=15s on-fail=restart
(11) Configure the cluster file system that stores the MFS metadata:
# pcs cluster cib fs_cfg
# pcs -f fs_cfg resource create ETMFS Filesystem device="/dev/drbd1" directory="/mfsdata" fstype="xfs"
# pcs -f fs_cfg constraint colocation add ETMFS with MFSDataClone INFINITY with-rsc-role=Master
# pcs -f fs_cfg constraint order promote MFSDataClone then start ETMFS
# pcs -f fs_cfg constraint colocation add EtMfsMaster with ETMFS INFINITY
# pcs -f fs_cfg constraint order ETMFS then EtMfsMaster
# pcs -f fs_cfg constraint //list the configured constraints
Push the configuration to the whole cluster:
# pcs cluster cib-push fs_cfg
(12) Install the chunkserver on etchunknode1 and etchunknode2
On each chunkserver, configure the MFS master VIP (the hostname mfsmaster resolves to 10.119.119.148):
# yum install moosefs-chunkserver
# vim /etc/mfs/mfschunkserver.cfg
# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.119.119.0/24" port port="9422" protocol="tcp" accept'
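The original likewise omits the chunkserver configuration itself. Assuming the mfsmaster hostname resolves to the VIP as described above, the relevant line in /etc/mfs/mfschunkserver.cfg would be roughly:

```
# reach the master through the cluster VIP via the mfsmaster hostname
MASTER_HOST = mfsmaster
```

The directories that actually hold the chunks are listed separately, one per line, in /etc/mfs/mfshdd.cfg.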
(13) Install the client on etclientnode1
# yum install moosefs-client
# mkdir -p /mnt/mfs
# mfsmount /mnt/mfs/ -H mfsmaster
(14) Start the highly available MFS
Start the chunkservers:
Mount MFS on etclientnode1:
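The commands for this final step appeared only as screenshots in the original; they were presumably along these lines:

```shell
# on etchunknode1 and etchunknode2: start the chunkserver daemons
systemctl start moosefs-chunkserver

# on etclientnode1: mount the distributed file system, as in step 13
mkdir -p /mnt/mfs
mfsmount /mnt/mfs/ -H mfsmaster
```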
The cluster setup is now complete.