
Building a High-Availability Cluster with DRBD + MFS + Pacemaker + Corosync

I. Software Overview

DRBD is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers.

MooseFS is a fault-tolerant, networked, distributed file system. It spreads data across multiple physical servers while presenting a single unified resource to the user.

MooseFS (MFS) is well suited to storing huge numbers of small files and supports FUSE. (It is quite widely used by companies in China.)

Pacemaker is a cluster resource manager. It uses the messaging and membership capabilities provided by the cluster infrastructure (Corosync) to detect and recover from node-level and resource-level failures, maximizing the availability of cluster services.

Importantly, Pacemaker is not a fork of Heartbeat, although many people seem to hold that misconception. Pacemaker is the continuation of the CRM project (also known as the v2 resource manager), which was originally developed for Heartbeat but has since become an independent project.

Corosync is part of the cluster management suite; the way messages are passed between nodes, and the protocol used, are defined through a simple configuration file.

crmsh is a command-line cluster configuration and management tool. Its goal is to assist as much as possible with configuring and maintaining Pacemaker-based high-availability clusters. Most importantly, crm provides an interactive shell, which makes troubleshooting much easier.

II. Requirements

1. Eliminate the mfs-master single point of failure.

2. Provide distributed storage for back ends such as image servers; it can also be used by other clusters.

3. Provide primary/secondary disaster-recovery replication with DRBD.

4. Provide heartbeat detection.

5. Monitor services (resources) and fail them over.

6. Build a high-availability cluster.

III. Platform Environment

OS:CentOS Linux release 7.3.1611 (Core)

kernel:3.10.0-514.el7.x86_64

The network plan is shown in the table below:

Name                   hostname   IP
VIP (floating)         -          192.168.40.200
mfs-master1 (drbd)     node4      192.168.40.131
mfs-master2 (drbd)     node5      192.168.40.132
mfs-metalogger server  node6      192.168.40.133
mfs-chunk server1      node7      192.168.40.134
mfs-chunk server2      node8      192.168.40.135
client                 node1      192.168.40.128

Network topology: (diagram not reproduced)

IV. Environment Preparation

1. Edit the hosts file so the nodes can reach one another by name.

[root@node4 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.40.131 node4
192.168.40.132 node5
192.168.40.133 node6
192.168.40.134 node7
192.168.40.135 node8
           

2. Synchronize the clocks

ntpdate cn.pool.ntp.org
           
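A one-shot ntpdate only corrects the clock once; to keep the nodes in sync over time you could add a cron entry like the one below (the 30-minute interval is an arbitrary illustrative choice):

# in root's crontab (crontab -e) on every node:
*/30 * * * * /usr/sbin/ntpdate cn.pool.ntp.org >/dev/null 2>&1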

3. Set up SSH mutual trust among the MFS servers (5 servers in total) with the following commands:

ssh-keygen
ssh-copy-id <node>    # each node's hostname or IP
           
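A minimal sketch of doing this from node4, assuming the key pair already exists and the hostnames above resolve:

for n in node5 node6 node7 node8; do
    ssh-copy-id "$n"    # enter each node's root password once
done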

V. Installing the Software

1. Install DRBD


On node4 and node5, first create a partition, but do not format it yet. For example:

[root@node4 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa56b82b0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux
           
[root@node5 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x066827f3

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   83  Linux
           

Add the ELRepo repository and install DRBD:

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum install -y kmod-drbd84 drbd84-utils
           

Configure the DRBD global configuration file:

[root@node4 mfs]# cat /etc/drbd.d/global_common.conf 
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com

global {
	usage-count no;

	# Decide what kind of udev symlinks you want for "implicit" volumes
	# (those without explicit volume <vnr> {} block, implied vnr=0):
	# /dev/drbd/by-resource/<resource>/<vnr>   (explicit volumes)
	# /dev/drbd/by-resource/<resource>         (default for implict)
	udev-always-use-vnr; # treat implicit the same as explicit volumes

	# minor-count dialog-refresh disable-ip-verification
	# cmd-timeout-short 5; cmd-timeout-medium 121; cmd-timeout-long 600;
}

common {
	protocol C;
	handlers {
		# These are EXAMPLE handlers only.
		# They may have severe implications,
		# like hard resetting the node under certain circumstances.
		# Be careful when choosing your poison.

		pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
		pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
		local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
		# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
		# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
		# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
		# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
		# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
		# quorum-lost "/usr/lib/drbd/notify-quorum-lost.sh root";
	}

	startup {
		# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
	}

	options {
		# cpu-mask on-no-data-accessible

		# RECOMMENDED for three or more storage nodes with DRBD 9:
		# quorum majority;
		# on-no-quorum suspend-io | io-error;
	}

	disk {
		on-io-error detach;
		# size on-io-error fencing disk-barrier disk-flushes
		# disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
	}

	net {
		# protocol timeout max-epoch-size max-buffers
		# connect-int ping-int sndbuf-size rcvbuf-size ko-count
		# allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
		# after-sb-1pri after-sb-2pri always-asbp rr-conflict
		# ping-timeout data-integrity-alg tcp-cork on-congestion
		# congestion-fill congestion-extents csums-alg verify-alg
		# use-rle
	}
	syncer {
	    rate 1024M;
	}
}
           
/etc/drbd.conf                  # main configuration file
/etc/drbd.d/global_common.conf  # global configuration file
           
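For reference, the main file normally does nothing except pull in the other two; with the ELRepo packages its stock content should look like this:

# /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
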
[root@node4 mfs]# cat /etc/drbd.d/mfs.res 
resource mfs {
    protocol C;
    meta-disk internal;
    device /dev/drbd1;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    on node4 {
        disk /dev/sdb1;
        address 192.168.40.131:7789;
    }
    on node5 {
        disk /dev/sdb1;
        address 192.168.40.132:7789;
    }
}
           

Copy the configuration files to the peer node:

[root@node4 ~]# scp -rp /etc/drbd.d/* node5:/etc/drbd.d/
global_common.conf                                               100% 2618     2.6KB/s   00:00    
mfs.res                                                          100%  248     0.2KB/s   00:00    
           

Initialize the metadata on node4. (Note: I hit an error at this step; if you do too, here or in any other configuration step, see the troubleshooting link at the end of this article.)

[root@node4 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
           

Check that the kernel has loaded the module:

[root@node4 ~]# modprobe drbd
[root@node4 ~]# lsmod | grep drbd
drbd                  396875  0 
libcrc32c              12644  4 xfs,drbd,nf_nat,nf_conntrack
           

Bring up the mfs resource and force it primary:

[root@node4 ~]# drbdadm up mfs
[root@node4 ~]# drbdadm --force primary mfs
[root@node4 ~]# cat /proc/drbd 
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by [email protected], 2017-09-15 14:23:22

 1: cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown C r----s
    ns:0 nr:0 dw:0 dr:912 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5241660
           

Run the following on node5:

[root@node5 ~]# drbdadm create-md mfs
initializing activity log
initializing bitmap (160 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@node5 ~]# modprobe drbd
[root@node5 ~]# drbdadm up mfs
           

Then check the data synchronization status (it takes a while to finish):

[root@node5 ~]# cat /proc/drbd
version: 8.4.10-1 (api:1/proto:86-101)
GIT-hash: a4d5de01fffd7e4cde48a080e2c686f9e8cebf4c build by [email protected], 2017-09-15 14:23:22

 1: cs:SyncTarget ro:Secondary/Primary ds:Inconsistent/UpToDate C r-----
    ns:0 nr:151552 dw:151552 dr:0 al:8 bm:0 lo:1 pe:3 ua:0 ap:0 ep:1 wo:f oos:5090108
	[>....................] sync'ed:  3.0% (4968/5116)M
	finish: 0:03:54 speed: 21,648 (21,648) want: 36,640 K/sec
           
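To watch the synchronization converge, you can simply poll /proc/drbd:

watch -n 2 cat /proc/drbd    # Ctrl-C to exit once ds: shows UpToDate/UpToDate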

Format the device and mount it to see whether it works (on the primary, node4):

[root@node4 ~]# mkfs.xfs -f /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=327604 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310415, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
           
[root@node4 ~]# mount /dev/drbd1 /mnt
[root@node4 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   17G  2.1G   15G  13% /
devtmpfs             478M     0  478M   0% /dev
tmpfs                489M     0  489M   0% /dev/shm
tmpfs                489M  6.6M  482M   2% /run
tmpfs                489M     0  489M   0% /sys/fs/cgroup
/dev/sda1           1014M  167M  848M  17% /boot
tmpfs                 98M     0   98M   0% /run/user/0
/dev/drbd1           5.0G   33M  5.0G   1% /mnt
           

This proves DRBD works. Unmount it:

[root@node4 ~]# umount /mnt
           

2. Build and install MFS

This deployment uses MooseFS, which is currently the mainstream choice in China.

The version tested here is moosefs-3.0.96.

Install the build dependencies:

yum install zlib-devel gcc -y
           

Upload moosefs-3.0.96 (i.e., v3.0.96.tar.gz), or download it from GitHub:

wget https://github.com/moosefs/moosefs/archive/v3.0.96.tar.gz

crmsh will also be needed later.

[root@node4 src]# ls
crmsh-2.3.2.tar  v3.0.96.tar.gz
[root@node4 src]# scp v3.0.96.tar.gz node5:/usr/local/src/
v3.0.96.tar.gz                                                   100% 1092KB   1.1MB/s   00:00    
[root@node4 src]# scp v3.0.96.tar.gz node6:/usr/local/src/
v3.0.96.tar.gz                                                   100% 1092KB   1.1MB/s   00:00    
[root@node4 src]# scp v3.0.96.tar.gz node7:/usr/local/src/
v3.0.96.tar.gz                                                   100% 1092KB   1.1MB/s   00:00    
[root@node4 src]# scp v3.0.96.tar.gz node8:/usr/local/src/
v3.0.96.tar.gz                                                   100% 1092KB   1.1MB/s   00:00    
[root@node4 src]# scp v3.0.96.tar.gz node1:/usr/local/src/
           

Create the mfs user. Make sure the mfs UID is identical on every MFS server; otherwise it will fail.

[root@node4 src]# useradd -u 1000 mfs
[root@node5 src]# useradd -u 1000 mfs
[root@node6 src]# useradd -u 1000 mfs
[root@node7 src]# useradd -u 1000 mfs
[root@node8 src]# useradd -u 1000 mfs
           
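A quick way to confirm the UID really is identical everywhere is to query each node over the SSH trust set up earlier:

for n in node4 node5 node6 node7 node8; do
    ssh "$n" 'hostname; id mfs'    # every node should report uid=1000(mfs)
done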

Mount /dev/drbd1 at /usr/local/mfs (MFS only needs to be installed on one mfs-master, since the DRBD device holds the installation):

[root@node4 src]# drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

 1:mfs/0  Connected Primary/Secondary UpToDate/UpToDate 
[root@node4 src]# mkdir /usr/local/mfs
[root@node4 src]# chown -R mfs:mfs /usr/local/mfs
[root@node4 src]# mount /dev/drbd1 /usr/local/mfs
           

Build moosefs-3.0.96:

[root@node4 src]# tar -xf v3.0.96.tar.gz 
[root@node4 src]# cd /usr/local/src/moosefs-3.0.96/
[root@node4 moosefs-3.0.96]#  ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
           

Since node4 is the mfs-master, the chunkserver and client-mount components are not needed, which is why --disable-mfschunkserver and --disable-mfsmount are passed.

[root@node4 moosefs-3.0.96]# make && make install
           

Configure the master:

[root@node4 mfs]# pwd
/usr/local/mfs/etc/mfs
[root@node4 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@node4 mfs]# cp mfsexports.cfg.sample mfsexports.cfg
[root@node4 mfs]# cp mfsmaster.cfg.sample mfsmaster.cfg
           

mfsmaster.cfg ships with the official defaults and can be used as-is.

The file that does need editing is mfsexports.cfg, to add the mount permissions and password (just append at the end):

vim mfsexports.cfg

*            /        rw,alldirs,mapall=mfs:mfs,password=aizhen

*            .          rw 
           
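Each export line has the form <client IP or range> <exported path> <options>. The * allows any client; a tighter rule limited to the lab subnet would look like this (illustrative only, not used in this setup):

192.168.40.0/24    /    rw,alldirs,mapall=mfs:mfs,password=aizhen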

The metadata file ships as an empty template (metadata.mfs.empty) and has to be activated by hand:

[root@node4 mfs]# cp /usr/local/mfs/var/mfs/metadata.mfs.empty /usr/local/mfs/var/mfs/metadata.mfs
           

Start the master:

[root@node4 mfs]# /usr/local/mfs/sbin/mfsmaster start
open files limit has been set to: 16384
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
exports file has been loaded
mfstopology configuration file (/usr/local/mfs/etc/mfstopology.cfg) not found - using defaults
loading metadata ...
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly
           

Now write a systemd unit for mfsmaster so it can be managed with systemctl.

[root@node4 mfs]# cat /usr/lib/systemd/system/mfsmaster.service 
[Unit]
Description=mfs
After=network.target
  
[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmaster start
ExecStop=/usr/local/mfs/sbin/mfsmaster stop
PrivateTmp=true
  
[Install]
WantedBy=multi-user.target

[root@node4 mfs]# chmod 775 /usr/lib/systemd/system/mfsmaster.service 
           

Test whether the mfsmaster.service unit works:

[root@node4 mfs]# systemctl start mfsmaster.service
[root@node4 mfs]# systemctl status mfsmaster.service
● mfsmaster.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmaster.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 13:04:40 EDT; 7s ago
  Process: 7555 ExecStart=/usr/local/mfs/sbin/mfsmaster start (code=exited, status=0/SUCCESS)
 Main PID: 7557 (mfsmaster)
   CGroup: /system.slice/mfsmaster.service
           └─7557 /usr/local/mfs/sbin/mfsmaster start
           
[root@node4 mfs]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      7557/mfsmaster      
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      7557/mfsmaster      
tcp        0      0 0.0.0.0:9421            0.0.0.0:*               LISTEN      7557/mfsmaster     
           

Enable mfsmaster at boot:

[root@node4 mfs]# systemctl enable mfsmaster
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmaster.service to /usr/lib/systemd/system/mfsmaster.service.
           

Copy mfsmaster.service to node5:

[root@node4 mfs]# scp /usr/lib/systemd/system/mfsmaster.service node5:/usr/lib/systemd/system/
mfsmaster.service                                                100%  217     0.2KB/s   00:00    
           

Enable mfsmaster at boot on node5 as well:

[root@node5 ~]# systemctl enable mfsmaster
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmaster.service to /usr/lib/systemd/system/mfsmaster.service.
           

Create the required directory on node5:

[root@node5 ~]# mkdir /usr/local/mfs
[root@node5 ~]# chown -R mfs:mfs /usr/local/mfs
           

Now test whether DRBD can switch between primary and secondary normally, and whether mfsmaster fails over (to node5).

On node4:

[root@node4 mfs]# systemctl stop mfsmaster
[root@node4 mfs]# cd 
[root@node4 ~]# umount /usr/local/mfs
[root@node4 ~]# drbdadm secondary mfs
           

On node5 (the directory was already created above):

[root@node5 ~]# drbdadm primary mfs
[root@node5 ~]# mount /dev/drbd1 /usr/local/mfs
           
[root@node5 ~]# systemctl start mfsmaster
[root@node5 ~]# systemctl status mfsmaster
● mfsmaster.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmaster.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 13:12:01 EDT; 13s ago
  Process: 1646 ExecStart=/usr/local/mfs/sbin/mfsmaster start (code=exited, status=0/SUCCESS)
 Main PID: 1648 (mfsmaster)
   CGroup: /system.slice/mfsmaster.service
           └─1648 /usr/local/mfs/sbin/mfsmaster start
           
[root@node5 ~]# netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:9419            0.0.0.0:*               LISTEN      1648/mfsmaster      
tcp        0      0 0.0.0.0:9420            0.0.0.0:*               LISTEN      1648/mfsmaster     
           

This shows the failover test passed.

Install the metalogger server on node6

A quick note: the Metalogger Server is the backup server for the Master Server, so its installation steps are identical to the master's, and it is best run on hardware configured like the master's. That way, if the master fails, we only need to replay the backed-up changelogs into the metadata file and the backup machine can take over from the failed master directly. Whether to add this server is a cost decision. A recovery sketch follows.
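A rough sketch of that recovery on the metalogger host; the file names below assume the default layout used in this article, so double-check them against your var/mfs directory before relying on this:

cd /usr/local/mfs/var/mfs
# the metalogger keeps copies named *_ml.*; rename them for the master:
cp metadata_ml.mfs.back metadata.mfs.back
for f in changelog_ml.*.mfs; do cp "$f" "${f/_ml/}"; done
# rebuild the metadata from the changelogs and start serving:
/usr/local/mfs/sbin/mfsmaster -a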

Build and install the metalogger server:

[root@node6 src]# tar -xf v3.0.96.tar.gz 
[root@node6 src]# cd moosefs-3.0.96/
[root@node6 moosefs-3.0.96]#  ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs  --disable-mfschunkserver --disable-mfsmount
           
[root@node6 moosefs-3.0.96]# make && make install
           

Configure the metalogger server:

[root@node6 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
[root@node6 mfs]# ls
mfsexports.cfg.sample  mfsmaster.cfg.sample  mfsmetalogger.cfg.sample  mfstopology.cfg.sample
[root@node6 mfs]# cp mfsmetalogger.cfg.sample mfsmetalogger.cfg
[root@node6 mfs]# vim mfsmetalogger.cfg
           
MASTER_HOST = 192.168.40.200
           

Point it at the VIP, or at one specific mfsmaster first for testing.

Start the metalogger server (once it starts cleanly, write the systemd unit and enable it at boot):

[root@node6 ~]# /usr/local/mfs/sbin/mfsmetalogger start
open files limit has been set to: 4096
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly
[root@node6 ~]# mv /usr/lib/systemd/system/mfsmaster.service /usr/lib/systemd/system/mfsmetalog.service 
           
[root@node6 ~]# cat /usr/lib/systemd/system/mfsmetalog.service 
[Unit]
Description=mfs
After=network.target
  
[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfsmetalogger start
ExecStop=/usr/local/mfs/sbin/mfsmetalogger stop
PrivateTmp=true
  
[Install]
WantedBy=multi-user.target
           
[root@node6 ~]# /usr/local/mfs/sbin/mfsmetalogger stop
sending SIGTERM to lock owner (pid:2093)
waiting for termination terminated
[root@node6 ~]# systemctl start mfsmetalog
[root@node6 ~]# systemctl status mfsmetalog
● mfsmetalog.service - mfs
   Loaded: loaded (/usr/lib/systemd/system/mfsmetalog.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 20:26:28 EDT; 12s ago
  Process: 2113 ExecStart=/usr/local/mfs/sbin/mfsmetalogger start (code=exited, status=0/SUCCESS)
 Main PID: 2115 (mfsmetalogger)
   CGroup: /system.slice/mfsmetalog.service
           └─2115 /usr/local/mfs/sbin/mfsmetalogger start
           
[root@node6 ~]# systemctl enable mfsmetalog
Created symlink from /etc/systemd/system/multi-user.target.wants/mfsmetalog.service to /usr/lib/systemd/system/mfsmetalog.service.
           

Build and install the chunk server on node7 and node8.

node7 and node8 are configured identically; node7 is shown as the example:

[root@node7 src]# tar -xf v3.0.96.tar.gz 
[root@node7 src]# cd moosefs-3.0.96/
[root@node7 moosefs-3.0.96]# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs  --disable-mfsmaster --disable-mfsmount
           
[root@node7 moosefs-3.0.96]# make && make install 
           

Configure the mfschunkserver.cfg file:

[root@node7 moosefs-3.0.96]# cd /usr/local/mfs/etc/mfs/
[root@node7 mfs]# ls
mfschunkserver.cfg.sample  mfshdd.cfg.sample  mfsmetalogger.cfg.sample
[root@node7 mfs]# cp mfschunkserver.cfg.sample mfschunkserver.cfg
[root@node7 mfs]# cp mfshdd.cfg.sample mfshdd.cfg
[root@node7 mfs]# vim mfschunkserver.cfg
           
MASTER_HOST = 192.168.40.200
           

Point it at the VIP, or at one specific mfsmaster first for testing.

Configure the mfshdd.cfg file:

mfshdd.cfg defines which directory the Chunk Server exports to the Master Server for management. Although what goes here is just a directory path, it is best backed by a dedicated partition; a sketch of that follows the listing below.

[root@node7 mfs]# mkdir /mfsdata
[root@node7 mfs]# chown -R mfs:mfs /mfsdata
[root@node7 mfs]# vim mfshdd.cfg
           

Simply append this as the last line:

/mfsdata
           
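For example, a dedicated partition for /mfsdata might be set up like this (assuming a spare disk /dev/sdc, a hypothetical device name here):

mkfs.xfs -f /dev/sdc
mount /dev/sdc /mfsdata
echo '/dev/sdc /mfsdata xfs defaults 0 0' >> /etc/fstab
chown -R mfs:mfs /mfsdata    # the mfs user must own the storage directory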

Start the chunkserver (once it starts cleanly, write the systemd unit and enable it at boot):

[root@node7 mfs]# /usr/local/mfs/sbin/mfschunkserver start

[root@node7 mfs]# /usr/local/mfs/sbin/mfschunkserver stop

[root@node7 mfs]# cat /usr/lib/systemd/system/mfschunk.service
[Unit]
Description=mfs
After=network.target
  
[Service]
Type=forking
ExecStart=/usr/local/mfs/sbin/mfschunkserver start
ExecStop=/usr/local/mfs/sbin/mfschunkserver stop
PrivateTmp=true
  
[Install]
WantedBy=multi-user.target
           
[root@node7 mfs]# systemctl enable mfschunk
           

The MFS services are now fully configured.

First, stop the services on node4 and node5 (the cluster will manage them from here on):

[root@node4 ~]# systemctl stop mfsmaster
[root@node4 ~]# systemctl stop drbd
           
[root@node5 ~]# systemctl stop mfsmaster
[root@node5 ~]# systemctl stop drbd
           

3. Install Corosync and Pacemaker

Resolve the dependencies on node4 and node5:

[root@node4 ~]# yum install -y pacemaker pcs psmisc policycoreutils-python
           

Cluster lifecycle management tools: pcs (which talks to the pcsd agent) and crmsh (which relies on pssh).

Start pcsd and enable it at boot:

[root@node4 ~]# systemctl start pcsd
[root@node4 ~]# systemctl enable pcsd
           

Set the hacluster user's password:

[root@node4 ~]# echo 000000 | passwd --stdin hacluster
           
[root@node5 ~]# echo 000000 | passwd --stdin hacluster
           

Authenticate the cluster hosts with pcs (the default user is hacluster, with the password set above); run this on one node only:

[root@node4 ~]# pcs cluster auth node4 node5
Username: hacluster
Password: 
node5: Authorized
node4: Authorized
           

Register the two machines as a cluster:

[root@node4 ~]# pcs cluster setup --name mfscluster node4 node5 --force
           
Destroying cluster on nodes: node4, node5...
node5: Stopping Cluster (pacemaker)...
node4: Stopping Cluster (pacemaker)...
node5: Successfully destroyed cluster
node4: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'node4', 'node5'
node5: successful distribution of the file 'pacemaker_remote authkey'
node4: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
node4: Succeeded
node5: Succeeded

Synchronizing pcsd certificates on nodes node4, node5...
node5: Success
node4: Success
Restarting pcsd on the nodes in order to reload the certificates...
node5: Success
node4: Success
           

We can see that the corosync.conf configuration file has been generated:

[root@node4 ~]# ll /etc/corosync/
total 16
-rw-r--r-- 1 root root  385 Oct 28 21:37 corosync.conf
-rw-r--r-- 1 root root 2881 Sep  6 12:53 corosync.conf.example
-rw-r--r-- 1 root root  767 Sep  6 12:53 corosync.conf.example.udpu
-rw-r--r-- 1 root root 3278 Sep  6 12:53 corosync.xml.example
drwxr-xr-x 2 root root    6 Sep  6 12:53 uidgid.d
           

Start the cluster:

[root@node4 ~]# pcs cluster start --all
node4: Starting Cluster...
node5: Starting Cluster...
           

## this is equivalent to starting pacemaker and corosync

Enable pacemaker and corosync at boot on node4 and node5:

[root@node4 ~]# systemctl enable pacemaker
[root@node4 ~]# systemctl enable corosync
           

## since no STONITH device is configured, it has to be disabled:

[root@node4 ~]# pcs property set stonith-enabled=false
           

To operate the cluster we can download and install crmsh (grab it from GitHub, extract, and install); installing it on a single node is enough.

The main attraction is crmsh's interactive shell, which makes troubleshooting easier.

Build and install crmsh-2.3.2:

[root@node4 src]# tar -xf crmsh-2.3.2.tar 
[root@node4 src]# cd crmsh-2.3.2
[root@node4 crmsh-2.3.2]# python setup.py install
           

Ha, this is exactly the friendly interface I like:

[root@node4 ~]# crm 
crm(live)# status
Stack: corosync
Current DC: node4 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 21:50:11 2017
Last change: Sat Oct 28 21:45:22 2017 by root via cibadmin on node5

2 nodes configured
0 resources configured

Online: [ node4 node5 ]

No resources

crm(live)# 
           

Now configure the resources:

crm(live)configure# primitive mfs_drbd ocf:linbit:drbd params drbd_resource=mfs op monitor role=Master interval=10 timeout=20 op monitor role=Slave interval=20 timeout=20 op start timeout=240 op stop timeout=100
crm(live)configure# verify
crm(live)configure# 
           
crm(live)configure# ms ms_mfs_drbd mfs_drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# 
           

Confirm that verify reports no problems (if there are any, you can fix the configuration directly with edit):

crm(live)configure# commit
           

Configure the filesystem (mount) resource:

crm(live)configure# primitive mfsstore ocf:heartbeat:Filesystem params device=/dev/drbd1 directory=/usr/local/mfs fstype=xfs op start timeout=60 op stop timeout=60
crm(live)configure# verify
crm(live)configure# colocation ms_mfs_drbd_with_mfsstore inf: mfsstore ms_mfs_drbd
           
crm(live)configure# order ms_mfs_drbd_before_mystore Mandatory:  ms_mfs_drbd:promote mfsstore:start
           
crm(live)configure# verify 
crm(live)configure# commit
           

colocation pins resources together on the same node; order decides which service starts first (this is important).

Configure the mfs resource:

crm(live)configure# primitive mfs systemd:mfsmaster op monitor timeout=100 interval=30 op start timeout=100 interval=0 op stop timeout=100 interval=0
crm(live)configure# verify
crm(live)configure# 
crm(live)configure# colocation mfs_with_mystore inf: mfs mfsstore
crm(live)configure# order mfsstore_befor_mfs Mandatory: mfsstore mfs
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# 
           

Configure the VIP resource:

crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.40.200
crm(live)configure# colocation vip_with_msf inf: vip mfs
crm(live)configure# verify
crm(live)configure# commit
           

Use show to review the configuration:

crm(live)configure# show
node 1: node4
node 2: node5
primitive mfs systemd:mfsmaster \
        op monitor timeout=100 interval=30 \
        op start timeout=100 interval=0 \
        op stop timeout=100 interval=0
primitive mfs_drbd ocf:linbit:drbd \
        params drbd_resource=mfs \
        op monitor role=Master interval=10 timeout=20 \
        op monitor role=Slave interval=20 timeout=20 \
        op start timeout=240 interval=0 \
        op stop timeout=100 interval=0
primitive mfsstore Filesystem \
        params device="/dev/drbd1" directory="/usr/local/mfs" fstype=xfs \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0
primitive vip IPaddr \
        params ip=192.168.40.200
ms ms_mfs_drbd mfs_drbd \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
colocation mfs_with_mystore inf: mfs mfsstore
order mfsstore_befor_mfs Mandatory: mfsstore mfs
order ms_mfs_drbd_before_mystore Mandatory: ms_mfs_drbd:promote mfsstore:start
colocation ms_mfs_drbd_with_mfsstore inf: mfsstore ms_mfs_drbd
colocation vip_with_msf inf: vip mfs
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=mfscluster \
        stonith-enabled=false
           

All the server-side configuration is now complete!

Check that all the resources are running normally:

crm(live)# status
Stack: corosync
Current DC: node4 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Sat Oct 28 22:21:47 2017
Last change: Sat Oct 28 22:17:18 2017 by root via cibadmin on node4

2 nodes configured
5 resources configured

Online: [ node4 node5 ]

Full list of resources:

 Master/Slave Set: ms_mfs_drbd [mfs_drbd]
     Masters: [ node5 ]
     Slaves: [ node4 ]
 mfsstore	(ocf::heartbeat:Filesystem):	Started node5
 mfs	(systemd:mfsmaster):	Started node5
 vip	(ocf::heartbeat:IPaddr):	Started node5
           

The MFS servers are now ready to serve mounts.
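As an optional sanity check before moving on, failover can be exercised from crmsh by putting the active node (node5 in the status above) into standby and watching the resources migrate:

crm node standby node5    # resources should move to node4
crm status
crm node online node5     # node5 rejoins as a failover target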

Next, configure the client (i.e., a back-end server that needs to mount the distributed storage, such as an image server).

Resolve the dependencies on node1 (the wait is a bit long):

[root@node1 etc]# yum install fuse fuse-devel zlib-devel gcc -y
           

Build and install moosefs-3.0.96:

[root@node1 src]# tar -xf v3.0.96.tar.gz 
[root@node1 src]# cd moosefs-3.0.96/
[root@node1 moosefs-3.0.96]#  ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver --enable-mfsmount

[root@node1 moosefs-3.0.96]# make && make install 

[root@node1 moosefs-3.0.96]# mkdir /mfsdata
[root@node1 moosefs-3.0.96]# chown -R mfs:mfs /mfsdata
           

Test the mount:

[root@node1 moosefs-3.0.96]# /usr/local/mfs/bin/mfsmount /mfsdata -H 192.168.40.200 -p
MFS Password:
mfsmaster accepted connection with parameters: read-write,restricted_ip,map_all ; root mapped to nginx:nginx ; users mapped to nginx:nginx
[root@node1 moosefs-3.0.96]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   17G  9.0G  8.1G  53% /
devtmpfs             478M     0  478M   0% /dev
tmpfs                489M     0  489M   0% /dev/shm
tmpfs                489M  6.6M  482M   2% /run
tmpfs                489M     0  489M   0% /sys/fs/cgroup
/dev/sda1           1014M  138M  877M  14% /boot
tmpfs                 98M     0   98M   0% /run/user/0
192.168.40.200:9421   34G  4.5G   30G  14% /mfsdata
           

The password is the one configured in mfsexports.cfg on the mfsmaster (I set it to aizhen).


Test whether data can be written normally:

[root@node1 moosefs-3.0.96]# cd /mfsdata/
[root@node1 mfsdata]# ls
[root@node1 mfsdata]# touch a.txt
[root@node1 mfsdata]# echo 123 >> a.txt
[root@node1 mfsdata]# cat a.txt 
123
           
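The client tools installed alongside mfsmount can confirm the file was really chunked out to the chunk servers; for example, mfsfileinfo should list a copy of each chunk on the chunkservers (192.168.40.134/135 in this layout):

/usr/local/mfs/bin/mfsfileinfo /mfsdata/a.txt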

At this point, all the configuration is complete.

VI. Notes and Troubleshooting

All of the caveats and errors encountered in this test are documented in another post of mine, mainly for my own troubleshooting convenience; please bear with me.

Troubleshooting link: (link in the original post)

This article is the blogger's original work; please credit the source when reposting. Thank you.
