Environment:
OS: CentOS 6.5 x86_64. heartbeat and DRBD are installed from RPM packages; this article only exercises the basic heartbeat + DRBD + NFS high-availability functionality.
app1: 192.168.0.24
app2: 192.168.0.25
VIP : 192.168.0.26
[root@app1 soft]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.24 app1
192.168.0.25 app2
10.10.10.24 app1-priv
10.10.10.25 app2-priv
Note: the 10.10.10.x addresses are the heartbeat (private) network, the 192.168.0.x addresses are the service network, and the VIP is 192.168.0.26.
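Before configuring heartbeat, it is worth verifying that the two nodes can reach each other over both networks (a simple sanity check added here, not part of the original steps):
[root@app1 ~]# ping -c 2 app2-priv     # heartbeat network, resolved via /etc/hosts
[root@app1 ~]# ping -c 2 192.168.0.25  # service network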
# rpm -ivh epel-release-6-8.noarch.rpm
# yum install heartbeat
Note: the RPM packages were downloaded to the local machine beforehand; heartbeat is then installed from them.
[root@app1 ~]# cd soft/
[root@app1 soft]# ll
total 1924
-rw-r--r-- 1 root root 72744 Jun 25 2012 cluster-glue-1.0.5-6.el6.x86_64.rpm
-rw-r--r-- 1 root root 119096 Jun 25 2012 cluster-glue-libs-1.0.5-6.el6.x86_64.rpm
-rw-r--r-- 1 root root 165292 Dec 3 2013 heartbeat-3.0.4-2.el6.x86_64.rpm
-rw-r--r-- 1 root root 269468 Dec 3 2013 heartbeat-libs-3.0.4-2.el6.x86_64.rpm
-rw-r--r-- 1 root root 38264 Oct 18 2014 perl-timedate-1.16-13.el6.noarch.rpm
-rw-r--r-- 1 root root 913840 Jul 3 2011 pyxml-0.8.4-19.el6.x86_64.rpm
-rw-r--r-- 1 root root 374068 Nov 10 20:45 resource-agents-3.9.5-24.el6_7.1.x86_64.rpm
[root@app1 soft]#
[root@app1 soft]# rpm -ivh *.rpm
warning: cluster-glue-1.0.5-6.el6.x86_64.rpm: header v3 rsa/sha1 signature, key id c105b9de: nokey
warning: heartbeat-3.0.4-2.el6.x86_64.rpm: header v3 rsa/sha256 signature, key id 0608b895: nokey
preparing... ########################################### [100%]
1:cluster-glue-libs ########################################### [ 14%]
2:resource-agents ########################################### [ 29%]
3:pyxml ########################################### [ 43%]
4:perl-timedate ########################################### [ 57%]
5:cluster-glue ########################################### [ 71%]
6:heartbeat-libs ########################################### [ 86%]
7:heartbeat ########################################### [100%]
[root@app1 soft]#
(1) Set up the authentication key
# vi /etc/ha.d/authkeys
auth 1
1 sha1 47e9336850f1db6fa58bc470bc9b7810eb397f04
# chmod 600 /etc/ha.d/authkeys
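The sha1 string in authkeys is just a shared secret; one simple way to generate a random value for it (any equivalent method works) is:
# dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum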
(2) Create the HA resources file
# vi /etc/ha.d/haresources
# Defines which server initially holds the VIP, on which NIC it is bound, and which services are started.
app1 IPaddr::192.168.0.26/24/eth0:1
#app1 IPaddr::192.168.0.26/24/eth0:1 drbddisk::data Filesystem::/dev/drbd0::/data::ext4
(3) Configure the main heartbeat configuration file
Configuration file on app1:
# vi /etc/ha.d/ha.cf
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth1
ucast eth1 10.10.10.25
#mcast eth1 225.0.0.24 694 1 0
auto_failback on
node app1
node app2
crm no
#respawn hacluster /usr/lib64/heartbeat/ipfail
#ping 192.168.0.253
The configuration file on app2 differs from app1's in a few lines and must be adjusted:
deadtime 15
ucast eth1 10.10.10.24
#mcast eth1 225.0.0.25 694 1 0
# scp authkeys ha.cf haresources root@app2:/etc/ha.d/
root@app2's password:
authkeys 100% 56 0.1kb/s 00:00
ha.cf 100% 256 0.3kb/s 00:00
haresources 100% 78 0.1kb/s 00:00
Node 1:
[root@app1 ha.d]# service heartbeat start
starting high-availability services: info: resource is stopped
done.
Node 2:
[root@app2 ha.d]# service heartbeat start
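Once heartbeat is running on both nodes, it is worth enabling it at boot and confirming that the VIP came up on app1 (extra verification steps, not in the original text):
[root@app1 ~]# chkconfig heartbeat on
[root@app2 ~]# chkconfig heartbeat on
[root@app1 ~]# ip addr show eth0 | grep 192.168.0.26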
(1) Manually switch to standby
# /usr/share/heartbeat/hb_standby
going standby [all].
Alternatively, running service heartbeat stop on the primary server also moves the VIP to the standby node.
(2) Manually take over as primary
# /usr/share/heartbeat/hb_takeover
Running service heartbeat start on the primary server also brings the VIP back.
(3) Watch the VIP takeover in the logs
Node 1:
# tail -f /var/log/messages
jan 12 12:46:30 app1 heartbeat: [4519]: info: app1 wants to go standby [all]
jan 12 12:46:30 app1 heartbeat: [4519]: info: standby: app2 can take our all resources
jan 12 12:46:30 app1 heartbeat: [6043]: info: give up all ha resources (standby).
jan 12 12:46:30 app1 resourcemanager(default)[6056]: info: releasing resource group: app1 ipaddr::192.168.0.26/24/eth0
jan 12 12:46:30 app1 resourcemanager(default)[6056]: info: running /etc/ha.d/resource.d/ipaddr 192.168.0.26/24/eth0 stop
jan 12 12:46:30 app1 ipaddr(ipaddr_192.168.0.26)[6119]: info: ip status = ok, ip_cip=
jan 12 12:46:30 app1 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[6093]: info: success
jan 12 12:46:30 app1 heartbeat: [6043]: info: all ha resource release completed (standby).
jan 12 12:46:30 app1 heartbeat: [4519]: info: local standby process completed [all].
jan 12 12:46:31 app1 heartbeat: [4519]: warn: 1 lost packet(s) for [app2] [1036:1038]
jan 12 12:46:31 app1 heartbeat: [4519]: info: remote resource transition completed.
jan 12 12:46:31 app1 heartbeat: [4519]: info: no pkts missing from app2!
jan 12 12:46:31 app1 heartbeat: [4519]: info: other node completed standby takeover of all resources.
Node 2:
[root@app2 ha.d]# tail -f /var/log/messages
jan 12 12:46:30 app2 heartbeat: [4325]: info: app1 wants to go standby [all]
jan 12 12:46:30 app2 heartbeat: [4325]: info: standby: acquire [all] resources from app1
jan 12 12:46:30 app2 heartbeat: [5459]: info: acquire all ha resources (standby).
jan 12 12:46:30 app2 resourcemanager(default)[5472]: info: acquiring resource group: app1 ipaddr::192.168.0.26/24/eth0
jan 12 12:46:30 app2 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[5500]: info: resource is stopped
jan 12 12:46:30 app2 resourcemanager(default)[5472]: info: running /etc/ha.d/resource.d/ipaddr 192.168.0.26/24/eth0 start
jan 12 12:46:31 app2 ipaddr(ipaddr_192.168.0.26)[5625]: info: adding inet address 192.168.0.26/24 with broadcast address 192.168.0.255 to device eth0
jan 12 12:46:31 app2 ipaddr(ipaddr_192.168.0.26)[5625]: info: bringing device eth0 up
jan 12 12:46:31 app2 ipaddr(ipaddr_192.168.0.26)[5625]: info: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.0.26 eth0 192.168.0.26 auto not_used not_used
jan 12 12:46:31 app2 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[5599]: info: success
jan 12 12:46:31 app2 heartbeat: [5459]: info: all ha resource acquisition completed (standby).
jan 12 12:46:31 app2 heartbeat: [4325]: info: standby resource acquisition done [all].
jan 12 12:46:31 app2 heartbeat: [4325]: info: remote resource transition completed.
Command to add a VIP address manually:
/etc/ha.d/resource.d/IPaddr 192.168.0.27/24/eth0:2 start
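The same resource script also accepts stop and status, which is useful for removing a manually added address again (shown purely to illustrate the script interface):
/etc/ha.d/resource.d/IPaddr 192.168.0.27/24/eth0:2 status
/etc/ha.d/resource.d/IPaddr 192.168.0.27/24/eth0:2 stop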
(4) Check the VIP address information; the VIP is on the primary node.
[root@app1 ha.d]# ip a
1: lo: <loopback,up,lower_up> mtu 16436 qdisc noqueue state unknown
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state up qlen 1000
link/ether 00:0c:29:4c:39:43 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.24/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.26/24 brd 192.168.0.255 scope global secondary eth0:1
inet6 fe80::20c:29ff:fe4c:3943/64 scope link
3: eth1: <broadcast,multicast,up,lower_up> mtu 1500 qdisc pfifo_fast state up qlen 1000
link/ether 00:0c:29:4c:39:4d brd ff:ff:ff:ff:ff:ff
inet 10.10.10.24/24 brd 10.10.10.255 scope global eth1
inet6 fe80::20c:29ff:fe4c:394d/64 scope link
valid_lft forever preferred_lft forever
DRBD backing devices on the two nodes:
app1: /dev/sdb1
app2: /dev/sdb1
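If /dev/sdb is still an empty, unpartitioned disk, create sdb1 on both nodes first. A minimal sketch using parted, assuming the whole disk is dedicated to DRBD:
# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
# partprobe /dev/sdb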
(1) Download the DRBD packages; the download location is:
ftp://rpmfind.net/linux/atrpms/el6.5-x86_64/atrpms/stable/
# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
warning: drbd-8.4.3-33.el6.x86_64.rpm: header v4 dsa/sha1 signature, key id 66534c2b: nokey
1:drbd-kmdl-2.6.32-431.el########################################### [ 50%]
2:drbd ########################################### [100%]
#
(2) Load the drbd kernel module
Do this on both app1 and app2 and add it to /etc/rc.local so it persists; check first whether the module is already loaded:
lsmod | grep drbd
modprobe drbd
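To load the module automatically at boot, as mentioned above, append it to /etc/rc.local on both nodes:
# echo "modprobe drbd" >> /etc/rc.local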
[root@app1 ~]# vi /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-md-flushes;
    }
    net {
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        cram-hmac-alg "sha1";
        shared-secret "hdhwxes23syehart8t";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 300M;
        al-extents 517;
    }
}
resource data {
    on app1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.0.24:7788;
        meta-disk internal;
    }
    on app2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.0.25:7788;
        meta-disk internal;
    }
}
The following variant uses external metadata (meta-disk on a separate device), which is intended to solve the problem of migrating existing data onto DRBD. I never got this experiment to work; I will try it again later.
resource data {
    on app1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.0.24:7788;
        meta-disk /dev/sdc1[0];
    }
    on app2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 192.168.0.25:7788;
        meta-disk /dev/sdc1[0];
    }
}
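Optionally, before creating the metadata, the configuration can be syntax-checked on each node; drbdadm dump prints the parsed configuration and reports any errors:
# drbdadm dump data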
Run on both app1 and app2:
# drbdadm create-md data
writing meta data...
initializing activity log
not initializing bitmap
new drbd meta data block successfully created.
Note: the following problem can occur at this step:
# drbdadm create-md data
command 'drbdmeta 1 v08 /dev/sdb1 internal create-md' terminated with exit code 40
# Workaround: zero the start of the partition with dd (exit code 40 usually means old data or an existing filesystem signature is still on the device):
# dd if=/dev/zero of=/dev/sdb1 bs=1M count=10
# sync
Run on both app1 and app2 (or alternatively: drbdadm up data):
# service drbd start
starting drbd resources: [
create res: data
prepare disk: data
adjust disk: data
adjust net: data
]
..........
cat /proc/drbd # or simply use the drbd-overview command
Node 1:
[root@app1 drbd.d]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
git-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-11-29 12:28:00
0: cs:connected ro:secondary/secondary ds:inconsistent/inconsistent c r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:20970828
[root@app1 drbd.d]#
Node 2:
[root@app2 drbd.d]# cat /proc/drbd
[root@app2 drbd.d]#
We need to make one of the nodes primary. On the node that is to become primary, run either of the following (equivalent) commands:
drbdadm -- --overwrite-data-of-peer primary data
drbdadm primary --force data
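The initial full sync takes a while; an easy (optional) way to follow its progress is:
# watch -n1 cat /proc/drbd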
Sync status as seen on the primary node:
0: cs:syncsource ro:primary/secondary ds:uptodate/inconsistent c r---n-
ns:1440320 nr:0 dw:0 dr:1443488 al:0 bm:85 lo:0 pe:36 ua:3 ap:0 ep:1 wo:d oos:19566924
[>...................] sync'ed: 6.7% (19108/20476)m
finish: 0:24:03 speed: 13,536 (12,760) k/sec
Sync status as seen on the secondary node:
[root@app2 drbd.d]# cat /proc/drbd
0: cs:synctarget ro:secondary/primary ds:inconsistent/uptodate c r-----
ns:0 nr:2063360 dw:2030592 dr:0 al:0 bm:123 lo:33 pe:8 ua:32 ap:0 ep:1 wo:d oos:18940236
[>...................] sync'ed: 9.7% (18496/20476)m
finish: 0:23:54 speed: 13,196 (12,848) want: 3,240 k/sec
檢視同步狀态:
[root@app1 ~]# cat /proc/drbd
0: cs:connected ro:primary/secondary ds:uptodate/uptodate c r-----
ns:20970828 nr:0 dw:0 dr:20971500 al:0 bm:1280 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@app1 ~]#
[root@app2 drbd.d]# cat /proc/drbd
0: cs:connected ro:secondary/primary ds:uptodate/uptodate c r-----
ns:0 nr:20970828 dw:20970828 dr:0 al:0 bm:1280 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
The filesystem can only be mounted on the primary node, and the DRBD device can only be formatted after a primary has been designated. Format it and test a manual mount:
[root@app1 ~]# mkfs.ext4 /dev/drbd0
[root@app1 ~]# mount /dev/drbd0 /data
In a primary/secondary DRBD configuration only one node may be primary at any given time. To swap the roles of the two nodes, the current primary must first be demoted to secondary before the former secondary can be promoted to primary.
Steps for a manual DRBD switchover:
(1) On the primary node: umount /dev/drbd0 (unmount the filesystem)
(2) On the primary node: drbdadm secondary all (demote to secondary)
(3) On the standby node: drbdadm primary all (promote to primary)
(4) On the standby node: mount /dev/drbd0 /data (mount the filesystem)
When DRBD split-brain occurs, the data on the two sides diverges. Handle it as follows:
On the node that is to give up its changes, demote it to secondary and discard its data for the resource:
drbdadm secondary all
drbdadm -- --discard-my-data connect all
Resynchronize the data:
drbdadm -- --overwrite-data-of-peer primary data
# vi /etc/exports
/data 192.168.0.0/24(rw,no_root_squash)
# service rpcbind start
# service nfs start
# chkconfig rpcbind on
# chkconfig nfs on
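A quick extra check (not in the original steps) that the export is active on the current node:
# showmount -e localhost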
# vi /etc/ha.d/haresources
#app1 IPaddr::192.168.0.26/24/eth0:1
app1 IPaddr::192.168.0.26/24/eth0:1 drbddisk::data Filesystem::/dev/drbd0::/data::ext4 nfs
Parameters:
IPaddr::192.168.0.26/24/eth0:1 # the virtual IP address
drbddisk::data # manages the DRBD resource
Filesystem::/dev/drbd0::/data::ext4 # mounts the filesystem
nfs # the NFS wrapper script (created below)
# vi /etc/ha.d/resource.d/nfs
#!/bin/bash
# Called by heartbeat on takeover/release; simply restart NFS so exports and locks are refreshed.
killall -9 nfsd
/etc/init.d/nfs restart
exit 0
# chmod +x /etc/ha.d/resource.d/nfs
1. Mount from a client machine
[root@vm15 ~]# mount -t nfs 192.168.0.26:/data/ /mnt
[root@vm15 ~]# df -h
filesystem size used avail use% mounted on
/dev/sda3 21g 4.6g 15g 24% /
/dev/sda1 99m 23m 72m 25% /boot
tmpfs 7.4g 0 7.4g 0% /dev/shm
/dev/mapper/vg-data 79g 71g 4.2g 95% /data
192.168.0.26:/data/ 9.9g 151m 9.2g 2% /mnt
[root@vm15 ~]#
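For the failover test below, mounting with hard and a short retry interval helps the client block briefly and then recover transparently while the VIP moves; a suggested variant of the mount (not the one used above):
[root@vm15 ~]# mount -t nfs -o hard,intr,timeo=30,retrans=3 192.168.0.26:/data /mnt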
2. On node 1, run: service heartbeat stop (or /usr/share/heartbeat/hb_standby)
3. Watch the logs
Node currently holding the resources:
jan 22 15:46:01 app2 heartbeat: [8050]: info: app2 wants to go standby [all]
jan 22 15:46:01 app2 heartbeat: [8050]: info: standby: app1 can take our all resources
jan 22 15:46:01 app2 heartbeat: [9310]: info: give up all ha resources (standby).
jan 22 15:46:01 app2 resourcemanager(default)[9323]: info: releasing resource group: app1 ipaddr::192.168.0.26/24/eth0:1 drbddisk::data filesystem::/dev/drbd0::/data::ext4 nfs
jan 22 15:46:01 app2 resourcemanager(default)[9323]: info: running /etc/ha.d/resource.d/nfs stop
jan 22 15:46:01 app2 kernel: nfsd: last server has exited, flushing export cache
jan 22 15:46:01 app2 rpc.mountd[9218]: caught signal 15, un-registering and exiting.
jan 22 15:46:02 app2 rpc.mountd[9452]: version 1.2.3 starting
jan 22 15:46:02 app2 kernel: nfsd: using /var/lib/nfs/v4recovery as the nfsv4 state recovery directory
jan 22 15:46:02 app2 kernel: nfsd: starting 90-second grace period
jan 22 15:46:02 app2 resourcemanager(default)[9323]: info: running /etc/ha.d/resource.d/filesystem /dev/drbd0 /data ext4 stop
jan 22 15:46:02 app2 filesystem(filesystem_/dev/drbd0)[9519]: info: running stop for /dev/drbd0 on /data
jan 22 15:46:02 app2 filesystem(filesystem_/dev/drbd0)[9519]: info: trying to unmount /data
jan 22 15:46:02 app2 filesystem(filesystem_/dev/drbd0)[9519]: info: unmounted /data successfully
jan 22 15:46:02 app2 /usr/lib/ocf/resource.d//heartbeat/filesystem(filesystem_/dev/drbd0)[9511]: info: success
jan 22 15:46:02 app2 resourcemanager(default)[9323]: info: running /etc/ha.d/resource.d/drbddisk data stop
jan 22 15:46:02 app2 kernel: block drbd0: role( primary -> secondary )
jan 22 15:46:02 app2 kernel: block drbd0: bitmap write of 0 pages took 0 jiffies
jan 22 15:46:02 app2 kernel: block drbd0: 0 kb (0 bits) marked out-of-sync by on disk bit-map.
jan 22 15:46:02 app2 resourcemanager(default)[9323]: info: running /etc/ha.d/resource.d/ipaddr 192.168.0.26/24/eth0:1 stop
jan 22 15:46:02 app2 ipaddr(ipaddr_192.168.0.26)[9679]: info: ip status = ok, ip_cip=
jan 22 15:46:02 app2 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[9653]: info: success
jan 22 15:46:02 app2 heartbeat: [9310]: info: all ha resource release completed (standby).
jan 22 15:46:02 app2 heartbeat: [8050]: info: local standby process completed [all].
jan 22 15:46:03 app2 kernel: block drbd0: peer( secondary -> primary )
jan 22 15:46:04 app2 heartbeat: [8050]: warn: 1 lost packet(s) for [app1] [5137:5139]
jan 22 15:46:04 app2 heartbeat: [8050]: info: remote resource transition completed.
jan 22 15:46:04 app2 heartbeat: [8050]: info: no pkts missing from app1!
jan 22 15:46:04 app2 heartbeat: [8050]: info: other node completed standby takeover of all resources.
Node taking over the resources:
jan 22 15:46:02 app1 heartbeat: [8622]: info: app2 wants to go standby [all]
jan 22 15:46:03 app1 kernel: block drbd0: peer( primary -> secondary )
jan 22 15:46:04 app1 heartbeat: [8622]: info: standby: acquire [all] resources from app2
jan 22 15:46:04 app1 heartbeat: [9131]: info: acquire all ha resources (standby).
jan 22 15:46:04 app1 resourcemanager(default)[9144]: info: acquiring resource group: app1 ipaddr::192.168.0.26/24/eth0:1 drbddisk::data filesystem::/dev/drbd0::/data::ext4 nfs
jan 22 15:46:04 app1 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[9172]: info: resource is stopped
jan 22 15:46:04 app1 resourcemanager(default)[9144]: info: running /etc/ha.d/resource.d/ipaddr 192.168.0.26/24/eth0:1 start
jan 22 15:46:04 app1 ipaddr(ipaddr_192.168.0.26)[9303]: info: adding inet address 192.168.0.26/24 with broadcast address 192.168.0.255 to device eth0 (with label eth0:1)
jan 22 15:46:04 app1 ipaddr(ipaddr_192.168.0.26)[9303]: info: bringing device eth0 up
jan 22 15:46:04 app1 ipaddr(ipaddr_192.168.0.26)[9303]: info: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-192.168.0.26 eth0 192.168.0.26 auto not_used not_used
jan 22 15:46:04 app1 /usr/lib/ocf/resource.d//heartbeat/ipaddr(ipaddr_192.168.0.26)[9277]: info: success
jan 22 15:46:04 app1 resourcemanager(default)[9144]: info: running /etc/ha.d/resource.d/drbddisk data start
jan 22 15:46:04 app1 kernel: block drbd0: role( secondary -> primary )
jan 22 15:46:04 app1 /usr/lib/ocf/resource.d//heartbeat/filesystem(filesystem_/dev/drbd0)[9439]: info: resource is stopped
jan 22 15:46:04 app1 resourcemanager(default)[9144]: info: running /etc/ha.d/resource.d/filesystem /dev/drbd0 /data ext4 start
jan 22 15:46:04 app1 filesystem(filesystem_/dev/drbd0)[9529]: info: running start for /dev/drbd0 on /data
jan 22 15:46:04 app1 kernel: ext4-fs (drbd0): mounted filesystem with ordered data mode. opts:
jan 22 15:46:04 app1 /usr/lib/ocf/resource.d//heartbeat/filesystem(filesystem_/dev/drbd0)[9518]: info: success
jan 22 15:46:04 app1 kernel: nfsd: last server has exited, flushing export cache
jan 22 15:46:05 app1 rpc.mountd[9050]: caught signal 15, un-registering and exiting.
jan 22 15:46:05 app1 rpc.mountd[9698]: version 1.2.3 starting
jan 22 15:46:05 app1 kernel: nfsd: using /var/lib/nfs/v4recovery as the nfsv4 state recovery directory
jan 22 15:46:05 app1 kernel: nfsd: starting 90-second grace period
jan 22 15:46:05 app1 heartbeat: [9131]: info: all ha resource acquisition completed (standby).
jan 22 15:46:05 app1 heartbeat: [8622]: info: standby resource acquisition done [all].
jan 22 15:46:05 app1 heartbeat: [8622]: info: remote resource transition completed.
[root@app1 resource.d]# ip a
1: lo: <loopback,up,lower_up> mtu 16436 qdisc noqueue state unknown
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <broadcast,multicast,up,lower_up> mtu 1500 qdisc mq state up qlen 1000
link/ether 00:0c:29:03:c8:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.24/24 brd 192.168.0.255 scope global eth0
inet6 fe80::20c:29ff:fe03:c810/64 scope link
3: eth1: <broadcast,multicast,up,lower_up> mtu 1500 qdisc mq state up qlen 1000
link/ether 00:0c:29:03:c8:1a brd ff:ff:ff:ff:ff:ff
inet 10.10.10.24/24 brd 10.10.10.255 scope global eth1
inet6 fe80::20c:29ff:fe03:c81a/64 scope link
valid_lft forever preferred_lft forever
[root@app1 resource.d]# cat /proc/drbd
ns:628 nr:16 dw:644 dr:2968 al:5 bm:7 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
[root@app1 resource.d]# df -h
filesystem size used avail use% mounted on
/dev/mapper/vg_app1-lv_root 36g 3.7g 30g 11% /
tmpfs 1004m 68k 1004m 1% /dev/shm
/dev/sda1 485m 39m 421m 9% /boot
/dev/drbd0 9.9g 151m 9.2g 2% /data
[root@app1 resource.d]#
The heartbeat + DRBD approach covers classic setups such as NFS and MySQL; the implementation is much the same, and many other practical active/standby solutions can be built around heartbeat and DRBD.
For production use, DRBD still needs proper monitoring, further testing of the related DRBD features, and a deeper understanding of how DRBD behaves.
Next steps: continue testing DRBD data migration, heartbeat with shared storage, and dual-primary hot-standby schemes; all of these have significant practical value.