Outline
I. What is Keepalived
II. How Keepalived works
III. Keepalived configuration walkthrough
Keepalived is a simple routing piece of software written in C. The main goal of the project is to provide simple yet robust load balancing and high availability for Linux systems and Linux-based infrastructures. The load-balancing framework relies on the well-known and widely used Linux Virtual Server (IPVS) kernel module to provide Layer 4 load balancing, while high availability is implemented with the VRRP protocol.
Keepalived's job here is to monitor the state of the web servers: if a web server goes down or otherwise fails, Keepalived detects it and removes the faulty server from the pool; once the server is healthy again, Keepalived automatically puts it back. All of this happens automatically, without manual intervention; the only manual work left is repairing the failed web server.
Layer 3, 4 and 7 correspond to the IP layer, the TCP layer, and the application layer of the IP/TCP stack. The three checking modes work as follows:
- Layer 3: In Layer 3 mode, Keepalived periodically sends an ICMP packet (the same probe the ping utility uses) to each server in the pool. If a server's IP address does not respond, Keepalived reports the server as failed and removes it from the pool; a typical example is a server that has been powered off unexpectedly. In other words, Layer 3 mode uses the reachability of the server's IP address as the criterion for whether the server is healthy.
- Layer 4: Once Layer 3 is clear, Layer 4 is easy: it decides whether a server is healthy based on the state of a TCP port. A web server usually listens on port 80, for example; if Keepalived finds that port 80 is not listening, it removes that server from the pool.
- Layer 7: Layer 7 works at the application level. It is a little more complex than Layer 3 or Layer 4 and uses somewhat more network bandwidth. Keepalived checks, according to the user's settings, whether the server application is behaving as expected; if the response does not match what was configured, the server is removed from the pool.
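As a rough illustration (the addresses below are placeholders, not part of the lab that follows), the Layer 4 and Layer 7 styles map onto keepalived's TCP_CHECK and HTTP_GET health checkers roughly like this:
virtual_server 192.0.2.10 80 {
    protocol TCP
    real_server 192.0.2.11 80 {
        TCP_CHECK {            # Layer 4: healthy as long as TCP port 80 accepts a connection
            connect_timeout 3
        }
    }
    real_server 192.0.2.12 80 {
        HTTP_GET {             # Layer 7: fetch / and require an HTTP 200 response
            url {
                path /
                status_code 200
            }
            connect_timeout 3
        }
    }
}
There is no dedicated block for the Layer 3 (ICMP) style; if needed it is usually approximated with a MISC_CHECK that runs a ping command from a small script.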
Software design (diagram omitted)

System environment
CentOS5.8 x86_64
Director
Master 172.16.1.101
Slave 172.16.1.105
RealServer
node1.network.com node1 172.16.1.103
node2.network.com node2 172.16.1.104
Packages
- ipvsadm-1.24-13.el5.x86_64.rpm
- keepalived-1.2.1-5.el5.x86_64.rpm
- httpd-2.2.15-47.el6.centos.1.x86_64.rpm
Topology diagram (image omitted)
1. Time synchronization
[root@Master ~]# ntpdate s2c.time.edu.cn
[root@Slave ~]# ntpdate s2c.time.edu.cn
[root@node1 ~]# ntpdate s2c.time.edu.cn
[root@node2 ~]# ntpdate s2c.time.edu.cn
A crontab entry can be defined on each node as needed
[root@Master ~]# which ntpdate
/sbin/ntpdate
[root@Master ~]# echo "*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null" >> /var/spool/cron/root
[root@Master ~]# crontab -l
*/5 * * * * /sbin/ntpdate s2c.time.edu.cn &> /dev/null
2. Hostnames must match the output of uname -n and resolve through /etc/hosts
Master
[root@Master ~]# hostname Master
[root@Master ~]# uname -n
Master
[root@Master ~]# sed -i 's@\(HOSTNAME=\).*@\1Master@g' /etc/sysconfig/network
Slave
[root@Slave ~]# hostname Slave
[root@Slave ~]# uname -n
Slave
[root@Slave ~]# sed -i 's@\(HOSTNAME=\).*@\1Slave@g' /etc/sysconfig/network
node1
[root@node1 ~]# hostname node1.network.com
[root@node1 ~]# uname -n
node1.network.com
[root@node1 ~]# sed -i 's@\(HOSTNAME=\).*@\1node1.network.com@g' /etc/sysconfig/network
node2
[root@node2 ~]# hostname node2.network.com
[root@node2 ~]# uname -n
node2.network.com
[root@node2 ~]# sed -i 's@\(HOSTNAME=\).*@\1node2.network.com@g' /etc/sysconfig/network
Add name-resolution entries to /etc/hosts (on the Master first)
[root@Master ~]# vim /etc/hosts
[root@Master ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 CentOS5.8 CentOS5 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
172.16.1.101 Master
172.16.1.105 Slave
172.16.1.103 node1.network.com node1
172.16.1.104 node2.network.com node2
Copy the hosts file to Slave
[root@Master ~]# scp /etc/hosts Slave:/etc/
The authenticity of host 'Slave (172.16.1.105)' can't be established.
RSA key fingerprint is 13:42:92:7b:ff:61:d8:f3:7c:97:5f:22:f6:71:b3:24.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'Slave' (RSA) to the list of known hosts.
hosts 100% 328 0.3KB/s 00:00
Copy the hosts file to node1
[root@Master ~]# scp /etc/hosts node1:/etc/
The authenticity of host 'node1 (172.16.1.103)' can't be established.
RSA key fingerprint is 1e:87:cd:f0:95:ff:a8:ef:19:bc:c6:e7:0a:87:6b:fa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,172.16.1.103' (RSA) to the list of known hosts.
root@node1's password:
hosts 100% 328 0.3KB/s 00:00
Copy the hosts file to node2
[root@Master ~]# scp /etc/hosts node2:/etc/
The authenticity of host 'node2 (172.16.1.104)' can't be established.
RSA key fingerprint is 1e:87:cd:f0:95:ff:a8:ef:19:bc:c6:e7:0a:87:6b:fa.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,172.16.1.104' (RSA) to the list of known hosts.
root@node2's password:
hosts 100% 328 0.3KB/s 00:00
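Each scp above prompts for root's password. If you would rather make later copies non-interactive, an SSH key can be pushed out from the Master first (an optional convenience sketch; ssh-copy-id ships with the openssh-clients package):
[root@Master ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@Master ~]# for host in Slave node1 node2; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host; done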
3. Disable iptables and SELinux
Master
[root@Master ~]# service iptables stop
[root@Master ~]# vim /etc/sysconfig/selinux
[root@Master ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
Slave
[root@Slave ~]# service iptables stop
[root@Slave ~]# vim /etc/sysconfig/selinux
[root@Slave ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
node1
[root@node1 ~]# service iptables stop
[root@node1 ~]# vim /etc/sysconfig/selinux
[root@node1 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
node2
[root@node2 ~]# service iptables stop
[root@node2 ~]# vim /etc/sysconfig/selinux
[root@node2 ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
#SELINUX=permissive
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
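Editing /etc/sysconfig/selinux only takes effect at the next boot. To apply the change right away and keep iptables from starting again after a reboot, something like the following can be run on each node (shown on the Master here; setenforce only matters while SELinux is not already disabled):
[root@Master ~]# setenforce 0             # put SELinux into permissive mode for the current boot
[root@Master ~]# chkconfig iptables off   # do not start iptables on the next boot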
4. Configure node1 and node2
First install the httpd service
[root@node1 ~]# yum install -y httpd
Provide a test page
[root@node1 ~]# echo "<h1>node1.network.com</h1>" > /var/www/html/index.html
[root@node1 ~]# cat /var/www/html/index.html
<h1>node1.network.com</h1>
Start the httpd service
[root@node1 ~]# service httpd start
Starting httpd: [ OK ]
Create a script to configure node1 as an LVS-DR real server
[root@node1 ~]# vim RealServer.sh
[root@node1 ~]# cat RealServer.sh
#!/bin/bash
#
# Script to start LVS DR real server.
# chkconfig: - 90 10
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=172.16.1.110
host=`/bin/hostname`

case "$1" in
start)
    # Start LVS-DR real server on this machine.
    /sbin/ifconfig lo down
    /sbin/ifconfig lo up
    # arp_ignore=1 / arp_announce=2 keep the real server from answering or
    # advertising ARP for the VIP, so only the director responds for it.
    echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    /sbin/route add -host $VIP dev lo:0
    ;;
stop)
    # Stop LVS-DR real server loopback device(s).
    /sbin/ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
status)
    # Status of LVS-DR real server.
    islothere=`/sbin/ifconfig lo:0 | grep $VIP`
    isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
    if [ ! "$islothere" -o ! "$isrothere" ]; then
        # Either the route or the lo:0 device was not found.
        echo "LVS-DR real server Stopped."
    else
        echo "LVS-DR real server Running."
    fi
    ;;
*)
    # Invalid entry.
    echo "$0: Usage: $0 {start|status|stop}"
    exit 1
    ;;
esac
Make the script executable and run it
[root@node1 ~]# chmod +x RealServer.sh
[root@node1 ~]# ./RealServer.sh start
Check that the four kernel parameters, the host route, and the VIP have been set up
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/eth0/arp_ignore
1
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore
1
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce
2
[root@node1 ~]# cat /proc/sys/net/ipv4/conf/eth0/arp_announce
2
[root@node1 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.1.110 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 172.16.1.1 0.0.0.0 UG 0 0 0 eth0
[root@node1 ~]# ifconfig lo:0
lo:0 Link encap:Local Loopback
inet addr:172.16.1.110 Mask:255.255.255.255
UP LOOPBACK RUNNING MTU:16436 Metric:1
That completes node1. node2 is configured in exactly the same way, so it is not walked through step by step here (a sketch of the equivalent commands follows)
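For reference, the node2 side is roughly the following: same package, its own test page, and the same RealServer.sh script copied over.
[root@node2 ~]# yum install -y httpd
[root@node2 ~]# echo "<h1>node2.network.com</h1>" > /var/www/html/index.html
[root@node2 ~]# service httpd start
[root@node1 ~]# scp RealServer.sh node2:~
[root@node2 ~]# chmod +x RealServer.sh
[root@node2 ~]# ./RealServer.sh start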
5. Install keepalived and ipvsadm on Master and Slave
Master
[root@Master ~]# wget http://techdata.mirror.gtcomm.net/sysadmin/keepalived/keepalived-1.2.1-5.el5.x86_64.rpm
[root@Master ~]# yum install --nogpgcheck -y keepalived-1.2.1-5.el5.x86_64.rpm ipvsadm
[root@Master ~]# scp keepalived-1.2.1-5.el5.x86_64.rpm Slave:~
keepalived-1.2.1-5.el5.x86_64.rpm 100% 163KB 163.4KB/s 00:00
Slave
[root@Slave ~]# yum install --nogpgcheck -y keepalived-1.2.1-5.el5.x86_64.rpm ipvsadm
6. Edit the main keepalived configuration file
[root@Master ~]# cd /etc/keepalived/
[root@Master keepalived]# ls
keepalived.conf
[root@Master keepalived]# cp keepalived.conf{,.back}
[root@Master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost # address to send notifications to
}
notification_email_from root # sender address for notification mail
smtp_server 127.0.0.1 # SMTP server address
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER # this node starts as the master
interface eth0 # interface the VRRP instance runs on
virtual_router_id 51 # virtual router identifier
priority 100 # priority; note the master must have a higher number than the backup
advert_int 1 # advertisement interval in seconds
authentication {
auth_type PASS
auth_pass soysauce # authentication password
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0 # the VIP
}
}
virtual_server 172.16.1.110 80 {
delay_loop 6
lb_algo rr # scheduling algorithm
lb_kind DR # LVS forwarding type
nat_mask 255.255.0.0 # netmask
#persistence_timeout 50
protocol TCP
real_server 172.16.1.103 80 {
weight 1
HTTP_GET { # health-check method
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 172.16.1.104 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
将此配置檔案拷貝至Slave上
[root@Master keepalived]# scp keepalived.conf Slave:/etc/keepalived/
keepalived.conf 100% 1166 1.1KB/s 00:00
7. Configure the backup node's configuration file
[root@Slave ~]# vim /etc/keepalived/keepalived.conf
[root@Slave ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP # changed to BACKUP here
interface eth0
virtual_router_id 51
priority 99 # lower the priority; it must be smaller than the master's
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
}
virtual_server 172.16.1.110 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.0.0
#persistence_timeout 50
protocol TCP
real_server 172.16.1.103 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 172.16.1.104 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
8. Start the keepalived service to provide high availability for ipvs
Master
[root@Master keepalived]# service keepalived start
Starting keepalived: [ OK ]
Slave
[root@Slave keepalived]# service keepalived start
Starting keepalived: [ OK ]
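keepalived logs through syslog, so a quick way to confirm that each node entered the expected VRRP state is to look at /var/log/messages (exact output will vary):
[root@Master keepalived]# grep -i keepalived /var/log/messages | tail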
On the Master node, check whether the ipvs rules have taken effect
[root@Master keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.1.110:80 rr
-> 172.16.1.104:80 Route 1 0 0
-> 172.16.1.103:80 Route 1 0 0
Check whether the VIP address has been brought up
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.110/16 scope global eth0:0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
Access the VIP from a browser
Refresh the page once
You can see that the rr scheduling algorithm is taking effect: the two test pages are returned in turn
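If no browser is handy, the same behaviour can be checked from any host that can reach 172.16.1.110, for example with curl (assuming curl is installed on that host):
for i in 1 2 3 4; do curl -s http://172.16.1.110/; done
# with the rr scheduler the node1 and node2 test pages should alternate in the output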
9. Add a maintenance-mode switch to Keepalived
To enable maintenance-mode switching, all that is needed is a vrrp_script block and a matching track_script reference in the configuration file
For example:
vrrp_script chk_maintainace {
script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight -2
}
track_script {
chk_maintainace
}
The vrrp_script block goes outside the vrrp instance
The track_script block goes inside the vrrp instance
Sample configuration file
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_maintainace { # define the vrrp_script
script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight -2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
track_script { # reference the script to track
chk_maintainace
}
}
virtual_server 172.16.1.110 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.0.0
#persistence_timeout 50
protocol TCP
real_server 172.16.1.103 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 172.16.1.104 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
After editing the configuration file, sync it to the backup node by hand and change state to BACKUP and priority to 99 there
Once that is done, restart the keepalived service on both nodes
First confirm that the VIP is currently active on the Master
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.110/16 scope global eth0:0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
Now create an empty file named down
[root@Master keepalived]# touch down
Check the addresses again: the VIP has already moved to the backup node
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
Now delete the down file and check whether the VIP moves back
[root@Master keepalived]# rm -f down
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.110/16 scope global eth0:0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
Because the Master's priority is 100 and the Slave's is 99, the resource automatically moves back as soon as the Master is online again (preemption); see the note below if this fail-back is not wanted
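If automatic fail-back is not desirable, keepalived also offers a nopreempt option. It is only honoured when the instance's initial state is BACKUP, so both nodes would be declared BACKUP and only their priorities would differ (a fragment for illustration, not used in this setup):
vrrp_instance VI_1 {
    state BACKUP      # nopreempt is only honoured with an initial state of BACKUP
    nopreempt
    priority 100
    ...               # remaining settings unchanged
}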
10. Add email notification for Keepalived master/backup transitions
This is implemented with a small notification script
[root@Master ~]# cd /etc/keepalived/
[root@Master keepalived]# vim notify.sh
[root@Master keepalived]# cat notify.sh
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=172.16.1.110
contact='root@localhost'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
exit 0
;;
backup)
notify backup
exit 0
;;
fault)
notify fault
exit 0
;;
*)
echo 'Usage: `basename $0` {master|backup|fault}'
exit 1
;;
esac
[root@Master keepalived]# chmod +x notify.sh
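It is worth running the script once by hand before wiring it into keepalived, then checking root's local mailbox (this assumes the mail command from mailx is available on the node):
[root@Master keepalived]# ./notify.sh master
[root@Master keepalived]# mail    # a message like "Master to be master: 172.16.1.110 floating" should be waiting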
Add it to the configuration file
[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_maintainace {
script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
interval 1
weight -2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
track_script {
chk_maintainace
}
notify_master "/etc/keepalived/notify.sh master" # 增加這三行
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 172.16.1.110 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.0.0
#persistence_timeout 50
protocol TCP
real_server 172.16.1.103 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
real_server 172.16.1.104 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 2
nb_get_retry 3
delay_before_retry 1
}
}
}
然後将notif.sh腳本複制至Slave的/etc/keepalived目錄下,增加執行權限
然後在其配置檔案中增加三行,增加完成之後兩個節點重新開機keepalived服務即可
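For example, the synchronization just described could be done from the Master (a sketch using the same paths as above):
[root@Master keepalived]# scp /etc/keepalived/notify.sh Slave:/etc/keepalived/
[root@Master keepalived]# ssh Slave 'chmod +x /etc/keepalived/notify.sh'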
11. Add high availability for nginx
This is again implemented with a vrrp_script plus a track_script
For example:
[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
# script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
# interval 1
# weight -2
#}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -2
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
track_script {
# chk_maintainace
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
# delay_loop 6
# lb_algo rr
# lb_kind DR
# nat_mask 255.255.0.0
# #persistence_timeout 50
# protocol TCP
# real_server 172.16.1.103 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
# real_server 172.16.1.104 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
#}
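A note on the check itself: killall -0 nginx sends no signal at all; its exit status simply reports whether a process named nginx exists, which is exactly what vrrp_script keys on. The behaviour can be seen directly (assuming nginx and its init script are installed on the directors, as this section presumes):
[root@Master ~]# killall -0 nginx; echo $?    # prints 0 while nginx is running
[root@Master ~]# service nginx stop
[root@Master ~]# killall -0 nginx; echo $?    # prints 1 once nginx is gone, so the node's priority drops by the configured weight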
Then modify the notify.sh script
[root@Master keepalived]# vim notify.sh
[root@Master keepalived]# cat notify.sh
#!/bin/bash
# Author: MageEdu <[email protected]>
# description: An example of notify script
#
vip=172.16.1.110
contact='root@localhost'
notify() {
mailsubject="`hostname` to be $1: $vip floating"
mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1"
echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
master)
notify master
/etc/rc.d/init.d/nginx start
exit 0
;;
backup)
notify backup
/etc/rc.d/init.d/nginx restart
exit 0
;;
fault)
notify fault
/etc/rc.d/init.d/nginx stop
exit 0
;;
*)
echo 'Usage: `basename $0` {master|backup|fault}'
exit 1
;;
esac
After the changes, sync the configuration file and notify.sh to the backup node by hand, and adjust state and priority in the backup node's configuration file
Once that is done, restart the keepalived service on both nodes (a quick failover check is sketched below)
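One way to exercise the failover (a sketch): stop nginx on the Master and watch the VIP. Keep in mind that notify.sh restarts nginx on whichever node drops to BACKUP, so if that restart succeeds the Master will normally reclaim the VIP again within a few seconds and the move may be brief.
[root@Master keepalived]# service nginx stop
[root@Slave ~]# ip addr show dev eth0 | grep 172.16.1.110   # the VIP should appear here while nginx is down on the Master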
12、實作基于多虛拟路由的master/master模型
配置兩個虛拟路由即可,雙方互為主從
[root@Master keepalived]# vim keepalived.conf
[root@Master keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
# script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
# interval 1
# weight -2
#}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -3
fall 2
rise 1
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
track_script {
# chk_maintainace
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 52
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass network
}
virtual_ipaddress {
172.16.1.120/16 dev eth0 label eth0:1
}
track_script {
# chk_maintainace
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
# delay_loop 6
# lb_algo rr
# lb_kind DR
# nat_mask 255.255.0.0
# #persistence_timeout 50
# protocol TCP
# real_server 172.16.1.103 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
# real_server 172.16.1.104 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
#}
Now modify the configuration file on the backup node
[root@Slave keepalived]# vim keepalived.conf
You have new mail in /var/spool/mail/root
[root@Slave keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
#vrrp_script chk_maintainace {
# script "[[ -e /etc/keepalived/down ]] && exit 1 || exit 0"
# interval 1
# weight -2
#}
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight -3
fall 2
rise 1
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass soysauce
}
virtual_ipaddress {
172.16.1.110/16 dev eth0 label eth0:0
}
track_script {
# chk_maintainace
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass network
}
virtual_ipaddress {
172.16.1.120/16 dev eth0 label eth0:1
}
track_script {
# chk_maintainace
chk_nginx
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
#virtual_server 172.16.1.110 80 {
# delay_loop 6
# lb_algo rr
# lb_kind DR
# nat_mask 255.255.0.0
# #persistence_timeout 50
# protocol TCP
# real_server 172.16.1.103 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
# real_server 172.16.1.104 80 {
# weight 1
# HTTP_GET {
# url {
# path /
# status_code 200
# }
# connect_timeout 2
# nb_get_retry 3
# delay_before_retry 1
# }
# }
#}
Restart the keepalived service on both nodes
[root@Master keepalived]# service keepalived restart
Stopping keepalived: [ OK ]
Starting keepalived: [ OK ]
[root@Slave keepalived]# service keepalived restart
Stopping keepalived: [ OK ]
Starting keepalived: [ OK ]
Now check whether the VIP on each node is active
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.110/16 scope global eth0:0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
[root@Slave ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.120/16 scope global eth0:1
inet6 fe80::20c:29ff:fe66:34d1/64 scope link
valid_lft forever preferred_lft forever
Now take one of the nodes down
[root@Master keepalived]# service keepalived stop
Stopping keepalived: [ OK ]
You can see that the downed node's VIP has moved to the other node, which now holds both VIPs
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever
[root@Slave ~]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.120/16 scope global eth0:1
inet 172.16.1.110/16 scope global secondary eth0:0
inet6 fe80::20c:29ff:fe66:34d1/64 scope link
valid_lft forever preferred_lft forever
Now bring the downed node back online
[root@Master keepalived]# service keepalived start
Starting keepalived: [ OK ]
[root@Slave keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:66:34:d1 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.105/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.120/16 scope global eth0:1
inet6 fe80::20c:29ff:fe66:34d1/64 scope link
valid_lft forever preferred_lft forever
You can see that the address has moved back again
[root@Master keepalived]# ip addr show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:0c:29:fe:82:38 brd ff:ff:ff:ff:ff:ff
inet 172.16.1.101/24 brd 255.255.255.255 scope global eth0
inet 172.16.1.110/16 scope global eth0:0
inet6 fe80::20c:29ff:fefe:8238/64 scope link
valid_lft forever preferred_lft forever