I. Introduction to MMM
MMM (Multi-Master Replication Manager for MySQL) is a scalable suite of Perl scripts for monitoring, failover, and management of MySQL master-master replication (only one node is writable at any given time). MMM can also load-balance reads across the slaves, so it can be used to bring up virtual IPs on a group of replicating servers; in addition it ships scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication-failover solution; MMM provides server failover and thus MySQL high availability. Beyond floating IPs, when the current master goes down MMM automatically points the backend slaves at the new master for replication, with no manual change to the replication configuration. It is a relatively mature solution. For details see the official site: http://mysql-mmm.org

Pros: high availability, good scalability, and automatic failover. For master-master replication, only one database accepts writes at any given time, which guarantees data consistency; when the master goes down, the other master takes over immediately and the slaves switch over automatically, with no manual intervention.
Cons: the monitor node is a single point of failure (though it can be made highly available with keepalived or heartbeat); at least three nodes are required, so there is a minimum host count; read/write splitting is needed, and you have to implement it in the front-end application. Under very read/write-heavy workloads MMM is not very stable and may suffer replication lag or failed switchovers. MMM is not well suited to environments that demand high data safety while being busy with both reads and writes.
Applicable scenarios:
MMM suits deployments with heavy database traffic where read/write splitting is feasible.
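The read/write splitting that MMM assumes lives in the application tier: writes always go through the writer VIP, while reads can rotate over the reader VIPs. A minimal illustrative sketch of such a router (the VIPs match this article's deployment; the `SplitRouter` class itself is hypothetical and not part of MMM):

```python
from itertools import cycle

class SplitRouter:
    """Route writes to the single writer VIP; round-robin reads over reader VIPs."""
    def __init__(self, writer_vip, reader_vips):
        self.writer_vip = writer_vip
        self._readers = cycle(reader_vips)

    def endpoint(self, sql):
        # Very naive classification: SELECT statements are reads, everything else writes.
        if sql.lstrip().lower().startswith("select"):
            return next(self._readers)
        return self.writer_vip

router = SplitRouter("192.168.31.2",
                     ["192.168.31.3", "192.168.31.4", "192.168.31.5"])
print(router.endpoint("SELECT 1"))                  # a reader VIP
print(router.endpoint("INSERT INTO t VALUES (1)"))  # the writer VIP
```

A real deployment would do this inside a connection pool or a proxy layer; this only illustrates the routing decision.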
MMM's core functionality is provided by three scripts:
mmm_mond: the monitoring daemon that performs all monitoring work and decides on node removal and so on (mmm_mond runs periodic heartbeat checks; on failure it floats the writer VIP to the other master).
mmm_agentd: the agent daemon running on each MySQL server, exposing a simple set of remote services to the monitor node.
mmm_control: a command-line tool for managing the mmm_mond process.
For monitoring to work, the corresponding privileged accounts must be created in MySQL: an mmm_monitor user and an mmm_agent user; if you want to use MMM's backup tools, an mmm_tools user is also needed.
II. Deployment
1. Environment
OS: CentOS 7.2 (64-bit); database: MySQL 5.7.13
Disable SELinux.
Configure NTP to synchronize time.
Role | IP | hostname | server-id | write VIP | read VIP |
Master1 | 192.168.31.83 | master1 | 1 | 192.168.31.2 | |
Master2 (backup) | 192.168.31.141 | master2 | 2 | | 192.168.31.3 |
Slave1 | 192.168.31.250 | slave1 | 3 | | 192.168.31.4 |
Slave2 | 192.168.31.225 | slave2 | 4 | | 192.168.31.5 |
monitor | 192.168.31.106 | monitor1 | none | | |
2. On all hosts, add the following to /etc/hosts:
192.168.31.83 master1
192.168.31.141 master2
192.168.31.250 slave1
192.168.31.225 slave2
192.168.31.106 monitor1
Install perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64, and rrdtool-perl.x86_64 on all hosts:
#yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64
Note: installed from the CentOS 7 online yum repositories.
Install the required Perl modules:
#cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
3. Install MySQL 5.7 on master1, master2, slave1, and slave2 and configure replication
master1 and master2 are masters of each other; slave1 and slave2 are slaves of master1.
Add the following to /etc/my.cnf on each MySQL server; note that server-id must not be duplicated.
master1:
log-bin = mysql-bin
binlog_format = mixed
server-id = 1
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
master2:
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
slave1:
server-id = 3
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
slave2:
server-id = 4
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
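The auto-increment settings in the two master configurations above keep the masters from generating colliding IDs: with auto-increment-increment = 2 and offsets 1 and 2, master1 hands out odd IDs and master2 even ones. A quick sketch of that arithmetic (illustrative only):

```python
def auto_ids(offset, increment, count):
    """IDs a MySQL server generates with the given auto_increment_offset/increment."""
    return [offset + i * increment for i in range(count)]

master1_ids = auto_ids(offset=1, increment=2, count=5)
master2_ids = auto_ids(offset=2, increment=2, count=5)
print(master1_ids)  # [1, 3, 5, 7, 9]
print(master2_ids)  # [2, 4, 6, 8, 10]
assert not set(master1_ids) & set(master2_ids)  # the two masters never collide
```

This is why both masters can accept writes in turn (e.g. after a failover) without AUTO_INCREMENT conflicts.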
After modifying my.cnf, restart MySQL with systemctl restart mysqld.
If the four database hosts run a firewall, either disable it or create an access rule:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload
Replication setup (master1 and master2 as master-master; slave1 and slave2 as slaves of master1):
Grant on master1:
mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';
Grant on master2:
mysql> grant replication slave on *.* to rep@'192.168.31.%' identified by '123456';
Configure master2, slave1, and slave2 as slaves of master1:
Run show master status; on master1 to get the binlog file name and position:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
On master2, slave1, and slave2, run:
mysql> change master to master_host='192.168.31.83',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify replication:
On master2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
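Checking Slave_IO_Running and Slave_SQL_Running by eye can be scripted: the \G output is just "Key: Value" lines. A hypothetical helper (not part of MMM) that parses such output and reports whether replication is healthy:

```python
def replication_ok(show_slave_status_g):
    """Parse `SHOW SLAVE STATUS \\G` output; both replication threads must run."""
    status = {}
    for line in show_slave_status_g.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            status[key.strip()] = value.strip()
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

sample = """\
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes"""
print(replication_ok(sample))  # True
```

Feeding it the output of `mysql -e 'show slave status\G'` gives a simple health probe for scripts or cron jobs.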
Configure master1 as a slave of master2:
Run show master status; on master2 to get the binlog file name and position:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
On master1, run:
mysql> change master to master_host='192.168.31.141',master_port=3306,master_user='rep',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify replication:
On master1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
4. mysql-mmm configuration:
Create the accounts on the four MySQL nodes.
Create the agent account:
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.31.%' identified by '123456';
Create the monitor account:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.31.%' identified by '123456';
Note 1: because master-master and master-slave replication are already working, running these grants on master1 alone is enough.
Check that the monitor and agent accounts exist on master2, slave1, and slave2:
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.31.% |
| mmm_monitor | 192.168.31.% |
+-------------+--------------+
or
mysql> show grants for 'mmm_agent'@'192.168.31.%';
+-------------------------------------------------------------------------------+
| Grants for [email protected].%                                             |
+-------------------------------------------------------------------------------+
| GRANT PROCESS, SUPER, REPLICATION CLIENT ON *.* TO 'mmm_agent'@'192.168.31.%' |
+-------------------------------------------------------------------------------+
mysql> show grants for 'mmm_monitor'@'192.168.31.%';
+-------------------------------------------------------------------------------+
| Grants for [email protected].%                                           |
+-------------------------------------------------------------------------------+
| GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.31.%'               |
+-------------------------------------------------------------------------------+
Note 2:
mmm_monitor user: used by the MMM monitor for health checks against the MySQL server processes.
mmm_agent user: used by the MMM agent to toggle read-only mode, change the replication master, and so on.
5. Installing mysql-mmm
Install the monitor on the monitor host (192.168.31.106):
cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
Install the agent on the database servers (master1, master2, slave1, slave2):
cd /tmp
wget http://pkgs.fedoraproject.org/repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
6. Configuring MMM
Write the configuration files; they must be identical on all five hosts:
After installation, all configuration files are placed under /etc/mysql-mmm/. The monitor host and the database servers share a common file, mmm_common.conf, with the following content:
active_master_role writer # role of the active master; all DB servers must enable the read_only parameter, and on the writer the monitoring agent automatically turns read_only off
<host default>
cluster_interface eno16777736 # cluster network interface
pid_path /var/run/mmm_agentd.pid # pid file path
bin_path /usr/lib/mysql-mmm/ # path to executables
replication_user rep # replication user
replication_password 123456 # replication user's password
agent_user mmm_agent # agent user
agent_password 123456 # agent user's password
</host>
<host master1> # host name of master1
ip 192.168.31.83 # master1's IP
mode master # role attribute: master
peer master2 # host name of master1's peer server, i.e. master2
</host>
<host master2> # same idea as master1
ip 192.168.31.141
mode master
peer master1
</host>
<host slave1> # host name of a slave; repeat this block for additional slaves
ip 192.168.31.250 # the slave's IP
mode slave # role attribute: slave
</host>
<host slave2> # same idea as slave1
ip 192.168.31.225
mode slave
</host>
<role writer> # writer role
hosts master1,master2 # hosts that may take writes; to avoid writer switches (including ones caused by network delay) you can list only master1, but then if master1 fails MMM has no writer and only serves reads
ips 192.168.31.2 # virtual IP offered for writes
mode exclusive # exclusive: only one writer at a time, i.e. a single write IP
</role>
<role reader> # reader role
hosts master2,slave1,slave2 # hosts serving reads; the master could be listed here as well
ips 192.168.31.3, 192.168.31.4, 192.168.31.5 # virtual IPs offered for reads; these IPs are not mapped one-to-one to hosts, and the number of IPs need not equal the number of hosts; if they differ, one of the hosts will be assigned two IPs
mode balanced # balanced: load-balance reads
</role>
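How the balanced reader role spreads VIPs over hosts can be pictured as round-robin assignment: when the IP and host counts differ, one host simply carries more than one VIP. An illustrative sketch (the exact distribution MMM computes at runtime may differ, since it also rebalances on failures):

```python
def assign_vips(hosts, vips):
    """Round-robin reader VIPs over hosts; a host may end up with several VIPs."""
    assignment = {h: [] for h in hosts}
    for i, vip in enumerate(vips):
        assignment[hosts[i % len(hosts)]].append(vip)
    return assignment

hosts = ["master2", "slave1", "slave2"]
vips = ["192.168.31.3", "192.168.31.4", "192.168.31.5"]
print(assign_vips(hosts, vips))            # one VIP per host
print(assign_vips(["slave1", "slave2"], vips))  # with 2 hosts, one host gets two VIPs
```

This is exactly the situation described above: if a reader host goes offline, its VIP floats to one of the remaining reader hosts, which then serves two read VIPs.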
Copy this file to all the other servers, leaving the configuration unchanged:
#for host in master1 master2 slave1 slave2 ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done
Agent configuration
Edit /etc/mysql-mmm/mmm_agent.conf on the four MySQL nodes.
On the database servers there is one more file to modify, mmm_agent.conf; its content is:
include mmm_common.conf
this master1
Note: this file is configured only on the DB servers, not on the monitor; change the host name after `this` to the current server's host name.
Start the agent daemon
In the init script /etc/init.d/mysql-mmm-agent, add the following line directly below #!/bin/sh:
source /root/.bash_profile
Register it as a system service and enable it at boot:
#chkconfig --add mysql-mmm-agent
#chkconfig mysql-mmm-agent on
#/etc/init.d/mysql-mmm-agent start
Note: source /root/.bash_profile is added so that the mysql-mmm-agent service can start at boot.
The only difference between automatic and manual startup is that the latter activates a console, so a failure when starting as a service is likely caused by missing environment variables.
If the service fails to start, the error looks like this:
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_agentd line 7.
BEGIN failed--compilation aborted at /usr/sbin/mmm_agentd line 7.
failed
Fix:
# cpan Proc::Daemon
# cpan Log::Log4perl
# /etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon... Ok
# netstat -antp | grep mmm_agentd
tcp 0 0 192.168.31.83:9989 0.0.0.0:* LISTEN 9693/mmm_agentd
Configure the firewall:
firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload
Edit /etc/mysql-mmm/mmm_mon.conf on the monitor host:
include mmm_common.conf
<monitor>
ip 127.0.0.1 # for security, listen only on localhost; mmm_mond listens on port 9988 by default
pid_path /var/run/mmm_mond.pid
bin_path /usr/lib/mysql-mmm/
status_path /var/lib/misc/mmm_mond.status
ping_ips 192.168.31.83,192.168.31.141,192.168.31.250,192.168.31.225 # IPs used to test network reachability; the network is considered up as long as any one of them answers ping; do not list the local address here
auto_set_online 0 # delay before a recovered host is automatically set online; the default is 60s; 0 means set it online immediately
</monitor>
<check default>
check_period 5
trap_period 10
timeout 2
#restart_after 10000
max_backlog 86400
</check>
check_period
Description: interval between checks.
Default: 5s
trap_period
Description: if a node keeps failing a check continuously for trap_period seconds, it is considered failed.
Default: 10s
timeout
Description: timeout for a single check.
Default: 2s
restart_after
Description: restart the checker process after restart_after checks.
Default: 10000
max_backlog
Description: maximum replication backlog allowed by the rep_backlog check.
Default: 60
<host default>
monitor_user mmm_monitor # user for monitoring the DB servers
monitor_password 123456 # password for monitoring the DB servers
</host>
debug 0 # 0 = normal mode, 1 = debug mode
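The interplay of check_period, trap_period, and timeout above amounts to: a check runs every check_period seconds, each attempt gives up after timeout seconds, and only after failures have persisted for trap_period seconds is the node declared failed. A simplified sketch of that decision logic (illustrative, not MMM's actual code):

```python
def node_failed(failure_times, now, trap_period=10):
    """A node counts as failed once checks have been failing
    continuously for at least trap_period seconds."""
    if not failure_times:
        return False
    first_failure = failure_times[0]
    return now - first_failure >= trap_period

# Checks run every 5s (check_period); failures start at t=100.
failures = [100, 105]
print(node_failed(failures, now=105))          # False: only 5s of failures so far
print(node_failed(failures + [110], now=110))  # True: 10s of continuous failures
```

This is why a brief hiccup (one failed check) does not trigger a failover, while a sustained outage does.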
Start the monitor daemon:
In the init script /etc/init.d/mysql-mmm-monitor, add the following line directly below #!/bin/sh:
source /root/.bash_profile
Register it as a system service and enable it at boot:
#chkconfig --add mysql-mmm-monitor
#chkconfig mysql-mmm-monitor on
#/etc/init.d/mysql-mmm-monitor start
Startup error:
Starting MMM Monitor daemon: Can't locate Proc/Daemon.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/sbin/mmm_mond line 11.
BEGIN failed--compilation aborted at /usr/sbin/mmm_mond line 11.
failed
Fix: install the following Perl modules
#cpan Proc::Daemon
#cpan Log::Log4perl
[root@monitor1 ~]# /etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@monitor1 ~]# netstat -anpt | grep 9988
tcp 0 0 127.0.0.1:9988 0.0.0.0:* LISTEN 8546/mmm_mond
Note 1: whenever a configuration file is changed, on either the DB side or the monitor side, restart the agent and monitor daemons.
Note 2: MMM startup order: start the monitor first, then the agents.
Check the cluster status:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
如果伺服器狀态不是ONLINE,可以用如下指令将伺服器上線,例如:
#mmm_controlset_online主機名
例如:[root@monitor1 ~]#mmm_controlset_onlinemaster1
從上面的顯示可以看到,寫請求的VIP在master1上,所有從節點也都把master1當做主節點。
檢視是否啟用vip
[root@master1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:6d:2f:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.83/24 brd 192.168.31.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.31.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6d:2f82/64 scope link
valid_lft forever preferred_lft forever
[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35850sec preferred_lft 35850sec
inet 192.168.31.5/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
valid_lft forever preferred_lft forever
[root@slave1 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:02:21:19 brd ff:ff:ff:ff:ff:ff
inet 192.168.31.250/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35719sec preferred_lft 35719sec
inet 192.168.31.4/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe02:2119/64 scope link
valid_lft forever preferred_lft forever
[root@slave2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:e2:c7:fa brd ff:ff:ff:ff:ff:ff
inet 192.168.31.225/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35930sec preferred_lft 35930sec
inet 192.168.31.3/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fee2:c7fa/64 scope link
valid_lft forever preferred_lft forever
On master2, slave1, and slave2, check which master MySQL points to:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
MMM high-availability test:
Clients read and write through the VIPs; when a node fails, its VIP floats to another node, which takes over the service.
First check the status of the whole cluster; everything is normal:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
Simulate a master1 outage by stopping its MySQL service manually, then watch the monitor log for master1:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2017/01/09 22:02:55 WARN Check 'rep_threads' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:02:55 WARN Check 'rep_backlog' on 'master1' is in unknown state! Message: UNKNOWN: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:03:05 ERROR Check 'mysql' on 'master1' has failed for 10 seconds! Message: ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
2017/01/09 22:03:07 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/01/09 22:03:07 INFO Removing all roles from host 'master1':
2017/01/09 22:03:07 INFO Removed role 'writer(192.168.31.2)' from host 'master1'
2017/01/09 22:03:07 INFO Orphaned role 'writer(192.168.31.2)' has been assigned to 'master2'
檢視群集的最新狀态
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/HARD_OFFLINE. Roles:
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
The output shows that master1's state changed from ONLINE to HARD_OFFLINE and the writer VIP moved to master2.
Check the status of all DB servers in the cluster:
[root@monitor1 ~]# mmm_control checks all
master1 ping [last change: 2017/01/09 21:31:47] OK
master1 mysql [last change: 2017/01/09 22:03:07] ERROR: Connect error (host = 192.168.31.83:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.31.83' (111)
master1 rep_threads [last change: 2017/01/09 21:31:47] OK
master1 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
slave1 ping [last change: 2017/01/09 21:31:47] OK
slave1 mysql [last change: 2017/01/09 21:31:47] OK
slave1 rep_threads [last change: 2017/01/09 21:31:47] OK
slave1 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
master2 ping [last change: 2017/01/09 21:31:47] OK
master2 mysql [last change: 2017/01/09 21:57:32] OK
master2 rep_threads [last change: 2017/01/09 21:31:47] OK
master2 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
slave2 ping [last change: 2017/01/09 21:31:47] OK
slave2 mysql [last change: 2017/01/09 21:31:47] OK
slave2 rep_threads [last change: 2017/01/09 21:31:47] OK
slave2 rep_backlog [last change: 2017/01/09 21:31:47] OK: Backlog is null
Since master1 still answers ping, only the MySQL service is down.
Check master2's IP addresses:
[root@master2 ~]# ip addr show dev eno16777736
eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:75:1a:9c brd ff:ff:ff:ff:ff:ff
inet 192.168.31.141/24 brd 192.168.31.255 scope global dynamic eno16777736
valid_lft 35519sec preferred_lft 35519sec
inet 192.168.31.5/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.31.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe75:1a9c/64 scope link
valid_lft forever preferred_lft forever
On slave1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
On slave2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
Master_User: rep
Master_Port: 3306
Start master1's MySQL service again and watch the monitor log for master1:
[root@monitor1 ~]# tail -f /var/log/mysql-mmm/mmm_mond.log
2017/01/09 22:16:56 INFO Check 'mysql' on 'master1' is ok!
2017/01/09 22:16:56 INFO Check 'rep_backlog' on 'master1' is ok!
2017/01/09 22:16:56 INFO Check 'rep_threads' on 'master1' is ok!
2017/01/09 22:16:59 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
The log shows that master1's state changed from HARD_OFFLINE to AWAITING_RECOVERY.
Bring the server online with:
[root@monitor1 ~]# mmm_control set_online master1
Check the latest cluster status:
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles:
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5), writer(192.168.31.2)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
As you can see, a recovered master does not take the writer role back; it waits until the current master fails again.
Summary
(1) An outage of master2, the standby master, does not affect the cluster; only master2's reader role is removed.
(2) When master1, the active master, goes down, master2 takes over the writer role, and slave1 and slave2 automatically change master to master2 and replicate from it.
(3) If master1 goes down while master2's replication is lagging behind it, master2 still becomes the writable master, and data consistency cannot be guaranteed.
Likewise, if master2, slave1, and slave2 are lagging behind master1 when master1 fails, slave1 and slave2 wait until they have caught up before repointing to the new master master2; consistency still cannot be guaranteed.
(4) When using the MMM HA architecture, give the active and standby masters identical hardware, and additionally enable semi-synchronous replication for better safety, or use MariaDB / MySQL 5.7 multi-threaded slave replication to improve replication performance.
Appendix:
1. Log files:
Log files are often the key to diagnosing problems, so make good use of them.
DB side: /var/log/mysql-mmm/mmm_agentd.log
Monitor side: /var/log/mysql-mmm/mmm_mond.log
2. Commands:
mmm_agentd: startup program for the DB agent daemon
mmm_mond: startup program for the monitor daemon
mmm_backup: backup tool
mmm_restore: restore tool
mmm_control: monitor control tool
The DB servers have only mmm_agentd; all the others live on the monitor server.
3. mmm_control usage
mmm_control can be used to inspect the cluster status, switch the writer, set hosts online/offline, and so on.
Valid commands are:
help - show this message # help
ping - ping monitor # check whether the monitor responds
show - show status # show cluster online status
checks [<host>|all [<check>|all]] - show checks status # show monitoring check results
set_online <host> - set host <host> online
set_offline <host> - set host <host> offline
mode - print current mode # print the current mode
set_active - switch into active mode
set_manual - switch into manual mode
set_passive - switch into passive mode
move_role [--force] <role> <host> - move exclusive role <role> to host <host> # move the writer role to the given host (only use --force if you know what you are doing!)
set_ip <ip> <host> - set role with ip <ip> to host <host>
Check the status of all DB servers in the cluster:
[root@monitor1 ~]# mmm_control checks all
The checks include: ping, whether MySQL is running, whether the replication threads are healthy, etc.
Check the cluster's online status:
[root@monitor1 ~]# mmm_control show
Set a given host offline:
[root@monitor1 ~]# mmm_control set_offline slave2
Set a given host online:
[root@monitor1 ~]# mmm_control set_online slave2
Perform a writer switch (manual switchover):
First check which master the slave currently points to:
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.141
To switch the writer, make sure the writer role in mmm_common.conf lists the target host, otherwise the switch will fail:
[root@monitor1 ~]# mmm_control move_role writer master1
OK: Role 'writer' has been moved from 'master2' to 'master1'. Now you can wait some time and check new roles info!
[root@monitor1 ~]# mmm_control show
master1(192.168.31.83) master/ONLINE. Roles: writer(192.168.31.2)
master2(192.168.31.141) master/ONLINE. Roles: reader(192.168.31.5)
slave1(192.168.31.250) slave/ONLINE. Roles: reader(192.168.31.4)
slave2(192.168.31.225) slave/ONLINE. Roles: reader(192.168.31.3)
The slaves automatically switched to the new master:
[root@slave2 ~]# mysql -uroot -p123456 -e 'show slave status\G'
mysql: [Warning] Using a password on the command line interface can be insecure.
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.31.83
4. Other notes
If you do not want the writer to switch from the master to the backup (note that replication lag also triggers writer VIP switches), remove the backup host from the <role writer> section of /etc/mysql-mmm/mmm_common.conf:
<role writer> # writer role
hosts master1 # only one host configured here
ips 192.168.31.2 # virtual IP offered for writes
mode exclusive # exclusive: only one writer at a time, i.e. a single write IP
</role>
With this configuration, if master1 fails the writer role will not move to master2 and the slaves will not repoint to a new master; the MMM cluster simply stops serving writes.
5. Summary
1. The virtual IPs for reads and writes are controlled by the monitor. If the monitor is not running, the DB servers are not assigned VIPs; however, VIPs that were already assigned are not released immediately when the monitor stops, and external programs can still connect through them (as long as the network is not restarted). The upside is that the reliability requirements on the monitor are lower; the downside is that if a DB server fails during that time no failover can happen: the existing VIPs stay where they are, and the failed server's VIP simply becomes unreachable.
2. The agent is driven by the monitor to perform writer switches, slave repointing, and other operations. If the monitor is down, the agent serves little purpose and cannot handle failures by itself.
3. The monitor watches the state of the DB servers, including the MySQL service, whether the server is up, replication-thread health, and replication lag; it also drives the agents to handle failures.
4. The monitor polls the DB servers every few seconds; when a server recovers from a failure, the monitor automatically sets it online after 60s (the default, configurable via the auto_set_online parameter in the monitor configuration). A cluster server moves through three states: HARD_OFFLINE → AWAITING_RECOVERY → ONLINE.
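The state progression in point 4 can be sketched as a tiny state machine driven by the mysql health check and the auto_set_online timer (illustrative only; MMM's real monitor has more states, such as REPLICATION_FAIL):

```python
def next_state(state, mysql_ok, seconds_recovered, auto_set_online=60):
    """Simplified MMM host-state transitions driven by the mysql check."""
    if state == "ONLINE" and not mysql_ok:
        return "HARD_OFFLINE"
    if state == "HARD_OFFLINE" and mysql_ok:
        return "AWAITING_RECOVERY"
    if state == "AWAITING_RECOVERY" and mysql_ok and seconds_recovered >= auto_set_online:
        return "ONLINE"  # auto_set_online elapsed (0 means immediately)
    return state

s = "ONLINE"
s = next_state(s, mysql_ok=False, seconds_recovered=0)  # -> HARD_OFFLINE
s = next_state(s, mysql_ok=True, seconds_recovered=0)   # -> AWAITING_RECOVERY
s = next_state(s, mysql_ok=True, seconds_recovered=60)  # -> ONLINE
print(s)  # ONLINE
```

With auto_set_online 0, as in this article's mmm_mon.conf, the AWAITING_RECOVERY → ONLINE transition happens immediately; the failover test above used the default behavior, which is why mmm_control set_online was needed.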