
Consul Cluster High Availability Testing: Simulating Failures in a Consul Cluster

Through failover testing of a Consul cluster, we can see how its high-availability mechanism works in practice.

| Consul node | IP | Role | Consul version | Log path |
| --- | --- | --- | --- | --- |
| consul01 | 192.168.101.11 | server | 0.9.3 | /var/log/consul |
| consul02 | 192.168.101.12 | server | 0.9.3 | /var/log/consul |
| consul03 | 192.168.101.13 | server (leader) | 0.9.3 | /var/log/consul |
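For reference, here is a minimal sketch of how a three-server cluster like this is typically started. The article does not show the original startup command or config, so every flag value and path below is illustrative:

```shell
# Illustrative only -- the original startup config is not shown in the article.
# On consul01 (repeat on consul02/consul03 with their own -node and -bind values):
./consul agent -server -bootstrap-expect=3 \
    -node=consul01 -bind=192.168.101.11 \
    -datacenter=dc -data-dir=/opt/consul/data \
    -retry-join=192.168.101.12 -retry-join=192.168.101.13 \
    >> /var/log/consul 2>&1 &
```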

Note: we are verifying the high availability of the Consul cluster itself, not Consul's built-in Geo failover (the failover feature for registered services).

目前叢集初始狀态(client節點忽略)

```shell
[root@consul01 consul]# ./consul operator raft list-peers
Node      ID                   Address              State     Voter  RaftProtocol
consul02  192.168.101.12:8300  192.168.101.12:8300  follower  true   2
consul03  192.168.101.13:8300  192.168.101.13:8300  leader    true   2
consul01  192.168.101.11:8300  192.168.101.11:8300  follower  true   2
[root@consul01 consul]# ./consul members
Node      Address              Status  Type    Build  Protocol  DC  Segment
consul01  192.168.101.11:8301  alive   server  0.9.3  2         dc
consul02  192.168.101.12:8301  alive   server  0.9.3  2         dc
consul03  192.168.101.13:8301  alive   server  0.9.3  2         dc
...
```

The Consul deployment in the TSOP domain is a three-server cluster. With three servers the Raft quorum is two, so in theory the cluster tolerates at most one failed server node; we therefore test whether the failure of a single server node affects the cluster.
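Before breaking anything, the current leader and the Raft peer set can also be read from the status endpoints of the HTTP API. A quick sketch, assuming the API listens on the default 127.0.0.1:8500 (the article itself only uses the CLI):

```shell
# Current leader's Raft address, e.g. "192.168.101.13:8300"
curl -s http://127.0.0.1:8500/v1/status/leader
# Full Raft peer set; with 3 servers the quorum is 2, so one failure is tolerable
curl -s http://127.0.0.1:8500/v1/status/peers
```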

Consul cluster simulated-failure tests

§1. Stop a follower server node

We use node consul01 as the example:

```shell
[root@consul01 consul]# systemctl stop consul
```
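The later `members` output shows consul01 as `left` rather than `failed`, which suggests the agent departed gracefully on shutdown (for example via `leave_on_terminate` or a `consul leave` in the unit's stop logic; the service unit file is not shown in the article). The same graceful departure can be triggered by hand:

```shell
# Tell the local agent to leave the cluster gracefully and shut down;
# peers then mark the node "left" rather than "failed".
./consul leave
```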
           

Log messages on the other nodes:

```shell
[root@consul02 ~]# tail -100 /var/log/consul
    2019/02/12 02:30:38 [INFO] serf: EventMemberFailed: consul01.dc 192.168.101.11
    2019/02/12 02:30:38 [INFO] consul: Handled member-failed event for server "consul01.dc" in area "wan"
    2019/02/12 02:30:39 [INFO] serf: EventMemberLeave: consul01 192.168.101.11
```

View the cluster state from one of the other nodes:

```shell
[root@consul02 consul]# ./consul operator raft list-peers
Node      ID                   Address              State     Voter  RaftProtocol
consul02  192.168.101.12:8300  192.168.101.12:8300  follower  true   2
consul03  192.168.101.13:8300  192.168.101.13:8300  leader    true   2
[root@consul02 consul]# ./consul members
Node      Address              Status  Type    Build  Protocol  DC  Segment
consul01  192.168.101.11:8301  left    server  0.9.3  2         dc
consul02  192.168.101.12:8301  alive   server  0.9.3  2         dc
consul03  192.168.101.13:8301  alive   server  0.9.3  2         dc
...
```

Check whether the cluster is still working by querying the registered services (if no services are registered, you can create one manually through the Consul API, as in the sketch below):
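A minimal sketch of registering a throwaway service through the agent HTTP API, in case the catalog is empty; the service name `test-svc`, the port, and the default HTTP address 127.0.0.1:8500 are all assumptions for illustration:

```shell
# Register a throwaway service with the local agent (hypothetical name/port)
curl -s -X PUT http://127.0.0.1:8500/v1/agent/service/register \
     -d '{"Name": "test-svc", "Port": 8080}'
```

In this test cluster the catalog already has services, as the query below shows.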

```shell
[root@consul02 consul]# ./consul catalog services
consul
test-csdemo-v0-snapshot
test-zuul-v0-snapshot
```

As shown above, the stopped server node is in the `left` state, but the cluster remains fully available: two of the three servers are alive, which still satisfies the quorum of two.

Therefore, stopping a single follower server node does not affect the Consul cluster's service.

恢複此server節點```shell[[email protected] consul]# ./consul operator raft list-peersNode ID Address State Voter RaftProtocolconsul02 192.168.101.12:8300 192.168.101.12:8300 follower true 2consul03 192.168.101.13:8300 192.168.101.13:8300 leader true 2consul01 192.168.101.11:8300 192.168.101.11:8300 follower true 2[[email protected] consul]# ./consul membersNode Address Status Type Build Protocol DC Segmentconsul01 192.168.101.11:8301 alive server 0.9.3 2 dc consul02 192.168.101.12:8301 alive server 0.9.3 2 dc consul03 192.168.101.13:8301 alive server 0.9.3 2 dc ...
           

The other nodes detect the restarted node and add it back into the cluster:

```shell
[root@consul02 ~]# tail -100 /var/log/consul
    2019/02/12 02:43:51 [INFO] serf: EventMemberJoin: consul01.dc 192.168.101.11
    2019/02/12 02:43:51 [INFO] consul: Handled member-join event for server "consul01.dc" in area "wan"
    2019/02/12 02:43:51 [INFO] serf: EventMemberJoin: consul01 192.168.101.11
    2019/02/12 02:43:51 [INFO] consul: Adding LAN server consul01 (Addr: tcp/192.168.101.11:8300) (DC: dc)
```
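The rejoin here happens automatically, presumably via a retry-join setting in the agent configuration (which the article does not show). If an agent did not rejoin on its own, it could be joined back manually from the restarted node; a sketch:

```shell
# Join the local agent to any live member of the cluster
./consul join 192.168.101.12 192.168.101.13
```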
           

§2. Stop the leader server node

We use node consul03 (the current leader) as the example:

```shell
[root@consul03 consul]# systemctl stop consul
```

The remaining follower server nodes detect that the leader is down and elect a new leader:

```shell
[root@consul02 ~]# tail -100 /var/log/consul
    2019/02/12 02:48:27 [INFO] serf: EventMemberLeave: consul03.dc 192.168.101.13
    2019/02/12 02:48:27 [INFO] consul: Handled member-leave event for server "consul03.dc" in area "wan"
    2019/02/12 02:48:28 [INFO] serf: EventMemberLeave: consul03 192.168.101.13
    2019/02/12 02:48:28 [INFO] consul: Removing LAN server consul03 (Addr: tcp/192.168.101.13:8300) (DC: dc)
    2019/02/12 02:48:37 [WARN] raft: Rejecting vote request from 192.168.101.11:8300 since we have a leader: 192.168.101.13:8300
    2019/02/12 02:48:39 [ERR] agent: Coordinate update error: No cluster leader
    2019/02/12 02:48:39 [WARN] raft: Heartbeat timeout from "192.168.101.13:8300" reached, starting election
    2019/02/12 02:48:39 [INFO] raft: Node at 192.168.101.12:8300 [Candidate] entering Candidate state in term 5
    2019/02/12 02:48:43 [ERR] http: Request GET /v1/catalog/services, error: No cluster leader from=127.0.0.1:44370
    2019/02/12 02:48:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36548
    2019/02/12 02:48:44 [INFO] raft: Node at 192.168.101.12:8300 [Follower] entering Follower state (Leader: "")
    2019/02/12 02:48:44 [INFO] consul: New leader elected: consul01
```
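Note the `No cluster leader` errors between 02:48:39 and 02:48:44: while the election runs, the HTTP API rejects requests, so the cluster is briefly unavailable (roughly 15 seconds here from leader shutdown to the new leader). A simple way to observe that window from any node, assuming the default HTTP port 8500:

```shell
# Poll the catalog once a second; expect HTTP 500 while there is no leader
while true; do
    curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8500/v1/catalog/services
    sleep 1
done
```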
           

The logs show that consul01 has been elected as the new leader. Check the cluster state:

```shell
[root@consul01 consul]# ./consul operator raft list-peers
Node      ID                   Address              State     Voter  RaftProtocol
consul02  192.168.101.12:8300  192.168.101.12:8300  follower  true   2
consul01  192.168.101.11:8300  192.168.101.11:8300  leader    true   2
[root@consul01 consul]# ./consul members
Node      Address              Status  Type    Build  Protocol  DC  Segment
consul01  192.168.101.11:8301  alive   server  0.9.3  2         dc
consul02  192.168.101.12:8301  alive   server  0.9.3  2         dc
consul03  192.168.101.13:8301  left    server  0.9.3  2         dc
```

The stopped leader node is now in the `left` state, but once the new leader is elected the cluster is available again. Verify by querying the services:

```shell
[root@consul01 consul]# ./consul catalog services
consul
test-csdemo-v0-snapshot
test-zuul-v0-snapshot
```

Therefore, stopping the leader server node does not affect the Consul cluster's service either, apart from the short election window.

我們再恢複此節點到consul叢集中```shell[[email protected] consul]# systemctl start consul
           

Its own logs show that it now comes back as a follower server:

```shell
[root@consul03 ~]# tail -f /var/log/consul
    2019/02/12 03:01:33 [INFO] raft: Node at 192.168.101.13:8300 [Follower] entering Follower state (Leader: "")
    2019/02/12 03:01:33 [INFO] serf: Ignoring previous leave in snapshot
    2019/02/12 03:01:33 [INFO] agent: Retry join LAN is supported for: aws azure gce softlayer
    2019/02/12 03:01:33 [INFO] agent: Joining LAN cluster...
    2019/02/12 03:01:33 [INFO] agent: (LAN) joining: [consul01 consul02 consul03]
```
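The node's Raft role can also be checked directly with `consul info` instead of tailing logs; a sketch (the grep pattern is just one convenient filter):

```shell
# On consul03: the raft section should report "state = Follower" and
# "leader = false", confirming the node came back as a follower
./consul info | grep -E 'state|leader'
```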
           

The new leader and the other follower also update their membership and Raft configuration:

```shell
[root@consul01 ~]# tail -f /var/log/consul
    2019/02/12 03:01:33 [INFO] serf: EventMemberJoin: consul03.dc 192.168.101.13
    2019/02/12 03:01:33 [INFO] consul: Handled member-join event for server "consul03.dc" in area "wan"
    2019/02/12 03:01:33 [INFO] serf: EventMemberJoin: consul03 192.168.101.13
    2019/02/12 03:01:33 [INFO] consul: Adding LAN server consul03 (Addr: tcp/192.168.101.13:8300) (DC: dc)
    2019/02/12 03:01:33 [INFO] raft: Updating configuration with AddStaging (192.168.101.13:8300, 192.168.101.13:8300) to [{Suffrage:Voter ID:192.168.101.12:8300 Address:192.168.101.12:8300} {Suffrage:Voter ID:192.168.101.11:8300 Address:192.168.101.11:8300} {Suffrage:Voter ID:192.168.101.13:8300 Address:192.168.101.13:8300}]
    2019/02/12 03:01:33 [INFO] raft: Added peer 192.168.101.13:8300, starting replication
    2019/02/12 03:01:33 [WARN] raft: AppendEntries to {Voter 192.168.101.13:8300 192.168.101.13:8300} rejected, sending older logs (next: 394016)
    2019/02/12 03:01:33 [INFO] consul: member 'consul03' joined, marking health alive
    2019/02/12 03:01:33 [INFO] raft: pipelining replication to peer {Voter 192.168.101.13:8300 192.168.101.13:8300}
```

Check the cluster state once more:

```shell
[root@consul01 consul]# ./consul operator raft list-peers
Node      ID                   Address              State     Voter  RaftProtocol
consul02  192.168.101.12:8300  192.168.101.12:8300  follower  true   2
consul01  192.168.101.11:8300  192.168.101.11:8300  leader    true   2
consul03  192.168.101.13:8300  192.168.101.13:8300  follower  true   2
[root@consul01 consul]# ./consul members
Node      Address              Status  Type    Build  Protocol  DC  Segment
consul01  192.168.101.11:8301  alive   server  0.9.3  2         dc
consul02  192.168.101.12:8301  alive   server  0.9.3  2         dc
consul03  192.168.101.13:8301  alive   server  0.9.3  2         dc
```

All cluster nodes are healthy again. Only the leader role has moved to a different node, which does not affect the service the cluster provides.
