Starting Consul
Once Consul has been installed and configured, a single command is enough to start an agent. The simplest way to start it:
consul agent -dev
Run the command above in a new terminal.
[email protected]:~$ consul agent -dev
==> Starting Consul agent...
==> Consul agent running!
Version: 'v1.5.1'
Node ID: '808644da-c526-efa2-4f37-fff96168dcd1'
Node name: 'localhost'
Datacenter: 'dc1' (Segment: '')
Server: true (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2019/06/17 15:36:18 [DEBUG] agent: Using random ID "808644da-c526-efa2-4f37-fff96168dcd1" as node ID
2019/06/17 15:36:18 [DEBUG] tlsutil: Update with version 1
2019/06/17 15:36:18 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
2019/06/17 15:36:18 [DEBUG] tlsutil: IncomingRPCConfig with version 1
2019/06/17 15:36:18 [DEBUG] tlsutil: OutgoingRPCWrapper with version 1
2019/06/17 15:36:18 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:808644da-c526-efa2-4f37-fff96168dcd1 Address:127.0.0.1:8300}]
2019/06/17 15:36:18 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
2019/06/17 15:36:18 [INFO] serf: EventMemberJoin: localhost.dc1 127.0.0.1
2019/06/17 15:36:18 [INFO] serf: EventMemberJoin: localhost 127.0.0.1
2019/06/17 15:36:18 [INFO] consul: Handled member-join event for server "localhost.dc1" in area "wan"
2019/06/17 15:36:18 [INFO] consul: Adding LAN server localhost (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2019/06/17 15:36:18 [DEBUG] agent/proxy: managed Connect proxy manager started
2019/06/17 15:36:18 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2019/06/17 15:36:18 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2019/06/17 15:36:18 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2019/06/17 15:36:18 [INFO] agent: started state syncer
2019/06/17 15:36:18 [INFO] agent: Started gRPC server on 127.0.0.1:8502 (tcp)
2019/06/17 15:36:18 [WARN] raft: Heartbeat timeout from "" reached, starting election
2019/06/17 15:36:18 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
2019/06/17 15:36:18 [DEBUG] raft: Votes needed: 1
2019/06/17 15:36:18 [DEBUG] raft: Vote granted from 808644da-c526-efa2-4f37-fff96168dcd1 in term 2. Tally: 1
2019/06/17 15:36:18 [INFO] raft: Election won. Tally: 1
2019/06/17 15:36:18 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
2019/06/17 15:36:18 [INFO] consul: cluster leadership acquired
2019/06/17 15:36:18 [INFO] consul: New leader elected: localhost
2019/06/17 15:36:18 [INFO] connect: initialized primary datacenter CA with provider "consul"
2019/06/17 15:36:18 [DEBUG] consul: Skipping self join check for "localhost" since the cluster is too small
2019/06/17 15:36:18 [INFO] consul: member 'localhost' joined, marking health alive
2019/06/17 15:36:18 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/06/17 15:36:18 [INFO] agent: Synced node info
2019/06/17 15:36:18 [DEBUG] agent: Node info in sync
2019/06/17 15:36:18 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/06/17 15:36:18 [DEBUG] agent: Node info in sync
2019/06/17 15:37:18 [DEBUG] consul: Skipping self join check for "localhost" since the cluster is too small
2019/06/17 15:37:49 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/06/17 15:37:49 [DEBUG] agent: Node info in sync
2019/06/17 15:38:18 [DEBUG] manager: Rebalanced 1 servers, next active server is localhost.dc1 (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2019/06/17 15:38:18 [DEBUG] consul: Skipping self join check for "localhost" since the cluster is too small
The startup log is shown above; the following lines are worth explaining:
-dev: one of several Consul start-up modes; "dev" is short for "development". Dev mode exists only to start a single-node agent quickly and conveniently, for example in the current environment; as the shutdown log later shows, dev mode disables persistence.
Consul agent running!: the agent has started and is running normally.
Datacenter: 'dc1': the datacenter this node belongs to is named dc1.
Server: true (Bootstrap: false): this node runs in the Server role. Consul nodes are collectively called agents, and an agent is one of two types: Client or Server.
raft: Heartbeat timeout from "" reached, starting election: the Raft algorithm begins leader election.
consul: cluster leadership acquired / consul: New leader elected: localhost: leader election has finished, and the only local node has been elected leader.
consul: member 'localhost' joined, marking health alive: the localhost node is healthy and running normally.
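The leader election visible in the log can also be confirmed through Consul's HTTP API. Below is a minimal Python sketch, assuming a dev agent listening on the default 127.0.0.1:8500 shown in the startup log; `/v1/status/leader` is the Consul HTTP API endpoint that reports the current Raft leader.

```python
import urllib.request
import urllib.error

def get_leader(base="http://127.0.0.1:8500"):
    """Return the current Raft leader's address, or None if the agent is unreachable."""
    try:
        with urllib.request.urlopen(base + "/v1/status/leader", timeout=2) as resp:
            # The endpoint returns a JSON-quoted string such as "127.0.0.1:8300".
            return resp.read().decode().strip().strip('"')
    except (urllib.error.URLError, OSError):
        return None

print(get_leader())  # e.g. 127.0.0.1:8300 while the dev agent is up, None otherwise
```

With the dev agent running, the reported leader address matches the Raft address from the log above (127.0.0.1:8300).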
Viewing Consul node information
After Consul has started, node information can be inspected from the command line. Leaving the terminal that is running Consul open, start a new terminal window and run:
consul members
Node       Address         Status  Type    Build  Protocol  DC   Segment
localhost  127.0.0.1:8301  alive   server  1.5.1  2         dc1
Output fields:
Address: the node's address
Status: alive means the node is up and healthy
Type: the node type, one of two: server or client
DC: short for datacenter; dc1 means this node belongs to the datacenter named dc1
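The columns above can also be pulled apart programmatically. The following illustrative Python sketch parses the sample `consul members` output from this section; the column list is taken from the header row shown above.

```python
# Column names as printed in the `consul members` header above.
COLUMNS = ["Node", "Address", "Status", "Type", "Build", "Protocol", "DC"]

def parse_members(output):
    """Parse `consul members` output (header row + data rows) into dicts."""
    rows = []
    lines = [l for l in output.strip().splitlines() if l.strip()]
    for line in lines[1:]:  # skip the header row
        rows.append(dict(zip(COLUMNS, line.split())))
    return rows

sample = """\
Node       Address         Status  Type    Build  Protocol  DC
localhost  127.0.0.1:8301  alive   server  1.5.1  2         dc1
"""
for member in parse_members(sample):
    print(member["Node"], member["Status"], member["Type"])
# localhost alive server
```

Splitting on whitespace is enough here because none of the column values contain spaces.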
Accessing the web UI
Besides starting dev mode from the terminal and inspecting nodes with the members command, node information can also be viewed over HTTP in a browser. Once Consul is up and running, open a browser and enter the agent's HTTP address (the Client Addr and HTTP port 8500 shown in the startup log) into the address bar. Node information is then displayed as in the figure below:
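The same node information shown in the UI can be fetched directly over the HTTP API. A minimal Python sketch, assuming the dev agent's default HTTP address of 127.0.0.1:8500; `/v1/catalog/nodes` is the Consul HTTP API endpoint listing the nodes in the catalog.

```python
import json
import urllib.request
import urllib.error

def catalog_nodes(base="http://127.0.0.1:8500"):
    """Return the node list from /v1/catalog/nodes, or None if the agent is unreachable."""
    try:
        with urllib.request.urlopen(base + "/v1/catalog/nodes", timeout=2) as resp:
            return json.loads(resp.read().decode())
    except (urllib.error.URLError, OSError):
        return None

nodes = catalog_nodes()
if nodes is not None:
    for node in nodes:
        # In the dev setup above this prints the single node: localhost 127.0.0.1
        print(node["Node"], node["Address"])
```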
Stopping the service
In the terminal where the node is running, press Ctrl+C to stop it.
2019/06/17 16:21:43 [INFO] agent: Caught signal: interrupt
2019/06/17 16:21:43 [INFO] agent: Graceful shutdown disabled. Exiting
2019/06/17 16:21:43 [INFO] agent: Requesting shutdown
2019/06/17 16:21:43 [WARN] agent: dev mode disabled persistence, killing all proxies since we can't recover them
2019/06/17 16:21:43 [DEBUG] agent/proxy: Stopping managed Connect proxy manager
2019/06/17 16:21:43 [INFO] consul: shutting down server
2019/06/17 16:21:43 [WARN] serf: Shutdown without a Leave
2019/06/17 16:21:43 [WARN] serf: Shutdown without a Leave
2019/06/17 16:21:43 [INFO] manager: shutting down
2019/06/17 16:21:43 [INFO] agent: consul server down
2019/06/17 16:21:43 [INFO] agent: shutdown complete
2019/06/17 16:21:43 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (tcp)
2019/06/17 16:21:43 [INFO] agent: Stopping DNS server 127.0.0.1:8600 (udp)
2019/06/17 16:21:43 [INFO] agent: Stopping HTTP server 127.0.0.1:8500 (tcp)
2019/06/17 16:21:43 [INFO] agent: Waiting for endpoints to shut down
2019/06/17 16:21:43 [INFO] agent: Endpoints down
2019/06/17 16:21:43 [INFO] agent: Exit code: 1
The node has now exited.
Diagram of Consul dev mode
The above covered starting and running a Consul node in consul agent -dev mode. The cluster contains only one node, and that single node is elected leader.