
【Microservices: Consul Service Registration and Discovery】

Consul is an open-source distributed service discovery and configuration management system, developed by HashiCorp in Go.

Service Discovery and Configuration Made Easy

Consul is a distributed, highly available system. This section will cover the basics, purposely omitting some unnecessary detail, so you can get a quick understanding of how Consul works. For more detail, please refer to the in-depth architecture overview.

Every node that provides services to Consul runs a Consul agent. Running an agent is not required for discovering other services or getting/setting key/value data. The agent is responsible for health checking the services on the node as well as the node itself.

The agents talk to one or more Consul servers. The Consul servers are where data is stored and replicated. The servers themselves elect a leader. While Consul can function with one server, 3 to 5 is recommended to avoid failure scenarios leading to data loss. A cluster of Consul servers is recommended for each datacenter.
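The "3 to 5 servers" recommendation follows directly from the majority rule of Raft, which the servers use to elect a leader and replicate data: a cluster of n servers needs ⌊n/2⌋ + 1 votes to agree, so an even-sized cluster adds no extra fault tolerance. A quick illustration in plain Python (not part of Consul itself):

```python
def quorum(servers: int) -> int:
    """Minimum number of servers that must agree (a Raft majority)."""
    return servers // 2 + 1

def failures_tolerated(servers: int) -> int:
    """How many servers can be lost while a quorum still exists."""
    return servers - quorum(servers)

for n in (1, 2, 3, 5):
    print(f"{n} server(s): quorum={quorum(n)}, tolerates {failures_tolerated(n)} failure(s)")
```

A single server tolerates zero failures (hence the data-loss warning above), 3 servers tolerate one, and 5 tolerate two; going from 3 to 4 gains nothing, which is why odd sizes are preferred.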

Components of your infrastructure that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.

Each datacenter runs a cluster of Consul servers. When a cross-datacenter service discovery or configuration request is made, the local Consul servers forward the request to the remote datacenter and return the result.

Consul has many advantages, including:

Based on the Raft protocol, which is relatively simple

Supports health checks

Supports both HTTP and DNS interfaces

Supports cross-datacenter WAN clusters

Provides a web UI

Cross-platform: supports Linux, macOS, and Windows

Service Discovery

HashiCorp Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. Register external services such as SaaS providers as well.
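As an illustrative sketch (not official client code), registering via the HTTP interface means sending a JSON body to the local agent with a PUT. The field names below follow Consul's /v1/agent/service/register endpoint; the service name, address, and port are made-up values:

```python
import json

def registration_payload(name, service_id, address, port, tags=()):
    # JSON body for PUT /v1/agent/service/register on the local agent.
    # Field names follow the Consul agent API; values here are examples.
    return json.dumps({
        "Name": name,
        "ID": service_id,
        "Address": address,
        "Port": port,
        "Tags": list(tags),
    })

payload = registration_payload("web", "web-1", "10.0.0.5", 8080, ["v1"])
# With an agent running on the default port, this body would be sent roughly as:
#   urllib.request.Request("http://127.0.0.1:8500/v1/agent/service/register",
#                          data=payload.encode(), method="PUT")
```

The agent, not the service, talks to the servers, which is why registration targets 127.0.0.1 rather than a server address.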

Failure Detection

Pairing service discovery with health checking prevents routing requests to unhealthy hosts and enables services to easily provide circuit breakers.
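A health check is typically attached to the registration itself. In the hypothetical sketch below, the agent would poll the given HTTP endpoint and mark the service critical when it fails, after which discovery queries stop routing to that instance. The field names follow Consul's check definition format; the endpoint URL and timings are assumptions:

```python
import json

# A service registration with an attached HTTP health check.
# The /health endpoint and the interval/timeout values are examples.
registration = {
    "Name": "web",
    "Port": 8080,
    "Check": {
        "HTTP": "http://127.0.0.1:8080/health",
        "Interval": "10s",
        "Timeout": "1s",
        # Remove the instance entirely if it stays critical this long:
        "DeregisterCriticalServiceAfter": "1m",
    },
}
body = json.dumps(registration)
```

Because unhealthy instances drop out of query results automatically, callers get a basic circuit-breaking effect without extra client logic.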

Multi Datacenter

Consul scales to multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters, or keep the request local.

KV Storage

Flexible key/value store for dynamic configuration, feature flagging, coordination, leader election and more. Long poll for near-instant notification of configuration changes.
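The long-poll mechanism can be sketched as follows: each /v1/kv response carries an X-Consul-Index header, and passing that value back as the index query parameter blocks the next request until the key changes (or the wait time elapses). The key and sample body below are made up, but the base64-encoded Value field matches Consul's KV response format:

```python
import base64
import json
from urllib.parse import urlencode

def blocking_kv_url(key, last_index, wait="5m", base="http://127.0.0.1:8500"):
    # Long poll: blocks until the key's ModifyIndex passes last_index
    # (taken from the previous X-Consul-Index header) or `wait` elapses.
    qs = urlencode({"index": last_index, "wait": wait})
    return f"{base}/v1/kv/{key}?{qs}"

def decode_kv(response_body: str) -> str:
    # /v1/kv responses are a JSON array; values arrive base64-encoded.
    entry = json.loads(response_body)[0]
    return base64.b64decode(entry["Value"]).decode()

# Example response body shaped like Consul's KV output ("b24=" is "on"):
sample = '[{"Key": "config/feature-x", "Value": "b24=", "ModifyIndex": 42}]'
print(decode_kv(sample))
print(blocking_kv_url("config/feature-x", 42))
```

Repeating this loop, each time with the index from the latest response, gives the near-instant change notifications described above without busy polling.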

Commonly used service discovery frameworks include:

zookeeper

eureka

etcd

consul

Basic Concepts

agent

Every member of a Consul cluster runs an agent, which can be started with the consul agent command. An agent runs in either server mode or client mode; naturally, a node running in server mode is called a server node, and a node running in client mode is called a client node.

client nodes

Forward all RPCs to the server nodes. A client node is itself stateless and lightweight, so large numbers of client nodes can be deployed.

server nodes

Handle the complex work of forming the cluster (leader election, state maintenance, forwarding requests to the leader) as well as the services Consul provides (responding to RPC requests). For fault tolerance and convergence, deploying 3 to 5 server nodes is generally appropriate.

Gossip

Consul's gossip protocol is built on Serf and handles membership, failure detection, event broadcasting, and so on. Messages between nodes are exchanged over UDP. There are two gossip pools: one for the LAN and one for the WAN.

Consul Use Cases

1) Registration and configuration sharing for Docker instances

2) Registration and configuration sharing for CoreOS instances

3) Vitess clusters

4) Configuration sharing for SaaS applications

5) Integration with confd to dynamically generate nginx and haproxy configuration files

Advantages of Consul

Uses the Raft algorithm to ensure consistency, which is more straightforward than the complex Paxos algorithm. By comparison, ZooKeeper uses ZAB (a Paxos-like protocol), while etcd also uses Raft.

Supports multiple datacenters, with internal and external services listening on different ports. A multi-datacenter cluster avoids the single point of failure of one datacenter, though its deployment must account for network latency, sharding, and so on. Neither ZooKeeper nor etcd provides multi-datacenter support.

Supports health checks; etcd does not provide this feature.

Supports both HTTP and DNS interfaces. ZooKeeper requires more complex integration, while etcd supports only the HTTP protocol.

Officially provides a web management UI, which etcd lacks.
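For reference, the DNS interface mentioned above resolves names of the form [tag.]&lt;service&gt;.service[.&lt;datacenter&gt;].&lt;domain&gt;, so any DNS client can discover services without Consul-specific code. A small helper (an illustration of the naming convention, not part of any Consul client) sketches how these names are assembled:

```python
def consul_dns_name(service, tag=None, datacenter=None, domain="consul"):
    # Consul's DNS naming convention: [tag.]<service>.service[.<dc>].<domain>
    # Omitting the datacenter queries the local one; naming it ("dc1" below
    # is an example) routes the lookup to a remote datacenter.
    parts = ([tag] if tag else []) \
        + [service, "service"] \
        + ([datacenter] if datacenter else []) \
        + [domain]
    return ".".join(parts)

print(consul_dns_name("web"))                              # web.service.consul
print(consul_dns_name("web", tag="v1", datacenter="dc1"))  # v1.web.service.dc1.consul
```

In a running cluster these names would be resolved against the agent's DNS port, and only healthy instances appear in the answer, tying the DNS interface back to the health checking described earlier.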
