
Distributed Service Registration & Discovery and Unified Configuration Management with Consul

Quality content delivered every morning at 7:30


Hello everyone, I'm 阿粉. In a previous article I introduced Nacos, an open-source component for service registration/discovery and configuration management. Today I'd like to share another component with similar capabilities: Consul. Let's take a look!

Background

Distributed architectures are now the norm: most projects are built on them, and the old single-machine model no longer fits today's internet industry. As distributed projects spread and the number of service instances grows, service registration and discovery has become an indispensable part of the architecture. There are many open-source options for it, including the early ZooKeeper, Baidu's Disconf, Alibaba's Diamond, the Go-based etcd, Eureka (integrated with Spring), the previously covered Nacos, and this article's protagonist, Consul. This article does not compare them; it only introduces Consul. Detailed comparisons are easy to find online, for example "Service discovery compared: Consul vs ZooKeeper vs etcd vs Eureka". Service registration and discovery boils down to two main capabilities:

  1. Service registration and discovery
  2. A configuration center, i.e. unified configuration management for a distributed project

Consul Overview

  1. Consul is written in Go.
  2. Consul has built-in service registration and discovery, a distributed consensus protocol implementation, health checking, Key/Value storage, and multi-datacenter support; it depends on no other tools, and the install package is a single executable.
  3. It exposes both DNS and HTTP interfaces.
  4. It ships with a web UI.
  5. Consul is an open-source tool from HashiCorp for service discovery and configuration in distributed systems.
  6. Consul supports two ways to register a service: the service can call Consul's registration HTTP API itself after startup, or the service can be defined in a configuration file. The Consul documentation recommends the latter approach for service configuration and registration.

Consul Server Setup and Usage

  1. Download the release for your platform, unpack it, and copy the executable to /usr/local/consul.
  2. Create a service definition file
    silence$ sudo mkdir /etc/consul.d
    silence$ echo '{"service":{"name": "web", "tags": ["rails"], "port": 80}}' | sudo tee /etc/consul.d/web.json
               
  3. Start the agent
    silence$ /usr/local/consul/consul agent -dev -node consul_01 -config-dir=/etc/consul.d/ -ui
               
    -dev starts a local development/test instance; -node sets a custom name for this node in the cluster; -config-dir points at the directory of service definition files, i.e. the folder created above; -ui enables the built-in web UI management page.
  4. Query the cluster members
    silence-pro:~ silence$ /usr/local/consul/consul members
               
  5. Query data over the HTTP interface
    silence-pro:~ silence$ curl http://127.0.0.1:8500/v1/catalog/service/web
    [
        {
            "ID": "ab1e3577-1b24-d254-f55e-9e8437956009",
            "Node": "consul_01",
            "Address": "127.0.0.1",
            "Datacenter": "dc1",
            "TaggedAddresses": {
                "lan": "127.0.0.1",
                "wan": "127.0.0.1"
            },
            "NodeMeta": {
                "consul-network-segment": ""
            },
            "ServiceID": "web",
            "ServiceName": "web",
            "ServiceTags": [
                "rails"
            ],
            "ServiceAddress": "",
            "ServicePort": 80,
            "ServiceEnableTagOverride": false,
            "CreateIndex": 6,
            "ModifyIndex": 6
        }
    ]
    silence-pro:~ silence$
               
  6. Web UI management

Consul Web UI

Consul's web UI can be used to view service status, inspect cluster nodes, control access lists, and manage the KV store. Compared with Eureka and etcd, Consul's web UI is much more pleasant to use. (Eureka and etcd will be briefly introduced in the next article.)
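As a quick illustration of consuming the catalog response shown in step 5, the sketch below pulls the `ServicePort` field out of such a payload using only the JDK. This is my own example, not part of the article's utility class; a real client should use a proper JSON library (Jackson, Gson) or the Java client shown later rather than a regex.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CatalogPortExtractor {
    // Extract the ServicePort field from a catalog response fragment.
    static int servicePort(String json) {
        Matcher m = Pattern.compile("\"ServicePort\":\\s*(\\d+)").matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no ServicePort field");
        }
        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        // A trimmed-down fragment of the /v1/catalog/service/web response above.
        String json = "{\"ServiceName\": \"web\", \"ServiceTags\": [\"rails\"], \"ServicePort\": 80}";
        System.out.println("web service port: " + servicePort(json)); // prints 80
    }
}
```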

7. Import and export KV store data

silence-pro:consul silence$ ./consul kv import @temp.json


silence-pro:consul silence$ ./consul kv export redis/
           

The temp.json format is shown below. Typically you configure values in the management UI first, export them to a file for safekeeping, and import that file again later when needed.

[
    {
        "key": "redis/config/password",
        "flags": 0,
        "value": "MTIzNDU2"
    },
    {
        "key": "redis/config/username",
        "flags": 0,
        "value": "U2lsZW5jZQ=="
    },
    {
        "key": "redis/zk/",
        "flags": 0,
        "value": ""
    },
    {
        "key": "redis/zk/password",
        "flags": 0,
        "value": "NDU0NjU="
    },
    {
        "key": "redis/zk/username",
        "flags": 0,
        "value": "ZGZhZHNm"
    }
]
           
Consul's KV store is a tree of nodes, similar to ZooKeeper's, for holding key/value pairs. We can use it to implement the configuration center mentioned above: keep the shared configuration in the KV store so every instance fetches and uses the same settings. Moreover, when the configuration changes, each service can pick up the latest values automatically, with no restart required.
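One detail worth knowing: the `value` fields in a `consul kv export` dump (like temp.json above) are Base64-encoded. Decoding them back to plain strings needs nothing beyond the JDK:

```java
import java.util.Base64;

public class KvExportDecoder {
    // Decode one Base64-encoded value from a `consul kv export` dump.
    static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64));
    }

    public static void main(String[] args) {
        // The entries from temp.json above decode back to their original strings.
        System.out.println(decode("MTIzNDU2"));     // prints 123456
        System.out.println(decode("U2lsZW5jZQ==")); // prints Silence
    }
}
```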

Using the Consul Java Client

  1. Add the Maven dependencies; swap in other versions as needed
    <dependency>
        <groupId>com.orbitz.consul</groupId>
        <artifactId>consul-client</artifactId>
        <version>0.12.3</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
               
  2. A basic Consul utility class; extend it as needed
    package com.coocaa.consul.consul.demo;

    import com.google.common.base.Optional;
    import com.google.common.net.HostAndPort;
    import com.orbitz.consul.*;
    import com.orbitz.consul.model.health.ServiceHealth;

    import java.net.MalformedURLException;
    import java.net.URI;
    import java.util.List;

    public class ConsulUtil {

        private static Consul consul = Consul.builder().withHostAndPort(HostAndPort.fromString("127.0.0.1:8500")).build();

        /**
         * Register a service.
         */
        public static void serviceRegister() {
            AgentClient agent = consul.agentClient();
            try {
                // Note: this registration call takes a health-check URL and an
                // interval (here 3s) at which Consul will poll that URL.
                agent.register(8080, URI.create("http://localhost:8080/health").toURL(), 3, "tomcat", "tomcatID", "dev");
            } catch (MalformedURLException e) {
                e.printStackTrace();
            }
        }

        /**
         * Look up and print the healthy instances of a service.
         *
         * @param serviceName the registered service name
         */
        public static void findHealthyService(String serviceName) {
            HealthClient healthClient = consul.healthClient();
            List<ServiceHealth> serviceHealthList = healthClient.getHealthyServiceInstances(serviceName).getResponse();
            serviceHealthList.forEach(System.out::println);
        }

        /**
         * Store a key/value pair.
         */
        public static void storeKV(String key, String value) {
            KeyValueClient kvClient = consul.keyValueClient();
            kvClient.putValue(key, value);
        }

        /**
         * Fetch the value for a key.
         */
        public static String getKV(String key) {
            KeyValueClient kvClient = consul.keyValueClient();
            Optional<String> value = kvClient.getValueAsString(key);
            if (value.isPresent()) {
                return value.get();
            }
            return "";
        }

        /**
         * List the consensus peers (should be all Server nodes in the same DC).
         */
        public static List<String> findRaftPeers() {
            StatusClient statusClient = consul.statusClient();
            return statusClient.getPeers();
        }

        /**
         * Get the current Raft leader.
         */
        public static String findRaftLeader() {
            StatusClient statusClient = consul.statusClient();
            return statusClient.getLeader();
        }

        public static void main(String[] args) {
            AgentClient agentClient = consul.agentClient();
            agentClient.deregister("tomcatID");
        }
    }
               
    temp.json and ConsulUtil.java have been uploaded to the GitHub repository; reply 【源碼倉庫】 to get the address.
  3. With the utility class above you can register services and read/write KV data.

Consul Cluster Setup

Reference: Consul cluster setup, http://ju.outofmemory.cn/entry/189899

  1. Download and install Consul on three hosts. I have no physical machines here, so I use three VMs, with IPs 192.168.231.145, 192.168.231.146 and 192.168.231.147.
  2. Start 145 and 146 in Server mode and 147 in Client mode. Server and Client here refer only to roles within the Consul cluster; they have nothing to do with your own services!
  3. Start 145 in Server mode, with node name n1 and datacenter dc1 throughout
    [[email protected] consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n1 -bind=192.168.231.145 -datacenter=dc1
    bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
    bootstrap_expect > 0: expecting 2 servers
    ==> Starting Consul agent...
    ==> Consul agent running!
               Version: 'v1.0.1'
               Node ID: '6cc74ff7-7026-cbaa-5451-61f02114cd25'
             Node name: 'n1'
            Datacenter: 'dc1' (Segment: '<all>')
                Server: true (Bootstrap: false)
           Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
          Cluster Addr: 192.168.231.145 (LAN: 8301, WAN: 8302)
               Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
    
    
    ==> Log data will now stream in as it occurs:
    
    
        2017/12/06 23:26:21 [INFO] raft: Initial configuration (index=0): []
        2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1.dc1 192.168.231.145
        2017/12/06 23:26:21 [INFO] serf: EventMemberJoin: n1 192.168.231.145
        2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
        2017/12/06 23:26:21 [INFO] raft: Node at 192.168.231.145:8300 [Follower] entering Follower state (Leader: "")
        2017/12/06 23:26:21 [INFO] consul: Adding LAN server n1 (Addr: tcp/192.168.231.145:8300) (DC: dc1)
        2017/12/06 23:26:21 [INFO] consul: Handled member-join event for server "n1.dc1" in area "wan"
        2017/12/06 23:26:21 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
        2017/12/06 23:26:21 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
        2017/12/06 23:26:21 [INFO] agent: started state syncer
        2017/12/06 23:26:28 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:26:30 [WARN] raft: no known peers, aborting election
        2017/12/06 23:26:49 [ERR] agent: Coordinate update error: No cluster leader
        2017/12/06 23:26:54 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:27:24 [ERR] agent: Coordinate update error: No cluster leader
        2017/12/06 23:27:27 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:27:56 [ERR] agent: Coordinate update error: No cluster leader
        2017/12/06 23:28:02 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:28:27 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:28:33 [ERR] agent: Coordinate update error: No cluster leader
               
    Only 145 is up so far, so there is no cluster yet.
  4. Start 146 in Server mode with node name n2, enabling the web UI management page on n2
    [[email protected] consul]# ./consul agent -server -bootstrap-expect 2 -data-dir /tmp/consul -node=n2 -bind=192.168.231.146 -datacenter=dc1 -ui
    bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table
    bootstrap_expect > 0: expecting 2 servers
    ==> Starting Consul agent...
    ==> Consul agent running!
               Version: 'v1.0.1'
               Node ID: 'eb083280-c403-668f-e193-60805c7c856a'
             Node name: 'n2'
            Datacenter: 'dc1' (Segment: '<all>')
                Server: true (Bootstrap: false)
           Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
          Cluster Addr: 192.168.231.146 (LAN: 8301, WAN: 8302)
               Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
    
    
    ==> Log data will now stream in as it occurs:
    
    
        2017/12/06 23:28:30 [INFO] raft: Initial configuration (index=0): []
        2017/12/06 23:28:30 [INFO] serf: EventMemberJoin: n2.dc1 192.168.231.146
        2017/12/06 23:28:31 [INFO] serf: EventMemberJoin: n2 192.168.231.146
        2017/12/06 23:28:31 [INFO] raft: Node at 192.168.231.146:8300 [Follower] entering Follower state (Leader: "")
        2017/12/06 23:28:31 [INFO] consul: Adding LAN server n2 (Addr: tcp/192.168.231.146:8300) (DC: dc1)
        2017/12/06 23:28:31 [INFO] consul: Handled member-join event for server "n2.dc1" in area "wan"
        2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
        2017/12/06 23:28:31 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
        2017/12/06 23:28:31 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
        2017/12/06 23:28:31 [INFO] agent: started state syncer
        2017/12/06 23:28:38 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:28:39 [WARN] raft: no known peers, aborting election
        2017/12/06 23:28:57 [ERR] agent: Coordinate update error: No cluster leader
        2017/12/06 23:29:11 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:29:30 [ERR] agent: Coordinate update error: No cluster leader
        2017/12/06 23:29:38 [ERR] agent: failed to sync remote state: No cluster leader
        2017/12/06 23:29:57 [ERR] agent: Coordinate update error: No cluster leader
               
    Again, no cluster is discovered. At this point n1 and n2 are both running, but neither knows the other exists!
  5. Join n1 to n2
    [[email protected] consul]$ ./consul join 192.168.231.146
               
    Both n1 and n2 now print log entries showing the cluster has been discovered.
  6. At this point n1 and n2 are Server-mode nodes in the same cluster.
  7. Start 147 in Client mode
    [[email protected] consul]# ./consul agent -data-dir /tmp/consul -node=n3 -bind=192.168.231.147 -datacenter=dc1
    ==> Starting Consul agent...
    ==> Consul agent running!
               Version: 'v1.0.1'
               Node ID: 'be7132c3-643e-e5a2-9c34-cad99063a30e'
             Node name: 'n3'
            Datacenter: 'dc1' (Segment: '')
                Server: false (Bootstrap: false)
           Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
          Cluster Addr: 192.168.231.147 (LAN: 8301, WAN: 8302)
               Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
    
    
    ==> Log data will now stream in as it occurs:
    
    
        2017/12/06 23:36:46 [INFO] serf: EventMemberJoin: n3 192.168.231.147
        2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
        2017/12/06 23:36:46 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
        2017/12/06 23:36:46 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
        2017/12/06 23:36:46 [INFO] agent: started state syncer
        2017/12/06 23:36:46 [WARN] manager: No servers available
        2017/12/06 23:36:46 [ERR] agent: failed to sync remote state: No known Consul servers
        2017/12/06 23:37:08 [WARN] manager: No servers available
        2017/12/06 23:37:08 [ERR] agent: failed to sync remote state: No known Consul servers
        2017/12/06 23:37:36 [WARN] manager: No servers available
        2017/12/06 23:37:36 [ERR] agent: failed to sync remote state: No known Consul servers
        2017/12/06 23:38:02 [WARN] manager: No servers available
        2017/12/06 23:38:02 [ERR] agent: failed to sync remote state: No known Consul servers
        2017/12/06 23:38:22 [WARN] manager: No servers available
        2017/12/06 23:38:22 [ERR] agent: failed to sync remote state: No known Consul servers
               
  8. On n3, join node n3 to the cluster
    [[email protected] consul]$ ./consul join 192.168.231.145
               
  9. Check the cluster member list again.
  10. The three-node Consul cluster is now up! n1 and n2 run in Server mode, n3 in Client mode.
  11. The main difference between Consul's Server and Client modes is this: a cluster uses the startup parameter

    -bootstrap-expect

    to control the number of Server nodes. Server-mode nodes maintain the cluster state, and if a Server node leaves the cluster, a new Leader election is triggered among the remaining Server-mode nodes. Client-mode nodes, by contrast, can join and leave freely.
  12. Open the web UI served by n2
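The "no failure tolerance" warning in the startup logs above follows directly from Raft quorum arithmetic. As a sketch of my own (not Consul code): a cluster of n servers needs floor(n/2) + 1 votes to elect a leader, so the number of failures it can survive is n minus that quorum:

```java
public class RaftQuorum {
    // Quorum size for a Raft cluster of n servers: floor(n/2) + 1.
    static int quorum(int servers) {
        return servers / 2 + 1;
    }

    // How many server failures the cluster survives while keeping a quorum.
    static int failureTolerance(int servers) {
        return servers - quorum(servers);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 5; n++) {
            System.out.println(n + " servers -> quorum " + quorum(n)
                    + ", failure tolerance " + failureTolerance(n));
        }
        // With -bootstrap-expect 2, tolerance is 0 -- exactly the warning in
        // the logs. This is why production clusters usually run 3 or 5 servers.
    }
}
```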


Finally, thanks for reading. My knowledge is limited and slips are inevitable; if you spot a mistake, please point it out and I will correct it.


< END >
