
Elasticsearch authentication (Auth + Transport SSL)


In recent Elastic releases, the Basic (free) license already includes the core security features and is suitable for production, so an Nginx + Basic Auth proxy is no longer needed.

Security is disabled by default in Elasticsearch. In this article we use the Basic distribution, which automatically registers a Basic license, and then enable Auth authentication and SSL-encrypted inter-node (transport) communication in turn.

Download:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.4.2-linux-x86_64.tar.gz
$ tar xf elasticsearch-7.4.2-linux-x86_64.tar.gz      

Run a single-node test instance:

$  cd elasticsearch-7.4.2
$ ./bin/elasticsearch      

At this point Elasticsearch starts in development mode: prerequisite checks that fail only produce warnings and do not prevent startup, and the node listens only on `127.0.0.1:9200`, which is suitable for testing only. As soon as you set the `network.host` parameter in `elasticsearch.yml`, Elasticsearch starts in production mode.

Here we use production mode, which means every bootstrap check must pass, or the node will refuse to start.

Directory layout:

| Type | Description | Default Location | Setting |
|------|-------------|------------------|---------|
| home | Elasticsearch home directory or `$ES_HOME` | Directory created by unpacking the archive | `ES_HOME` |
| bin | Binary scripts including `elasticsearch` to start a node and `elasticsearch-plugin` to install plugins | `$ES_HOME/bin` | |
| conf | Configuration files including `elasticsearch.yml` | `$ES_HOME/config` | `ES_PATH_CONF` |
| data | The location of the data files of each index / shard allocated on the node. Can hold multiple locations. | `$ES_HOME/data` | `path.data` |
| logs | Log files location. | `$ES_HOME/logs` | `path.logs` |
| plugins | Plugin files location. Each plugin will be contained in a subdirectory. | `$ES_HOME/plugins` | |
| repo | Shared file system repository locations. Can hold multiple locations. A file system repository can be placed into any subdirectory of any directory specified here. | Not configured | `path.repo` |
| script | Location of script files. | `$ES_HOME/scripts` | `path.scripts` |

System settings:

ulimits

Edit `/etc/security/limits.conf`. I run ES as the default `ec2-user` account, so the entries below use `ec2-user`; substitute your own user, or use `*` for all users:

# - nofile - max number of open file descriptors
# - memlock - max locked-in-memory address space (KB)
# - nproc - max number of processes
$ vim /etc/security/limits.conf
ec2-user  -  nofile  65535
ec2-user  -  memlock  unlimited
ec2-user  -  nproc  4096

# then log out and back in for the limits to take effect

  

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63465
max locked memory       (kbytes, -l) unlimited ## now in effect
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535 ## now in effect
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096 ## now in effect
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited      

Disable the swap partition

Run the following to disable swap immediately:

$ sudo swapoff -a      

This only disables swap temporarily; it will come back after a reboot. To make it permanent, edit the following file and remove the swap mount entry:

$ sudo vim /etc/fstab      

  

Configure swappiness and virtual memory

This reduces the kernel's tendency to swap. It should not cause swapping under normal conditions, while still allowing the system as a whole to swap in emergency situations.

# add the following two lines
$ sudo vim /etc/sysctl.conf
vm.swappiness=1
vm.max_map_count=262144

# apply the changes
$ sudo sysctl -p

Enable memory locking in Elasticsearch:

Add the following line to `config/elasticsearch.yml`:

bootstrap.memory_lock: true      

Elasticsearch basic concepts

Cluster

An Elasticsearch cluster, made up of one or more Elasticsearch nodes.

Node

An Elasticsearch node can be thought of as an Elasticsearch service process; starting two Elasticsearch instances (processes) on the same machine gives you two nodes.

Index

An index is a collection of documents with the same structure, analogous to a database instance in a relational database (since the type concept was deprecated in 6.0.0, an index is closer to a single database table). A cluster can contain multiple indices.

Type

A type is a logical subdivision within an index; it is deprecated in recent versions of Elasticsearch.

Document

A document is the smallest unit of data storage in Elasticsearch, in JSON format; many documents with the same structure make up an index. A document is similar to a row in a relational database table.

Shard

A single index is split into multiple shards, stored across multiple nodes. Shards enable horizontal scaling to store more data, and because they are distributed across nodes they also improve overall cluster throughput and performance. The number of shards is specified when the index is created and cannot be changed afterwards.

Replica

An index replica is a full copy of a shard; a shard can have one or more replicas, which provide redundancy.

Replicas serve three purposes:

  • when a shard fails or its node goes down, one of its replicas can be promoted to primary
  • replicas protect against data loss and provide high availability
  • replicas can serve search requests, improving cluster throughput and performance

The full names are primary shard and replica shard. The primary shard count is fixed at index creation and cannot be changed later; the replica shard count can be changed at any time. Before 7.0 the defaults per index were 5 primary shards and 1 replica, meaning 5 primaries plus 5 replicas, 10 shards in total; since 7.0 the default is 1 primary and 1 replica. Either way, the smallest highly available Elasticsearch deployment is two servers.
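The shard arithmetic above can be sketched as a quick calculation (a minimal Python illustration, not an Elasticsearch API):

```python
def total_shards(primaries: int, replicas: int) -> int:
    """Total shard copies for one index: each primary carries `replicas` extra copies."""
    return primaries * (1 + replicas)

# Pre-7.0 defaults: 5 primaries with 1 replica each -> 10 shard copies.
print(total_shards(5, 1))  # prints 10
```

With the 7.x defaults (1 primary, 1 replica) the same index holds only 2 shard copies.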

Elasticsearch node types:

See the official documentation for reference.

An ES cluster has the following node types:

  • Master-eligible: a node with `node.master: true`, making it eligible to be elected as the master node that controls the cluster. The master node is responsible for lightweight cluster-wide actions such as creating or deleting indices, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes.
  • data: a node with `node.data: true`. Data nodes hold data and perform data-related operations such as CRUD (create, read, update, delete), search, and aggregations.
  • ingest: a node with `node.ingest: true`, able to apply an ingest pipeline to documents in order to transform and enrich them before indexing.
  • machine-learning: a node with `xpack.ml.enabled` and `node.ml` set to `true`. Only available in the x-pack distribution; the OSS distribution cannot use these settings, and will fail to start if they are set.
  • coordinating node: requests such as search requests or bulk-indexing requests may involve data held on different data nodes. A search request, for example, is executed in two phases, coordinated by the node which receives the client request (the coordinating node).

    In the scatter phase, the coordinating node forwards the request to the data nodes which hold the data. Each data node executes the request locally and returns its results to the coordinating node. In the gather phase, the coordinating node reduces each data node's results into a single global result set.

    Every node is implicitly a coordinating node. This means that a node with `node.master`, `node.data`, and `node.ingest` all set to `false` acts only as a coordinating node, and this role cannot be disabled. As a result, such a node needs enough memory and CPU to handle the gather phase.


Default values:

  • `node.master: true`
  • `node.voting_only: false`
  • `node.data: true`
  • `node.ml: true`
  • `xpack.ml.enabled: true`
  • `cluster.remote.connect: false`

Master-eligible node

The master node is responsible for lightweight cluster-wide actions such as creating or deleting indices, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. A stable master node is essential to cluster health.

Any master-eligible node that is not a voting-only node may be elected master through the master election process.

Indexing and searching data is CPU-, memory-, and I/O-intensive work that can stress a node's resources. To keep your master nodes stable and unstressed, it is best in larger clusters to separate dedicated master-eligible nodes from dedicated data nodes.

Although a master node can also act as a coordinating node, routing search and indexing requests from clients to data nodes, it is better not to use dedicated master nodes for this purpose. Master-eligible nodes should do as little work as possible; this matters for cluster stability.

To configure a dedicated master-eligible node:

node.master: true 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: false 
xpack.ml.enabled: true 
cluster.remote.connect: false      

For the OSS distribution:

node.master: true 
node.data: false 
node.ingest: false 
cluster.remote.connect: false      

Voting-only node

A voting-only node participates in master elections but cannot itself become master; it acts as a tiebreaker in elections.

To configure a voting-only node:

node.master: true 
node.voting_only: true 
node.data: false 
node.ingest: false 
node.ml: false 
xpack.ml.enabled: true 
cluster.remote.connect: false      

Notes:

  • The OSS distribution does not support this setting; if set, the node will not start.
  • Only master-eligible nodes can be marked voting-only.

A high-availability (HA) cluster requires at least three master-eligible nodes, at least two of which are not voting-only; the third may be a voting-only node. That way the cluster can still elect a master even if one of the nodes fails.
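The three-node rule follows from simple majority arithmetic; a small sketch (`can_elect_master` is a hypothetical helper, not part of Elasticsearch):

```python
def can_elect_master(master_eligible: int, failed: int) -> bool:
    """An election needs a strict majority of the master-eligible voting nodes alive."""
    alive = master_eligible - failed
    return alive > master_eligible // 2

# With 3 master-eligible nodes the cluster survives 1 failure but not 2;
# with only 2, a single failure already blocks elections.
assert can_elect_master(3, 1) and not can_elect_master(3, 2)
assert not can_elect_master(2, 1)
```

This is why two master-eligible nodes give no more election resilience than one.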

Data node

Data nodes hold the shards containing the documents you have indexed. They handle data-related operations such as CRUD, search, and aggregations, which are I/O-, memory-, and CPU-intensive. It is important to monitor these resources and add more data nodes when they are overloaded.

The main benefit of dedicated data nodes is the separation of the master and data roles.

To create a dedicated data node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: true 
node.ingest: false 
node.ml: false 
cluster.remote.connect: false      

Ingest node

Ingest nodes can execute pre-processing pipelines composed of one or more ingest processors. Depending on the type of operations the processors perform and the resources they require, it may make sense to have dedicated ingest nodes that only run this specific task.

To create a dedicated ingest node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: true 
node.ml: false 
cluster.remote.connect: false      

On the OSS distribution, set:

node.master: false 
node.data: false 
node.ingest: true 
cluster.remote.connect: false      

Coordinating-only node

If you take away the ability to handle master duties, to hold data, and to pre-process documents, you are left with a coordinating node that can only route requests, handle the search reduce phase, and distribute bulk indexing. Essentially, coordinating-only nodes behave as smart load balancers.

Coordinating-only nodes can benefit large clusters by offloading the coordinating role from data and master-eligible nodes. They join the cluster like any other node and receive the full cluster state, which they use to route requests directly to the appropriate place.

Adding too many coordinating-only nodes increases the burden on the whole cluster, because the elected master must wait for cluster-state update acknowledgements from every node! The benefit of coordinating-only nodes should not be overstated; data nodes can happily serve the same purpose.

To configure a coordinating-only node:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: false 
cluster.remote.connect: false      

On the OSS distribution, set:

node.master: false 
node.data: false 
node.ingest: false 
cluster.remote.connect: false      

Machine learning node

The machine learning features provide machine learning nodes, which run jobs and handle machine learning API requests. If `xpack.ml.enabled` is set to true and `node.ml` is set to `false`, the node can handle API requests but cannot run jobs.

If you want to use machine learning features in your cluster, you must enable machine learning (set `xpack.ml.enabled` to `true`) on all master-eligible nodes. Do not use these settings if you only have the OSS distribution.

For more information, see the machine learning settings documentation.

To create a dedicated machine learning node in the default distribution, set:

node.master: false 
node.voting_only: false 
node.data: false 
node.ingest: false 
node.ml: true 
xpack.ml.enabled: true 
cluster.remote.connect: false      

Configuring Elasticsearch

Make three copies of the ES directory:

$ ls
elasticsearch-7.4.2
$ mv elasticsearch-7.4.2{,-01}
$ ls
elasticsearch-7.4.2-01
$ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-02
$ cp -a elasticsearch-7.4.2-01 elasticsearch-7.4.2-03
$ ln -s elasticsearch-7.4.2-01 es01
$ ln -s elasticsearch-7.4.2-02 es02
$ ln -s elasticsearch-7.4.2-03 es03
$ ll
total 0
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03      

Configure name resolution for Elasticsearch

Here I simply use the `/etc/hosts` file:

cat >> /etc/hosts <<EOF
172.17.0.87 es01 es02 es03
EOF      

Edit the ES configuration file `config/elasticsearch.yml`

The default configuration file lives at `$ES_HOME/config/elasticsearch.yml`. It is in YAML format, and settings can be written in three ways:

path:
    data: /var/lib/elasticsearch
    logs: /var/log/elasticsearch

Or in single-line form:

path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch

Or via environment variables, which is useful in Docker and Kubernetes environments:

node.name:    ${HOSTNAME}
network.host: ${ES_NETWORK_HOST}

Elasticsearch configuration in detail

Configure the ES paths: `path.data` & `path.logs`

If not configured, these default to the `data` and `logs` subdirectories of `$ES_HOME`.

path:
  logs: /var/log/elasticsearch
  data: /var/data/elasticsearch      

`path.data` can be set to multiple directories:

path:
  logs: /data/ES01/logs
  data:
    - /data/ES01-A
    - /data/ES01-B
    - /data/ES01-C      
Configure the cluster name: `cluster.name`

A node can join only one cluster; nodes configured with the same `cluster.name` form an ES cluster. Make sure different clusters use different values of `cluster.name`:

cluster.name: logging-prod      
Configure the node name: `node.name`

`node.name` is a human-readable name used to tell nodes apart; if not configured, it defaults to the hostname.

node.name: prod-data-002      
Configure the node's listen address: `network.host`

If not configured, the node listens on `127.0.0.1` and `[::1]` and starts in development mode.

# listen on a specific IP
network.host: 192.168.1.10

# listen on all IPs
network.host: 0.0.0.0

Available special values for `network.host`:

  • `_[networkInterface]_`: addresses of a network interface, for example `_eth0_`
  • `_local_`: any loopback addresses on the system, for example `127.0.0.1`
  • `_site_`: any site-local (private network) addresses on the system, for example `192.168.0.1`
  • `_global_`: any globally-scoped (public) addresses on the system, for example `8.8.8.8`
Configure node discovery and cluster formation

There are two main settings here, covering discovery and cluster formation, which let nodes discover one another and elect a master, thereby forming an ES cluster.

`discovery.seed_hosts`

If not configured, ES listens on the loopback address at startup and scans local ports `9300-9305` to discover other nodes started on the same machine.

So with no configuration at all, copying the $ES_HOME directory three times and starting all three copies will still form an ES cluster, which is handy for testing. If you need to start ES nodes on multiple machines and have them form a cluster, this setting is required so the nodes can discover one another.

`discovery.seed_hosts` is a comma-separated list whose elements can be:

  • host:port, with a custom transport port for inter-node communication
  • host, using the default transport port range 9300-9400 (see the reference)
  • a domain name that resolves to multiple IPs; each resolved IP will be tried
  • any other custom resolvable name
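A sketch of how a seed-host entry might be split, assuming the default transport range listed above (ignoring IPv6 bracket syntax; `parse_seed_host` is a hypothetical helper, not Elasticsearch code):

```python
DEFAULT_TRANSPORT_PORTS = "9300-9400"  # default transport port range, per the list above

def parse_seed_host(entry: str):
    """Split a discovery.seed_hosts element into (host, port or default range)."""
    host, sep, port = entry.rpartition(":")
    if sep and port.isdigit():
        return host, port          # explicit host:port entry
    return entry, DEFAULT_TRANSPORT_PORTS  # bare host: fall back to the default range

print(parse_seed_host("es02:9332"))  # ('es02', '9332')
print(parse_seed_host("es03"))       # ('es03', '9300-9400')
```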

`cluster.initial_master_nodes`

In development mode this is configured automatically among the nodes discovered on one host, but in production mode it must be set.

This setting is used only when a brand-new cluster starts for the first time, to list the master-eligible nodes (`node.master: true`) allowed to participate in the first election. It has no effect on cluster restarts or when adding new nodes, because by then every node already holds the cluster state.

`cluster.initial_master_nodes` is also a comma-separated list whose elements can be (see the reference):

  • the configured node.name
  • the full hostname, if node.name is not configured
  • an FQDN
  • host: if node.name is not configured, the publish address from `network.host`
  • host:port: if node.name is not configured; the port here is the transport port
HTTP and transport settings

There are two modules here: `http` and `transport` (see the http and transport configuration references).

http exposes the Elasticsearch API so clients can talk to ES; transport is used for inter-node communication within the cluster.

http settings reference:

  • `http.port` (HTTP port): A bind port range. Defaults to `9200-9300`.
  • `http.publish_port`: The port that HTTP clients should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the `http.port` is not directly addressable from the outside. Defaults to the actual port assigned via `http.port`.
  • `http.bind_host` (HTTP listen address): The host address to bind the HTTP service to. Defaults to `http.host` (if set) or `network.bind_host`.
  • `http.publish_host`: The host address to publish for HTTP clients to connect to. Defaults to `http.host` (if set) or `network.publish_host`.
  • `http.host`: Used to set the `http.bind_host` and the `http.publish_host`.
  • `http.max_content_length`: The max content of an HTTP request. Defaults to `100mb`.
  • `http.max_initial_line_length`: The max length of an HTTP URL. Defaults to `4kb`.
  • `http.max_header_size`: The max size of allowed headers. Defaults to `8kB`.
  • `http.compression`: Support for compression when possible (with Accept-Encoding). Defaults to `true`.
  • `http.compression_level`: Defines the compression level to use for HTTP responses. Valid values are in the range of 1 (minimum compression) and 9 (maximum compression). Defaults to `3`.
  • `http.cors.enabled` (CORS): Enable or disable cross-origin resource sharing, i.e. whether a browser on another origin can execute requests against Elasticsearch. Set to `true` to enable Elasticsearch to process pre-flight CORS requests. Elasticsearch will respond to those requests with the `Access-Control-Allow-Origin` header if the `Origin` sent in the request is permitted by the `http.cors.allow-origin` list. Set to `false` (the default) to make Elasticsearch ignore the `Origin` request header, effectively disabling CORS requests because Elasticsearch will never respond with the `Access-Control-Allow-Origin` response header. Note that if the client does not send a pre-flight request with an `Origin` header or does not check the response headers to validate the `Access-Control-Allow-Origin` response header, then cross-origin security is compromised. If CORS is not enabled on Elasticsearch, the only way for the client to know is to send a pre-flight request and realize the required response headers are missing.
  • `http.cors.allow-origin`: Which origins to allow. Defaults to no origins allowed. If you prepend and append a `/` to the value, it is treated as a regular expression, allowing you to support HTTP and HTTPS; for example `/https?:\/\/localhost(:[0-9]+)?/` would return the request header appropriately in both cases. `*` is a valid value but is considered a security risk, as your Elasticsearch instance is open to cross-origin requests from anywhere.
  • `http.cors.max-age`: Browsers send a "preflight" OPTIONS-request to determine CORS settings. `max-age` defines how long the result should be cached for. Defaults to `1728000` (20 days).
  • `http.cors.allow-methods`: Which methods to allow. Defaults to `OPTIONS, HEAD, GET, POST, PUT, DELETE`.
  • `http.cors.allow-headers`: Which headers to allow. Defaults to `X-Requested-With, Content-Type, Content-Length`.
  • `http.cors.allow-credentials`: Whether the `Access-Control-Allow-Credentials` header should be returned. Note: this header is only returned when the setting is `true`. Defaults to `false`.
  • `http.detailed_errors.enabled`: Enables or disables the output of detailed error messages and stack traces in response output. Note: when set to `false` and the `error_trace` request parameter is specified, an error will be returned; when `error_trace` is not specified, a simple message will be returned. Defaults to `true`.
  • `http.pipelining.max_events`: The maximum number of events to be queued up in memory before an HTTP connection is closed. Defaults to `10000`.
  • `http.max_warning_header_count`: The maximum number of warning headers in client HTTP responses. Defaults to unbounded.
  • `http.max_warning_header_size`: The maximum total size of warning headers in client HTTP responses. Defaults to unbounded.

transport settings reference:

  • `transport.port` (transport port): A bind port range. Defaults to `9300-9400`.
  • `transport.publish_port`: The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the `transport.port` is not directly addressable from the outside. Defaults to the actual port assigned via `transport.port`.
  • `transport.bind_host` (transport listen address): The host address to bind the transport service to. Defaults to `transport.host` (if set) or `network.bind_host`.
  • `transport.publish_host`: The host address to publish for nodes in the cluster to connect to. Defaults to `transport.host` (if set) or `network.publish_host`.
  • `transport.host`: Used to set the `transport.bind_host` and the `transport.publish_host`.
  • `transport.connect_timeout`: The connect timeout for initiating a new connection (in time setting format). Defaults to `30s`.
  • `transport.compress`: Set to `true` to enable compression (`DEFLATE`) between all nodes. Defaults to `false`.
  • `transport.ping_schedule`: Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to `5s` in the transport client and `-1` (disabled) elsewhere. It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.
Configure the JVM

The default JVM configuration file is `$ES_HOME/config/jvm.options`:

# set both the minimum and maximum heap size to 1 GB
$ vim jvm.options
-Xms1g
-Xmx1g      

Note:

In production, size the heap according to your actual workload; different node roles need different resource allocations.

It is recommended not to exceed 32 GB; with plenty of RAM, 26-30 GB is a good range (see the reference).

The JVM options can also be set via an environment variable:

$ export ES_JAVA_OPTS="-Xms1g -Xmx1g $ES_JAVA_OPTS"      
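The heap advice above can be expressed as a rough rule of thumb (a hedged sketch; the exact compressed-oops cutoff varies by JVM, so 31 GB is used here as a safe cap):

```python
def suggested_heap_gb(ram_gb: int) -> int:
    """Rule of thumb: half of RAM, capped below the ~32 GB compressed-oops threshold."""
    return min(ram_gb // 2, 31)

print(suggested_heap_gb(8))    # 4
print(suggested_heap_gb(128))  # 31
```

Always verify the resulting -Xms/-Xmx against your node's actual role and workload.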

Notes:

  • `node.attr.xxx: yyy` sets arbitrary node attributes, such as rack or availability zone; features like hot/warm data tiering are also built on these attributes.
  • Because my environment uses a single host, the nodes are separated by port: each one gets its own `http.port` and `transport.tcp.port`.
  • Discovery here uses custom resolvable names defined in `/etc/hosts`, which makes later IP changes easy.
  • All three nodes can stand for election at first startup; in production, choose your master-eligible nodes (`node.master: true`) deliberately.

es01

$ cat es01/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es01
node.attr.rack: r1
node.attr.zone: A
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.tcp.port: 9331
discovery.seed_hosts: ["es02:9332", "es03:9333"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]      

es02

$ cat es02/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es02
node.attr.rack: r1
node.attr.zone: B
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9201
transport.tcp.port: 9332
discovery.seed_hosts: ["es01:9331", "es03:9333"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]      

es03

$ cat es03/config/elasticsearch.yml |grep -Ev "^$|^#"
cluster.name: es-cluster01
node.name: es03
node.attr.rack: r1
node.attr.zone: C
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9202
transport.tcp.port: 9333
discovery.seed_hosts: ["es02:9332", "es01:9331"]
cluster.initial_master_nodes: ["es01", "es02", "es03"]      

Starting Elasticsearch

First, look at the Elasticsearch command help:

$ ./es01/bin/elasticsearch --help
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
starts elasticsearch

Option                Description                                               
------                -----------                                               
-E <KeyValuePair>     Configure a setting                                       
-V, --version         Prints elasticsearch version information and exits        
-d, --daemonize       Starts Elasticsearch in the background     # run in background
-h, --help            show help                                                 
-p, --pidfile <Path>  Creates a pid file in the specified path on start     # pid file path
-q, --quiet           Turns off standard output/error streams logging in console  # quiet mode
-s, --silent          show minimal output                                       
-v, --verbose         show verbose output      

Start the three ES nodes:

$ ll
total 0
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-01
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-02
drwxr-xr-x 10 ec2-user ec2-user 166 Nov 26 14:24 elasticsearch-7.4.2-03
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es01 -> elasticsearch-7.4.2-01
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es02 -> elasticsearch-7.4.2-02
lrwxrwxrwx  1 ec2-user ec2-user  22 Nov 26 15:00 es03 -> elasticsearch-7.4.2-03

$ ./es01/bin/elasticsearch &
$ ./es02/bin/elasticsearch &
$ ./es03/bin/elasticsearch &      

Logs can be viewed at `$ES_HOME/logs/<CLUSTER_NAME>.log`.

To test, list the nodes in the cluster:

$ curl localhost:9200/_cat/nodes?v
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.17.0.87           32          92  15    0.01    0.04     0.17 dilm      -      es03
172.17.0.87           17          92  15    0.01    0.04     0.17 dilm      *      es02
172.17.0.87           20          92  15    0.01    0.04     0.17      
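The `node.role` column packs one letter per role; in this 7.x output, `dilm` stands for data, ingest, machine learning, and master-eligible. A small decoder (mapping assumed from the roles discussed earlier, not an exhaustive list):

```python
ROLE_LETTERS = {  # letters as printed by 7.x /_cat/nodes
    "d": "data",
    "i": "ingest",
    "l": "machine learning",
    "m": "master-eligible",
}

def decode_roles(role_field: str):
    """Expand a /_cat/nodes node.role string such as 'dilm' into role names."""
    return [ROLE_LETTERS.get(letter, letter) for letter in role_field]

print(decode_roles("dilm"))
```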

Check the cluster health:

There are three states:

  • green: all primary and replica shards are allocated.
  • yellow: all primary shards are allocated, but some replicas are not; no data is lost, and the cluster can recover to green.
  • red: at least one primary shard is unallocated, so some data is unavailable.
$ curl localhost:9200
{
  "name" : "es01", # current node name
  "cluster_name" : "es-cluster01", # cluster name
  "cluster_uuid" : "n7DDNexcTDik5mU9Y_qrcA",
  "version" : { # version info
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

$ curl localhost:9200/_cat/health
1574835925 06:25:25 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%

$ curl localhost:9200/_cat/health?v
epoch      timestamp cluster      status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1574835928 06:25:28  es-cluster01 green           3         3      0   0    0    0        0             0                  -                100.0%      
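For scripting, the columns of `/_cat/health?v` can be zipped against the header; a minimal sketch using the sample line above:

```python
HEALTH_FIELDS = [  # column order taken from the ?v header shown above
    "epoch", "timestamp", "cluster", "status", "node.total", "node.data",
    "shards", "pri", "relo", "init", "unassign", "pending_tasks",
    "max_task_wait_time", "active_shards_percent",
]

def parse_cat_health(line: str) -> dict:
    """Map one /_cat/health output line onto its column names."""
    return dict(zip(HEALTH_FIELDS, line.split()))

health = parse_cat_health(
    "1574835925 06:25:25 es-cluster01 green 3 3 0 0 0 0 0 0 - 100.0%"
)
print(health["status"])  # green
```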

List all `/_cat` endpoints:

$ curl localhost:9200/_cat
=^.^=
/_cat/allocation
/_cat/shards
/_cat/shards/{index}
/_cat/master
/_cat/nodes
/_cat/tasks
/_cat/indices
/_cat/indices/{index}
/_cat/segments
/_cat/segments/{index}
/_cat/count
/_cat/count/{index}
/_cat/recovery
/_cat/recovery/{index}
/_cat/health
/_cat/pending_tasks
/_cat/aliases
/_cat/aliases/{alias}
/_cat/thread_pool
/_cat/thread_pool/{thread_pools}
/_cat/plugins
/_cat/fielddata
/_cat/fielddata/{fields}
/_cat/nodeattrs
/_cat/repositories
/_cat/snapshots/{repository}
/_cat/templates      

Check the custom attributes we defined earlier for each node:

$ curl localhost:9200/_cat/nodeattrs
es03 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es03 172.17.0.87 172.17.0.87 rack              r1 # custom
es03 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es03 172.17.0.87 172.17.0.87 xpack.installed   true
es03 172.17.0.87 172.17.0.87 zone              C # custom
es02 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es02 172.17.0.87 172.17.0.87 rack              r1 # custom
es02 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es02 172.17.0.87 172.17.0.87 xpack.installed   true
es02 172.17.0.87 172.17.0.87 zone              B # custom
es01 172.17.0.87 172.17.0.87 ml.machine_memory 16673112064
es01 172.17.0.87 172.17.0.87 rack              r1 # custom
es01 172.17.0.87 172.17.0.87 ml.max_open_jobs  20
es01 172.17.0.87 172.17.0.87 xpack.installed   true
es01 172.17.0.87 172.17.0.87      

Notice that all of these API endpoints are reachable without any authentication, and any node can join the cluster; both are very unsafe in production. The sections below show how to enable auth and inter-node SSL.

Enabling cluster Auth and inter-node SSL

Enable Auth on the ES cluster

Recent ES releases open-sourced the X-Pack components, but open source != free. Some basic security features are free, however, including the Auth and inter-node SSL used in this article.

First, try to generate the passwords with `$ES_HOME/bin/elasticsearch-setup-passwords`; view its help:

$ ./es01/bin/elasticsearch-setup-passwords --help
Sets the passwords for reserved users

Commands
--------
auto - Uses randomly generated passwords
interactive - Uses passwords entered by a user

Non-option arguments:
command              

Option         Description        
------         -----------        
-h, --help     show help          
-s, --silent   show minimal output
-v, --verbose  show verbose output

# try to auto-generate passwords; it fails
$ ./es01/bin/elasticsearch-setup-passwords auto

Unexpected response code [500] from calling GET http://172.17.0.87:9200/_security/_authenticate?pretty
It doesn't look like the X-Pack security feature is enabled on this Elasticsearch node.
Please check if you have enabled X-Pack security in your elasticsearch.yml configuration file.

ERROR: X-Pack Security is      

Looking at the es01 log, there is an error:

[2019-11-27T14:35:13,391][WARN ][r.suppressed             ] [es01] path: /_security/_authenticate, params: {pretty=}
org.elasticsearch.ElasticsearchException: Security must be explicitly enabled when using a [basic] license. Enable security by setting [xpack.security.enabled] to [true] in the elasticsearch.yml file and restart the node.
......      

It says security must be explicitly enabled first. Following the hint, add the following line on each of the three ES nodes:

$ echo "xpack.security.enabled: true" >> es01/config/elasticsearch.yml
$ echo "xpack.security.enabled: true" >> es02/config/elasticsearch.yml
$ echo "xpack.security.enabled: true"      

Then restart:

$ ps -ef|grep elasticsearch
# get the pids of the es nodes and kill each one; do not use -9

Startup now fails with this error:

ERROR: [1] bootstrap checks failed
[1]: Transport SSL must be enabled if security is enabled on a [basic] license. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]      

Fine, add that setting as well:

$ echo "xpack.security.transport.ssl.enabled: true" >> es01/config/elasticsearch.yml
$ echo "xpack.security.transport.ssl.enabled: true" >> es02/config/elasticsearch.yml
$ echo "xpack.security.transport.ssl.enabled: true"      

Start again. This time, as soon as the second node comes up, both nodes keep logging errors like:

[2019-11-27T14:50:58,643][WARN ][o.e.t.TcpTransport       ] [es01] exception caught on transport layer [Netty4TcpChannel{localAddress=/172.17.0.87:9331, remoteAddress=/172.17.0.87:56654}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:475) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:283) ~[netty-codec-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1421) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:697) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:597) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:551) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:511) [netty-transport-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918) [netty-common-4.1.38.Final.jar:4.1.38.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.38.Final.jar:4.1.38.Final]
    at java.lang.Thread.run(Thread.java:830) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme
    at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
......      

No authentication scheme has been configured yet. Let's continue with the configuration:

Configure inter-node SSL

Note: this configures SSL for the cluster's inter-node transport only; the nodes' HTTP API is not covered, so clients accessing ES over the API do not need to present a certificate.

See the official documentation:

​​https://www.elastic.co/guide/en/elasticsearch/reference/current/ssl-tls.html​​

​​https://www.elastic.co/guide/en/elasticsearch/reference/7.4/configuring-tls.html​​

Create the SSL/TLS certificates with `$ES_HOME/bin/elasticsearch-certutil`:

# view the command help
$ ./es01/bin/elasticsearch-certutil --help
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.bouncycastle.jcajce.provider.drbg.DRBG (file:/opt/elk74/elasticsearch-7.4.2-01/lib/tools/security-cli/bcprov-jdk15on-1.61.jar) to constructor sun.security.provider.Sun()
WARNING: Please consider reporting this to the maintainers of org.bouncycastle.jcajce.provider.drbg.DRBG
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Simplifies certificate creation for use with the Elastic Stack

Commands
--------
csr - generate certificate signing requests
cert - generate X.509 certificates and keys
ca - generate a new local certificate authority

Non-option arguments:
command              

Option         Description        
------         -----------        
-h, --help     show help          
-s, --silent   show minimal output
-v, --verbose  show verbose output      

Create the CA certificate:

# command help:
$ ./bin/elasticsearch-certutil ca --help
generate a new local certificate authority

Option               Description                                             
------               -----------                                             
-E <KeyValuePair>    Configure a setting                                     
--ca-dn              distinguished name to use for the generated ca. defaults
                       to CN=Elastic Certificate Tool Autogenerated CA       
--days <Integer>     number of days that the generated certificates are valid
-h, --help           show help                                               
--keysize <Integer>  size in bits of RSA keys                                
--out                path to the output file that should be produced         
--pass               password for generated private keys                     
--pem                output certificates and keys in PEM format instead of PKCS#12                                               ## PKCS#12 is the default; --pem produces separate PEM key, crt, and ca files
-s, --silent         show minimal output                                     
-v, --verbose        show verbose output

# create the CA certificate
$ ./es01/bin/elasticsearch-certutil ca -v
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.

Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority

By default the 'ca' mode produces a single PKCS#12 output file which holds:
    * The CA certificate
    * The CA's private key

If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key

Please enter the desired output file [elastic-stack-ca.p12]:  # enter the filename to save the CA as
Enter password for elastic-stack-ca.p12 : # enter a password for the certificate; we leave it blank here

# By default the CA certificate is written to the $ES_HOME directory
$ ll es01/
total 560
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12 # here it is
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38      

This command generates a PKCS#12 keystore file named `elastic-stack-ca.p12`, which contains the CA certificate and its private key.
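What such a keystore bundle looks like can be illustrated with plain openssl. This is a minimal sketch for demonstration only, assuming openssl is installed; the CA name is made up, and it does not replace elasticsearch-certutil:

```shell
# Build a throwaway CA and pack it into a PKCS#12 keystore, mirroring
# the structure that elasticsearch-certutil ca produces (demo only)
tmp=$(mktemp -d)

# Self-signed CA certificate + private key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Demo Autogenerated CA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Bundle certificate and key into one .p12 file with an empty password
openssl pkcs12 -export -passout pass: \
  -inkey "$tmp/ca.key" -in "$tmp/ca.crt" -name ca \
  -out "$tmp/demo-ca.p12"

# Inspect the keystore: it holds both the CA certificate and its key
openssl pkcs12 -nodes -passin pass: -in "$tmp/demo-ca.p12" \
  | grep -E 'BEGIN (CERTIFICATE|PRIVATE KEY)'
```

Parsing the file back out shows exactly the two pieces the text describes: one certificate and one private key in a single container.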

Create the certificate used for inter-node authentication:

# Command help:
$ ./bin/elasticsearch-certutil cert --help
generate X.509 certificates and keys

Option               Description                                             
------               -----------                                             
-E <KeyValuePair>    Configure a setting                                     
--ca                 path to an existing ca key pair (in PKCS#12 format)     
--ca-cert            path to an existing ca certificate                      
--ca-dn              distinguished name to use for the generated ca. defaults
                       to CN=Elastic Certificate Tool Autogenerated CA       
--ca-key             path to an existing ca private key                      
--ca-pass            password for an existing ca private key or the generated
                       ca private key                                        
--days <Integer>     number of days that the generated certificates are valid
--dns                comma separated DNS names   # specify DNS names (hostnames)
-h, --help           show help                                               
--in                 file containing details of the instances in yaml format 
--ip                 comma separated IP addresses   # specify IP addresses
--keep-ca-key        retain the CA private key for future use                
--keysize <Integer>  size in bits of RSA keys                                
--multiple           generate files for multiple instances                   
--name               name of the generated certificate                       
--out                path to the output file that should be produced         
--pass               password for generated private keys                     
--pem                output certificates and keys in PEM format instead of   
                       PKCS#12                                               
-s, --silent         show minimal output                                     
-v, --verbose        show verbose output

# Create the node certificate
$ cd es01
$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires an SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files

Enter password for CA (elastic-stack-ca.p12) :  # enter the CA certificate's password; we didn't set one, so just press Enter
Please enter the desired output file [elastic-certificates.p12]:  # enter the filename to save as; press Enter to keep the default
Enter password for elastic-certificates.p12 :  # enter a password for the certificate; leave blank and press Enter

Certificates written to /opt/elk74/elasticsearch-7.4.2-01/elastic-certificates.p12 # output location

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate.
$ ll
total 564
drwxr-xr-x  2 ec2-user ec2-user   4096 Oct 29 04:45 bin
drwxr-xr-x  2 ec2-user ec2-user    178 Nov 27 13:45 config
drwxrwxr-x  3 ec2-user ec2-user     19 Nov 27 13:46 data
-rw-------  1 ec2-user ec2-user   3451 Nov 27 15:10 elastic-certificates.p12 # here
-rw-------  1 ec2-user ec2-user   2527 Nov 27 15:05 elastic-stack-ca.p12 # and here
drwxr-xr-x  9 ec2-user ec2-user    107 Oct 29 04:45 jdk
drwxr-xr-x  3 ec2-user ec2-user   4096 Oct 29 04:45 lib
-rw-r--r--  1 ec2-user ec2-user  13675 Oct 29 04:38 LICENSE.txt
drwxr-xr-x  2 ec2-user ec2-user   4096 Nov 27 14:48 logs
drwxr-xr-x 37 ec2-user ec2-user   4096 Oct 29 04:45 modules
-rw-r--r--  1 ec2-user ec2-user 523209 Oct 29 04:45 NOTICE.txt
drwxr-xr-x  2 ec2-user ec2-user      6 Oct 29 04:45 plugins
-rw-r--r--  1 ec2-user ec2-user   8500 Oct 29 04:38      

This command generates a PKCS#12 keystore file named `elastic-certificates.p12`, which contains the node certificate, its private key, and the CA certificate.

The generated certificate contains no hostname information by default (it has no Subject Alternative Name fields), so it can be used on any node; however, you must then configure Elasticsearch to disable hostname verification.
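If you want hostname verification (`full` mode) later, the node certificate must carry hostname/IP information, which elasticsearch-certutil supports via the --dns and --ip options shown above. What those options end up embedding can be sketched with plain openssl (an assumption: openssl is available; es01.example.com and 10.0.0.1 are made-up values for the demo):

```shell
tmp=$(mktemp -d)

# Throwaway CA for the demo
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=Demo CA" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Node key and certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=es01" -keyout "$tmp/node.key" -out "$tmp/node.csr" 2>/dev/null

# Sign the CSR, embedding Subject Alternative Names; these are the fields
# that verification_mode: full checks against the peer's hostname/IP
printf 'subjectAltName=DNS:es01.example.com,IP:10.0.0.1\n' > "$tmp/san.cnf"
openssl x509 -req -days 365 -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -in "$tmp/node.csr" -out "$tmp/node.crt" \
  -extfile "$tmp/san.cnf" 2>/dev/null

# The chain verifies against the CA, and the SAN entries are present
openssl verify -CAfile "$tmp/ca.crt" "$tmp/node.crt"
openssl x509 -noout -text -in "$tmp/node.crt" | grep -A1 'Subject Alternative Name'
```

A certificate signed this way is tied to the listed names, so unlike the SAN-less certificate above it cannot be reused on arbitrary nodes.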

Configure the ES nodes to use this certificate:

$ mkdir config/certs
$ mv elastic-* config/certs/
$ ll config/certs/
total 8
-rw------- 1 ec2-user ec2-user 3451 Nov 27 15:10 elastic-certificates.p12
-rw------- 1 ec2-user ec2-user 2527 Nov 27 15:05 elastic-stack-ca.p12

# Copy this directory to every ES node
$ cp -a config/certs /opt/elk74/es02/config/
$ cp -a config/certs /opt/elk74/es03/config/

# Edit the elasticsearch.yml configuration file. Note that every node must be configured; the settings below use the PKCS#12-format certificate.
$ vim es01/config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate # verify the certificate only
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

# If you generated PEM-format certificates with --pem, use the following configuration instead:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate 
xpack.security.transport.ssl.key: /home/es/config/node01.key # private key
xpack.security.transport.ssl.certificate: /home/es/config/node01.crt # certificate
xpack.security.transport.ssl.certificate_authorities: [ "/home/es/config/ca.crt" ]  # CA certificate

# If you set a password on the node certificate, add that password to the elasticsearch keystore
## PKCS#12 format:
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

## PEM format
bin/elasticsearch-keystore add xpack.security.transport.ssl.secure_key_passphrase      

Note: the config/certs directory does not actually need a copy of the CA certificate file; copying the cert file alone is enough. I copied both here purely for convenience.

Also be sure to keep the CA certificate safe (and its password, if you set one), so it can be used later when adding ES nodes to the cluster.

xpack.security.transport.ssl.verification_mode configures the verification mode (see the official docs):

  • `full`: verifies that the certificate was signed by a trusted CA, and also that the server's hostname or IP address matches the one configured in the certificate.
  • `certificate`: the mode used here; only verifies that the certificate was signed by a trusted CA.
  • `none`: verifies nothing, which effectively disables SSL/TLS verification; use it only in an environment you trust completely.

With that in place, start the ES nodes again to test:

They start normally now. Good; let's go back and generate the built-in users' passwords. Run this on any one node:

$ ./es01/bin/elasticsearch-setup-passwords auto
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y # type y to confirm and continue


Changed password for user apm_system
PASSWORD apm_system = yc0GJ9QS4AP69pVzFKiX

Changed password for user kibana
PASSWORD kibana = UKuHceHWudloJk9NvHlX

Changed password for user logstash_system
PASSWORD logstash_system = N6pLSkNSNhT0UR6radrZ

Changed password for user beats_system
PASSWORD beats_system = BmsiDzgx1RzqHIWTri48

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = dflPnqGAQneqjhU1XQiZ

Changed password for user elastic
PASSWORD elastic = Tu8RPllSZz6KXkgZWFHv      

Check the cluster's nodes:

$ curl -u elastic localhost:9200/_cat/nodes
Enter host password for user 'elastic': # enter the elastic user's password: Tu8RPllSZz6KXkgZWFHv
172.17.0.87 14 92 18 0.16 0.11 0.37 dilm - es02
172.17.0.87  6 92 17 0.16 0.11 0.37 dilm - es03
172.17.0.87  8 92 19 0.16 0.11 0.37      

Note:

This only encrypts communication between the nodes of the ES cluster; the HTTP API is still authenticated with a username and password. If you need the stronger protection of SSL on the HTTP layer as well, see: TLS HTTP.
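For orientation only, TLS on the HTTP layer reuses the same keystore mechanics. A minimal sketch of the elasticsearch.yml settings involved, assuming the same certs directory as above (see the TLS HTTP guide for the full procedure, including the client-side changes):

```yaml
# Sketch: encrypt the HTTP (REST) layer too; clients must then connect
# with https:// and trust the CA certificate
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/elastic-certificates.p12
```

Note that once this is enabled, tools such as curl, Kibana, and Beats must all be updated to use https:// URLs.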

For the security configuration parameters, see the reference.

With that, a reasonably secure Elasticsearch cluster is up and running.

Installing and configuring Kibana

Next, install Kibana for convenient access from a browser.

Download link

$ wget -c "https://artifacts.elastic.co/downloads/kibana/kibana-7.4.2-linux-x86_64.tar.gz"
$ tar xf /opt/softs/elk7.4/kibana-7.4.2-linux-x86_64.tar.gz 
$ ln -s kibana-7.4.2-linux-x86_64 kibana      

Configure Kibana:

$ cat kibana/config/kibana.yml |grep -Ev "^$|^#"
server.port: 5601
server.host: "0.0.0.0"
server.name: "mykibana"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
elasticsearch.username: "kibana"  # this is the built-in account provisioned for kibana's connection
elasticsearch.password: "UKuHceHWudloJk9NvHlX"
# i18n.locale: "en"
i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # an arbitrary 32-character encryption key of your own
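The 32-character key does not need to be anything memorable; one way to generate it, sketched here assuming openssl is available:

```shell
# 16 random bytes, hex-encoded = a 32-character key suitable for
# xpack.security.encryptionKey (any string of 32 or more characters works)
key=$(openssl rand -hex 16)
echo "xpack.security.encryptionKey: $key"
```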

Browse to the Kibana host's IP on port 5601 and you will see the login page:

That completes a cluster on the free, never-expiring Basic license, with basic auth enabled and SSL/TLS between the nodes.

But wait: have you noticed that Kibana's configuration file still contains the username and password in plain text? So far only Linux file permissions protect them. Is there a safer way? There is: the keystore.

Kibana keystore secure settings

See the official docs

View the `kibana-keystore` command help:

$ ./bin/kibana-keystore --help
Usage: bin/kibana-keystore [options] [command]

A tool for managing settings stored in the Kibana keystore

Options:
  -V, --version           output the version number
  -h, --help              output usage information

Commands:
  create [options]        Creates a new Kibana keystore
  list [options]          List entries in the keystore
  add [options] <key>     Add a string setting to the keystore
  remove [options] <key>  Remove a setting from      

First, create the keystore:

$ bin/kibana-keystore create
Created Kibana keystore in /opt/elk74/kibana-7.4.2-linux-x86_64/data/kibana.keystore # default location

Add the settings:

We want to hide, or simply remove, the sensitive entries in kibana.yml, namely `elasticsearch.username` and `elasticsearch.password`.

So here we add those two keys to the keystore: `elasticsearch.username` and `elasticsearch.password`:

# View the add command's help:
$ ./bin/kibana-keystore add --help
Usage: add [options] <key>

Add a string setting to the keystore

Options:
  -f, --force   overwrite existing setting without prompting
  -x, --stdin   read setting value from stdin
  -s, --silent  prevent all logging
  -h, --help    output usage information

# Add the elasticsearch.username key; note the name must match the key used in kibana.yml
$ ./bin/kibana-keystore add elasticsearch.username
Enter value for elasticsearch.username: ******  # enter the value for this key, i.e. the account kibana uses to connect to ES: kibana

# Add the elasticsearch.password key
$ ./bin/kibana-keystore add elasticsearch.password
Enter value for      

Now simply delete those two entries from kibana.yml and start Kibana; it will pick the two settings up from the keystore automatically.

The final kibana.yml looks like this:

server.port: 5601
server.host: "0.0.0.0"
server.name: "mykibana"
elasticsearch.hosts: ["http://localhost:9200"]
kibana.index: ".kibana"
# i18n.locale: "en"
i18n.locale: "zh-CN"
xpack.security.encryptionKey: Hz*9yFFaPejHvCkhT*ddNx%WsBgxVSCQ # an arbitrary 32-character encryption key of your own

No sensitive values appear in the configuration file any more, which is a clear security improvement.

The keystore mechanism is not unique to Kibana; the other products in the Elastic Stack support it as well.

Correct procedure for full-cluster and rolling restarts in production

Later you may need to restart the whole cluster, or change some settings and restart the nodes one at a time. When a node goes offline, the ES cluster automatically replicates that node's shards to the remaining nodes and rebalances shards between them, which causes heavy I/O; for a planned restart, that I/O is entirely unnecessary.

Disable shard allocation
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
'
Stop indexing and perform a synced flush
curl -X POST "localhost:9200/_flush/synced?pretty"

After those two steps, shut down the whole cluster. When your changes are done, start the cluster back up, then re-enable the shard allocation you disabled earlier:
Re-enable shard allocation
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'      
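For repeated use, the curl calls above can be wrapped in small helper functions. This is a sketch under stated assumptions: the ES_URL variable and the function names are mine, and against the secured cluster built earlier you would also need to add -u elastic:<password> to each call:

```shell
# Hypothetical helpers around the allocation/flush steps above
ES_URL=${ES_URL:-localhost:9200}

disable_allocation() {
  # Restrict allocation to primaries so shards are not rebalanced
  # while nodes are down
  curl -s -X PUT "$ES_URL/_cluster/settings" \
    -H 'Content-Type: application/json' \
    -d '{"persistent":{"cluster.routing.allocation.enable":"primaries"}}'
}

synced_flush() {
  # Best-effort synced flush to speed up shard recovery after restart
  curl -s -X POST "$ES_URL/_flush/synced"
}

enable_allocation() {
  # Setting the value back to null removes the override,
  # restoring the default ("all")
  curl -s -X PUT "$ES_URL/_cluster/settings" \
    -H 'Content-Type: application/json' \
    -d '{"persistent":{"cluster.routing.allocation.enable":null}}'
}
```

With these defined, a restart becomes disable_allocation, synced_flush, stop/change/start the node(s), then enable_allocation.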

  

When rolling-restarting the cluster's nodes one by one, run the same two preparation steps before stopping each node; then stop the node, make the change, start it again, re-enable shard allocation, and wait for the cluster status to return to Green before moving on to the next node, and so on.
