
Filebeat + Kafka + ELK Production Deployment (Security Hardened)

Architecture Overview

  1. Filebeat is deployed on the servers whose logs need collecting; it pushes the collected log data to the Kafka cluster.
  2. Logstash reads data from the corresponding Kafka topic through its input plugin, can apply custom filtering and parsing with filter plugins along the way, and then pushes the data into the Elasticsearch cluster through its output plugin.
  3. Finally, users aggregate, analyze, search, and visualize the indexed data through the web interface provided by Kibana.

This article aims to deploy a secure and reliable production architecture: ELK is hardened with X-Pack security, and Kafka is hardened with SASL!

Preparation

Hostname  IP address  Roles  OS version
es83 192.168.100.83 filebeat,es,logstash,kafka,kibana CentOS 7.6
es86 192.168.100.86 es,logstash,kafka CentOS 7.6
es87 192.168.100.87 es,logstash,kafka CentOS 7.6

The ELK stack version used in this article is 7.2.0, and the Kafka version is 2.12-2.3.0.

Environment Configuration

The main steps are: disable the SELinux security mechanism, stop the firewalld firewall, turn off the swap space, configure the file and memory limits, set the JVM parameters, create a regular user, and prepare the disk storage directories. Setting up passwordless SSH between the servers is also recommended.

auto_elk_env.sh

#!/bin/bash
echo "##### Update /etc/hosts #####"
cat >> /etc/hosts <<EOF
192.168.100.83 es83
192.168.100.86 es86
192.168.100.87 es87
EOF

echo "##### Stop firewalld #####"
systemctl stop firewalld
systemctl disable firewalld

echo "##### Close selinux #####"
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

echo "##### Close swap #####"
swapoff -a

# Note: after modifying this file, you must log in to the terminal again for it to take effect; verify with ulimit -a.
echo "##### Modify /etc/security/limits.conf #####"
cat > /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF

echo "##### Modify /etc/sysctl.conf #####"
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=562144
EOF
sysctl -p

echo "##### Create user (any password will do) #####"
useradd elkuser
echo 123456 | passwd --stdin elkuser

echo "##### Configure passwordless SSH #####"
ssh-keygen   # just press Enter through all prompts
ssh-copy-id 192.168.100.83
ssh-copy-id 192.168.100.86
ssh-copy-id 192.168.100.87
           

Elasticsearch Cluster Deployment

Elasticsearch is a distributed, RESTful search and analytics engine. It implements an inverted index for full-text retrieval and indexes every field, which makes searches very fast; it is scalable and resilient, capable of handling huge volumes of events per second, and it works with all data types, such as structured data, unstructured data, and geolocations.
In production, the author allocated 30 GB of memory to Elasticsearch (do not exceed 32 GB), six 446 GB SSD disks, and the G1 garbage-collection policy; size the hardware according to your own situation!
Note: the author downloaded all software packages to the servers in advance. The three ES nodes in this article all act as both master and data nodes by default; when X-Pack encryption is used, master nodes must also be data nodes, otherwise the security configuration cannot be written into ES storage!

In this article, the whole ES cluster is deployed from node 83, so read the commands below carefully!

# Download: wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Elasticsearch #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf elasticsearch-7.2.0-linux-x86_64.tar.gz

echo "##### Edit the jvm file #####"
[root@es83 elkuser]# cd ./elasticsearch-7.2.0/
[root@es83 elasticsearch-7.2.0]# sed -i -e 's/1g/30g/g' -e '36,38s/^-/#&/g' ./config/jvm.options
[root@es83 elasticsearch-7.2.0]# sed -i -e 'N;38 a -XX:+UseG1GC \n-XX:MaxGCPauseMillis=50' ./config/jvm.options
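The two sed commands above are a compact way of hand-editing config/jvm.options; the intended end state is roughly the following (a sketch only — the exact CMS lines at positions 36-38 may differ slightly in your copy of the 7.2.0 file):

```
-Xms30g
-Xmx30g
#-XX:+UseConcMarkSweepGC
#-XX:CMSInitiatingOccupancyFraction=75
#-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseG1GC 
-XX:MaxGCPauseMillis=50
```

That is: a 30 GB heap, the CMS collector options commented out, and G1 enabled with a 50 ms pause-time target.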

echo "##### Generate the CA certificate #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil ca
......
Please enter the desired output file [elastic-stack-ca.p12]: press Enter
Enter password for elastic-stack-ca.p12 : press Enter

echo "##### Use the CA to generate certificates for all es nodes #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.83
......
Enter password for CA (elastic-stack-ca.p12) : press Enter
Please enter the desired output file [elastic-certificates.p12]: es83.p12
Enter password for es83.p12 : press Enter

[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.86
......
Enter password for CA (elastic-stack-ca.p12) : press Enter
Please enter the desired output file [elastic-certificates.p12]: es86.p12
Enter password for es86.p12 : press Enter

[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.87
......
Enter password for CA (elastic-stack-ca.p12) : press Enter
Please enter the desired output file [elastic-certificates.p12]: es87.p12
Enter password for es87.p12 : press Enter

echo "##### Use the CA to generate the certificate later needed by logstash #####"
[root@es83 elasticsearch-7.2.0]# openssl pkcs12 -in elastic-stack-ca.p12 -clcerts -nokeys > root.cer
[root@es83 elasticsearch-7.2.0]# openssl x509 -in root.cer -out root.pem

echo "##### Use the CA to generate the certificate later needed by kibana #####"
[root@es83 elasticsearch-7.2.0]# ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -name "CN=something,OU=Consulting Team,DC=mydomain,DC=com"
......
Enter password for CA (elastic-stack-ca.p12) : press Enter
Please enter the desired output file [CN=something,OU=Consulting Team,DC=mydomain,DC=com.p12]: client.p12
Enter password for client.p12 : press Enter

echo "##### Move the generated certificates into the target directory #####"
[root@es83 elasticsearch-7.2.0]# cp *.p12 ./config/

echo "##### Write the es config file #####"
[root@es83 elasticsearch-7.2.0]# cat > ./config/elasticsearch.yml <<EOF
cluster.name: chilu_elk
node.name: es83
node.master: true
node.data: true
path.data: /logdata/data1,/logdata/data2,/logdata/data3,/logdata/data4,/logdata/data5,/logdata/data6
bootstrap.memory_lock: true
bootstrap.system_call_filter: false
network.host: 192.168.100.83
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
cluster.initial_master_nodes: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
node.max_local_storage_nodes: 256
indices.fielddata.cache.size: 50%
http.cors.enabled: true
http.cors.allow-origin: "*"

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: es83.p12
xpack.security.transport.ssl.truststore.path: elastic-stack-ca.p12

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: es83.p12
xpack.security.http.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.client_authentication: optional
EOF

echo "##### scp the directory to the other nodes and adjust their configs #####"
[root@es83 elasticsearch-7.2.0]# cd ../
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.86:/home/elkuser/
[root@es83 elkuser]# scp -r ./elasticsearch-7.2.0 192.168.100.87:/home/elkuser/
[root@es83 elkuser]# ssh 192.168.100.86 "sed -i -e 's/es83/es86/g' -e '8s/192.168.100.83/192.168.100.86/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
[root@es83 elkuser]# ssh 192.168.100.87 "sed -i -e 's/es83/es87/g' -e '8s/192.168.100.83/192.168.100.87/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"

echo "##### Change the owner and group of the directories #####"
[root@es83 elkuser]# chown -R elkuser:elkuser /logdata ./elasticsearch-7.2.0
[root@es83 elkuser]# ssh 192.168.100.86 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
[root@es83 elkuser]# ssh 192.168.100.87 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"

echo "##### Switch to the regular user and run the elasticsearch service in the background #####"
[root@es83 elasticsearch-7.2.0]# su elkuser
[elkuser@es83 elasticsearch-7.2.0]$ ./bin/elasticsearch -d
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.86 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"
[elkuser@es83 elasticsearch-7.2.0]$ ssh elkuser@192.168.100.87 "/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch -d"

echo "##### Auto-generate the built-in users' passwords (save them somewhere safe!) #####"
[elkuser@es83 elasticsearch-7.2.0]$ echo y | ./bin/elasticsearch-setup-passwords auto | tee elk_pwd.log
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.

Changed password for user apm_system
PASSWORD apm_system = HojN4w88Nwgl51Oe7o12

Changed password for user kibana
PASSWORD kibana = JPYDvJYn2CDmls5gIlNG

Changed password for user logstash_system
PASSWORD logstash_system = kXxmVCX34PGpUluBXABX

Changed password for user beats_system
PASSWORD beats_system = rY90aBHjAdidQPwgX87u

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 0VxaGROqo255y60P1kBV

Changed password for user elastic
PASSWORD elastic = NvOBRGpUE3DoaSbYaUp3

echo "##### Test the ES encryption; check that the cluster status is green #####"
[elkuser@es83 elasticsearch-7.2.0]$ curl --tlsv1 -XGET "https://192.168.100.83:9200/_cluster/health?pretty" --user elastic:NvOBRGpUE3DoaSbYaUp3 -k
           

Kafka Cluster Deployment

Kafka, originally developed at LinkedIn, is a distributed, partitioned, replicated, multi-subscriber messaging system coordinated through ZooKeeper. It offers high throughput, low latency, scalability, durability, reliability, fault tolerance, and high concurrency: it can handle hundreds of thousands of messages with latencies of only a few milliseconds, supports hot scaling when deployed as a cluster, persists messages to local disk to guard against data loss, and supports thousands of clients reading and writing concurrently.

In this architecture, Kafka serves as a buffering message queue: it receives logs in real time and feeds them to Logstash, decoupling the pipeline and smoothing traffic spikes, which solves the data loss caused by Logstash not being able to consume fast enough. The author uses the ZooKeeper bundled with Kafka, also deployed as a cluster, so there is no need to build a separate ZooKeeper cluster.

Note: Kafka's cluster configuration and state are kept inside the ZooKeeper process, so ZooKeeper must be configured and started before Kafka!

The author allocated 4 GB of memory to the ZooKeeper service, and 31 GB of memory plus five SSD disks to the Kafka service; size the hardware according to your own situation!
# Download: wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
echo "##### Extract Kafka #####"
[root@es83 ~]# cd /opt/
[root@es83 opt]# tar -xvf ./kafka_2.12-2.3.0.tgz

echo "##### Write the zookeeper config file #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zookeeper.properties <<EOF
dataDir=/opt/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.100.83:2888:3888
server.2=192.168.100.86:2888:3888
server.3=192.168.100.87:2888:3888

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
4lw.commands.whitelist=
EOF

echo "##### Create the zookeeper data directory and its myid file #####"
[root@es83 kafka_2.12-2.3.0]# mkdir /opt/zookeeper
[root@es83 kafka_2.12-2.3.0]# echo 1 > /opt/zookeeper/myid

echo "##### Write the kafka config file #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/server.properties <<EOF
broker.id=83
listeners=SASL_PLAINTEXT://192.168.100.83:9092
advertised.listeners=SASL_PLAINTEXT://192.168.100.83:9092
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=1048576000
log.dirs=/logdata/kfkdata1,/logdata/kfkdata2,/logdata/kfkdata3,/logdata/kfkdata4,/logdata/kfkdata5
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=72
log.segment.delete.delay.ms=1000
log.cleaner.enable=true
log.cleanup.policy=delete
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.100.83:2181,192.168.100.86:2181,192.168.100.87:2181
zookeeper.connection.timeout.ms=60000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true

security.inter.broker.protocol=SASL_PLAINTEXT  
sasl.enabled.mechanisms=PLAIN  
sasl.mechanism.inter.broker.protocol=PLAIN  
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin;User:kafka
EOF

echo "##### Create the sasl jaas files for zk and kafka #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./config/zk_server_jaas.conf <<EOF
Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required 
    username="admin" 
    password="[email protected]" 
    user_kafka="[email protected]" 
    user_producer="[email protected]";
};
EOF

[root@es83 kafka_2.12-2.3.0]# cat > ./config/kafka_server_jaas.conf <<EOF
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="[email protected]"
  user_admin="[email protected]"
  user_producer="[email protected]"
  user_consumer="[email protected]";
};

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="[email protected]";
};

Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="kafka"
  password="[email protected]";
};
EOF

echo "##### Edit the zk and kafka start scripts (add the SASL environment config) #####"
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/512M/4G/g' -e 's#Xms4G#Xms4G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/zk_server_jaas.conf#' ./bin/zookeeper-server-start.sh
[root@es83 kafka_2.12-2.3.0]# sed -i -e 's/1G/31G/g' -e 's#Xms31G#Xms31G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_server_jaas.conf#' ./bin/kafka-server-start.sh

echo "##### 将相關目錄複制到其他兩台節點上,并進行修改 #####"
[[email protected] kafka_2.12-2.3.0]# cd ../
[[email protected] opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.86:/opt/
[[email protected] opt]# scp -r ./zookeeper ./kafka_2.12-2.3.0 192.168.100.87:/opt/
[[email protected] opt]# ssh 192.168.100.86 "echo 2 > /opt/zookeeper/myid ; sed -i '1,3s/83/86/' /opt/kafka_2.12-2.3.0/config/server.properties"
[[email protected] opt]# ssh 192.168.100.87 "echo 3 > /opt/zookeeper/myid ; sed -i '1,3s/83/87/' /opt/kafka_2.12-2.3.0/config/server.properties"

echo "##### Start the zookeeper service in the background #####"
[root@es83 opt]# cd ./kafka_2.12-2.3.0/
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"

echo "##### Start the kafka service in the background #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-server-start.sh -daemon ./config/server.properties
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.86 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
[root@es83 kafka_2.12-2.3.0]# ssh 192.168.100.87 "/opt/kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
           

Once both the zk and kafka services are running, first check that the relevant ports are listening (2181 is the ZooKeeper client port, 2888/3888 are the quorum and leader-election ports, and 9092 is the Kafka broker port):

[root@es83 kafka_2.12-2.3.0]# netstat -antlp | grep -E "2888|3888|2181|9092"
           
Once the cluster is fully healthy, ACL access control can be configured on any one of the Kafka nodes, granting the producer and consumer users access to their topics and groups and, if desired, restricting access to specified hosts.
Note: mykafka below is a hostname for an IP defined in /etc/hosts, e.g. 192.168.100.83 mykafka. If you use localhost instead, you may lack permission and the commands will report NoAuth; if you use the raw IP address, they will report CONNECT errors!
echo "##### Script to configure the ACLs #####"
[root@es83 kafka_2.12-2.3.0]# cat > ./kfkacls.sh <<EOF
#!/bin/bash
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:producer --topic elk --producer --group chilu
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --add --allow-principal User:consumer --topic elk --consumer --group chilu
EOF

echo "##### Run the script #####"
[root@es83 kafka_2.12-2.3.0]# bash ./kfkacls.sh

echo "##### List the ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=mykafka:2181 --list
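To verify the SASL credentials and ACLs end to end, the console clients bundled with Kafka can be pointed at the broker with a small client config. The snippet below is a sketch, not part of the original deployment; the password placeholder must be replaced with the real consumer password defined in kafka_server_jaas.conf:

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="consumer" password="<consumer-password>";
```

Saved as, say, ./config/client.properties, it can be passed to the consumer with ./bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.83:9092 --topic elk --group chilu --consumer.config ./config/client.properties, and analogously (with the producer user) to kafka-console-producer.sh via --producer.config. If automatic topic creation is disabled, the elk topic has to be created first with ./bin/kafka-topics.sh --create --zookeeper mykafka:2181 --replication-factor 3 --partitions 3 --topic elk (the factor and partition counts here are illustrative).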

# Note: the commands below are interactive
echo "##### Add ZooKeeper ACLs #####"
[root@es83 kafka_2.12-2.3.0]# ./bin/zookeeper-shell.sh mykafka:2181
Welcome to ZooKeeper!
JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

You can now type commands directly in this console,
e.g. ls / to list the ZK tree.

Check the default permissions:
getAcl /

By default anyone can read. To add restrictions (allow only the Kafka hosts' IPs):
setAcl / ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa
setAcl /kafka-acl ip:192.168.100.83:cdrwa,ip:192.168.100.86:cdrwa,ip:192.168.100.87:cdrwa

Check that it took effect:
getAcl /
Output:
'ip,'192.168.100.83
: cdrwa
'ip,'192.168.100.86
: cdrwa
'ip,'192.168.100.87
: cdrwa

Exit:
quit
           

Logstash Service Deployment

Logstash is a free and open server-side data processing pipeline with a pluggable architecture and more than 200 plugins; it supports a wide range of inputs and outputs and can parse and transform data in real time, and it is scalable, resilient, and flexible. It is, however, resource-hungry, occupying a lot of CPU and memory at runtime, and without a message queue as a buffer there is a risk of data loss, so weigh this against your own situation!
In production, the author allocated 30 GB of memory to Logstash as well; size the hardware according to your own situation!
# Download: wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
echo "##### Extract Logstash #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf ./logstash-7.2.0.tar.gz

echo "##### Adjust the startup memory #####"
[root@es83 elkuser]# cd ./logstash-7.2.0/
[root@es83 logstash-7.2.0]# sed -i -e 's/1g/30g/g' ./config/jvm.options

echo "##### Copy the required certificate into the logstash directory #####"
[root@es83 elkuser]# cd ./logstash-7.2.0/config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/root.pem ./

echo "##### Write the logstash config file #####"
[root@es83 config]# cat > ./logstash.yml <<EOF
http.host: "192.168.100.83"
node.name: "logstash83"
xpack.monitoring.elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.monitoring.elasticsearch.ssl.certificate_authority: config/root.pem
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.collection.interval: 30s
xpack.monitoring.collection.pipeline.details.enabled: true
EOF

# Note: the username and password must match the Kafka configuration!
echo "##### Client file for connecting to kafka #####"
[root@es83 config]# cat > ./kafka-client-jaas.conf <<EOF
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="consumer"
    password="[email protected]";
};
EOF

echo "##### Example input and output config #####"
[root@es83 config]# cat > ./test.cfg <<EOF
input {
    kafka {
       bootstrap_servers => "192.168.100.83:9092,192.168.100.86:9092,192.168.100.87:9092"
       client_id => "chilu83"
       auto_offset_reset => "latest"
       topics => "elk"
       group_id => "chilu"
       security_protocol => "SASL_PLAINTEXT"
       sasl_mechanism => "PLAIN"
       jaas_path => "/home/elkuser/logstash-7.2.0/config/kafka-client-jaas.conf"
    }
}

filter {
}

output {
    elasticsearch {
        hosts => ["192.168.100.83:9200","192.168.100.86:9200","192.168.100.87:9200"]
        user => "elastic"
        password => "NvOBRGpUE3DoaSbYaUp3"
        ssl => true
        cacert => "/home/elkuser/logstash-7.2.0/config/root.pem"
        index => "chilu_elk%{+YYYY.MM.dd}"
    }
}
EOF

echo "##### Start the logstash service #####"
[root@es83 config]# ../bin/logstash -r -f ./test.cfg
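The filter block in test.cfg is left empty above; this is where custom parsing would go. As an illustrative sketch only (it assumes the shipped access.log is in Apache/Nginx combined format, which the original article does not state), a grok-based filter could look like:

```
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
}
```

The date filter rewrites @timestamp from the parsed log time, so events are indexed under the time they were logged rather than the time they were processed.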
           

Kibana Service Deployment

Kibana is an open-source analytics and visualization platform that provides efficient search, visual summaries, and multi-dimensional analysis of the log data from Logstash and Elasticsearch, interacting directly with the data in Elasticsearch indices; its browser-based interface lets you quickly build dynamic dashboards and monitor the state and changes of Elasticsearch data in real time.
In production, the author allocated 8 GB of memory to Kibana; size the hardware according to your own situation!
# Download: wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Kibana #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf kibana-7.2.0-linux-x86_64.tar.gz

echo "##### Adjust the startup memory #####"
[root@es83 elkuser]# cd ./kibana-7.2.0-linux-x86_64/
[root@es83 kibana-7.2.0-linux-x86_64]# sed -i 's/warnings/warnings --max_old_space_size=8096/' ./bin/kibana

echo "##### Copy the required certificate into the kibana directory #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cd ./config/
[root@es83 config]# cp /home/elkuser/elasticsearch-7.2.0/client.p12 ./

echo "##### Use client.p12 to generate the other required certificates #####"
[root@es83 config]# openssl pkcs12 -in client.p12 -nocerts -nodes > client.key
Enter Import Password: press Enter
MAC verified OK

[root@es83 config]# openssl pkcs12 -in client.p12 -clcerts -nokeys > client.cer
Enter Import Password: press Enter
MAC verified OK

[root@es83 config]# openssl pkcs12 -in client.p12 -cacerts -nokeys -chain > client-ca.cer
Enter Import Password: press Enter
MAC verified OK

echo "##### Switch the kibana web UI to https #####"
[root@es83 config]# cd ../
[root@es83 kibana-7.2.0-linux-x86_64]# openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 -out server.crt -subj "/C=CN/ST=guangzhou/L=rljie/O=chilu/OU=linux/"

echo "##### Write the kibana config file #####"
[root@es83 kibana-7.2.0-linux-x86_64]# cat > ./config/kibana.yml <<EOF
server.name: kibana
server.host: "192.168.100.83"
elasticsearch.hosts: [ "https://192.168.100.83:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.security.enabled: true
elasticsearch.ssl.certificateAuthorities: config/client-ca.cer
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"
xpack.reporting.encryptionKey: "4297f44b13955235245b2497399d7a93"
server.ssl.enabled: true
server.ssl.certificate: server.crt
server.ssl.key: server.key
EOF

echo "##### Start the kibana service in the background with nohup (choose your own daemonizing method) #####"
[root@es83 kibana-7.2.0-linux-x86_64]# nohup ./bin/kibana --allow-root &
           

After completing the steps above, open the Kibana address https://192.168.100.83 in a browser and log in with the elastic user's password!


curl example

curl --tlsv1 -XGET 'https://192.168.100.83:9200/_cluster/health?pretty' --cacert '/home/elkuser/elasticsearch-7.2.0/root.pem' --user elastic:NvOBRGpUE3DoaSbYaUp3
           

Filebeat Service Deployment

Filebeat is a lightweight shipper for forwarding and centralizing log data. Written in Go, it is stable in performance, simple to configure, and uses very few resources. Installed as an agent on your servers, it monitors the log files or locations you specify, collects log events, and forwards them to the configured output; it does its work mainly through the prospector and harvester components.
# Download: wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
echo "##### Extract Filebeat #####"
[root@es83 ~]# cd /home/elkuser/
[root@es83 elkuser]# tar -xvf filebeat-7.2.0-linux-x86_64.tar.gz

echo "##### Write the filebeat config file #####"
[root@es83 elkuser]# cd ./filebeat-7.2.0-linux-x86_64/
[root@es83 filebeat-7.2.0-linux-x86_64]# cat > ./filebeat.yml <<\EOF
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/access.log
  close_timeout: 1h
  clean_inactive: 3h
  ignore_older: 2h

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true

setup.template.settings:
  index.number_of_shards: 3

setup.kibana:

output.kafka:
  hosts: ["192.168.100.83:9092","192.168.100.86:9092","192.168.100.87:9092"]
  topic: elk
  required_acks: 1
  username: "producer"
  password: "[email protected]"
EOF
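If several applications ship logs through the same Kafka topic, it helps to tag events at the source so Logstash can tell them apart. The lines below are a hedged sketch of additions to the input section of filebeat.yml (the field names are illustrative, not from the original config):

```
  fields:
    app: nginx-access
    env: production
  fields_under_root: true
```

fields_under_root: true places the custom keys at the top level of the event instead of under a fields object, which makes them easier to reference in Logstash conditionals.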

echo "##### Start the filebeat service in the background with nohup #####"
[root@es83 filebeat-7.2.0-linux-x86_64]# nohup ./filebeat -e -c filebeat.yml &
           

That concludes the walkthrough of this architecture; be careful and thorough when you build and deploy it yourselves!
