1. Preparation
1.1 Image preparation
- Pull the logstash image
This image is fairly large; if the download is too slow, you can configure the Alibaba Cloud Docker registry mirror.
docker pull logstash:7.9.1
- Check the downloaded logstash image
docker images |grep logstash
2. Installing logstash
- 2.1 Create directories
Directory                                  Purpose
/root/docker-compose/logstash              holds the docker-compose.yml file for logstash
/root/docker-compose/logstash/config       holds logstash configuration files
/root/docker-compose/logstash/pipeline     holds pipeline configuration files

mkdir -vp /root/docker-compose/logstash/{config,pipeline}
mkdir: created directory "/root/docker-compose/logstash"
mkdir: created directory "/root/docker-compose/logstash/config"
mkdir: created directory "/root/docker-compose/logstash/pipeline"
- 2.2 Create files

File                                                        Purpose
/root/docker-compose/logstash/docker-compose.yml            container orchestration file for logstash
/root/docker-compose/logstash/config/pipelines.yml          configures one logstash instance to run multiple pipelines
/root/docker-compose/logstash/pipeline/log-kafka-dev.conf   development pipeline; processes messages from the log_kafka_dev topic in Kafka

- 2.2.1 Contents of docker-compose.yml
vim /root/docker-compose/logstash/docker-compose.yml
version: '2.2'
services:
  logstash:
    container_name: logstash
    image: logstash:7.9.1
    restart: always
    environment:
      NODE_NAME: ls01
      # automatically reload configuration files on change
      CONFIG_RELOAD_AUTOMATIC: "true"
      # enable monitoring
      XPACK_MONITORING_ENABLED: "true"
      XPACK_MONITORING_ELASTICSEARCH_HOSTS: "http://192.168.1.14:9200"
    volumes:
      - ./config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./pipeline:/usr/share/logstash/pipeline
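Before bringing the stack up, it can be worth sanity-checking the compose file. A minimal sketch, assuming docker-compose is installed and you are in the directory created above:

```shell
# validate and print the resolved compose configuration;
# exits non-zero if the YAML is malformed
cd /root/docker-compose/logstash
docker-compose config
```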
- Contents of pipelines.yml
vim /root/docker-compose/logstash/config/pipelines.yml
- pipeline.id: log-kafka-dev
  queue.type: memory
  path.config: "/usr/share/logstash/pipeline/log-kafka-dev.conf"
  pipeline.workers: 1
  pipeline.batch.size: 1000
# - pipeline.id: log-kafka-test
#   queue.type: persisted
#   path.config: "/usr/share/logstash/pipeline/log-kafka-test.conf"
#   pipeline.workers: 2
#   pipeline.batch.size: 3000
# - pipeline.id: log-kafka-prod
#   queue.type: persisted
#   path.config: "/usr/share/logstash/pipeline/log-kafka-prod.conf"
#   pipeline.workers: 8
#   pipeline.batch.size: 3000
- Contents of log-kafka-dev.conf
vim /root/docker-compose/logstash/pipeline/log-kafka-dev.conf
input {
  kafka {
    bootstrap_servers => "192.168.1.14:9092"  # Kafka address
    auto_offset_reset => "earliest"           # where to start reading messages
    topics => ["log_kafka_dev"]               # Kafka topic name; remember to create this topic
    group_id => "logstash-7.9.1"              # defaults to "logstash"
    codec => "json"                           # must match the output configuration on the shipper side
    consumer_threads => 3                     # number of consumer threads
    max_poll_records => "2000"
    decorate_events => true                   # attach metadata to each event: message size, source topic, and consumer group
  }
}
filter {
  # add fields: Kafka partition, offset, and timestamp
  mutate {
    add_field => {
      kafkaPartition => "%{[@metadata][kafka][partition]}"
      kafkaOffset => "%{[@metadata][kafka][offset]}"
      kafkaTime => "%{[@metadata][kafka][timestamp]}"
    }
  }
  # convert partition and offset to numeric types ("integer" here also covers Java's long)
  mutate {
    convert => ["kafkaPartition", "integer"]
    convert => ["kafkaOffset", "integer"]
  }
}
output {
  # print events to the console
  stdout { codec => rubydebug }
}
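As the comment on `topics` notes, the log_kafka_dev topic must exist before logstash starts consuming. A hedged sketch, assuming Kafka runs in a container named `kafka` with the standard `kafka-topics.sh` script on its PATH (adjust both to your setup); three partitions are chosen to match `consumer_threads => 3`:

```shell
# create the topic the dev pipeline consumes from;
# container name "kafka" and script location are assumptions
docker exec kafka kafka-topics.sh --create \
  --bootstrap-server 192.168.1.14:9092 \
  --topic log_kafka_dev \
  --partitions 3 --replication-factor 1
```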
- 2.3 Start logstash
# change into the directory containing docker-compose.yml
cd /root/docker-compose/logstash
# start logstash
docker-compose up -d
Creating network "logstash_default" with the default driver
Creating logstash ... done
# check the container status with docker ps
docker ps | grep logstash
ddce8cdc35aa   logstash:7.9.1   "/usr/local/bin/do..."   32 seconds ago   Up 30 seconds   5044/tcp, 9600/tcp   logstash
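The `docker ps` output shows port 9600 (the logstash node API) exposed but not published to the host, since the compose file defines no port mappings. A minimal sketch for checking that the instance is healthy, assuming curl is available inside the official logstash:7.9.1 image:

```shell
# query the node API from inside the container;
# returns JSON with the node name, version, and pipeline status
docker exec logstash curl -s http://localhost:9600/?pretty
```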
3. Log parsing with logstash
3.1 Start the logback-kafka-springboot project
Introduction to the logback-kafka-springboot project
Run the main method of the LogbackKafkaSpringbootApplication class to start the project, which simulates writing logs to the log_kafka_dev topic in Kafka.
3.2 Check log parsing through the logstash logs
docker logs -f logstash
Open http://localhost:8080/log/1 in a browser to generate a new log entry, then watch how it is parsed.
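Instead of the browser, the same endpoint can be hit from the command line. A sketch assuming the sample project exposes `/log/{id}` on port 8080 as above:

```shell
# generate a few log entries in a row and watch them
# appear in the `docker logs -f logstash` output
for i in 1 2 3; do
  curl -s http://localhost:8080/log/$i
done
```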
As the output shows, once a log entry is generated it is echoed to the console within seconds.