1. Preparation
1.1 Image preparation
- Download the logstash image

docker pull logstash:7.9.1

This image is fairly large; if the download is too slow, you can configure an Aliyun Docker registry mirror to speed it up.
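A minimal sketch of the mirror setup, in case you need it (the accelerator URL below is a placeholder; Aliyun assigns each account its own address):

# /etc/docker/daemon.json (placeholder mirror address)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker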
- Check the downloaded logstash image
docker images |grep logstash
2. Logstash installation
- 2.1 Create directories
Directory                                 Purpose
/root/docker-compose/logstash             stores logstash's docker-compose.yml file
/root/docker-compose/logstash/config      stores logstash configuration files
/root/docker-compose/logstash/pipeline    stores pipeline configuration files

[[email protected] docker-compose]# mkdir -vp /root/docker-compose/logstash/{config,pipeline}
mkdir: created directory "/root/docker-compose/logstash"
mkdir: created directory "/root/docker-compose/logstash/config"
mkdir: created directory "/root/docker-compose/logstash/pipeline"
- 2.2 Create files
File                                                         Purpose
/root/docker-compose/logstash/docker-compose.yml             logstash container orchestration file
/root/docker-compose/logstash/config/pipelines.yml           configures one logstash instance to run multiple pipelines
/root/docker-compose/logstash/pipeline/log-kafka-dev.conf    development-environment pipeline; processes messages from the log_kafka_dev topic in Kafka

- 2.2.1 docker-compose.yml file contents
vim /root/docker-compose/logstash/docker-compose.yml
version: '2.2'
services:
  logstash:
    container_name: logstash
    image: logstash:7.9.1
    restart: always
    environment:
      NODE_NAME: ls01
      # automatically reload config files when they change
      CONFIG_RELOAD_AUTOMATIC: "true"
      # enable monitoring
      XPACK_MONITORING_ENABLED: "true"
      XPACK_MONITORING_ELASTICSEARCH_HOSTS: "http://192.168.1.14:9200"
    volumes:
      - ./config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ./pipeline:/usr/share/logstash/pipeline
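Before going further, you can optionally let docker-compose validate the file; it parses the YAML and prints the resolved configuration (a suggested sanity check, not part of the original steps):

cd /root/docker-compose/logstash
docker-compose config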
- 2.2.2 pipelines.yml file contents
vim /root/docker-compose/logstash/config/pipelines.yml
- pipeline.id: log-kafka-dev
  queue.type: memory
  path.config: "/usr/share/logstash/pipeline/log-kafka-dev.conf"
  pipeline.workers: 1
  pipeline.batch.size: 1000
# - pipeline.id: log-kafka-test
#   queue.type: persisted
#   path.config: "/usr/share/logstash/pipeline/log-kafka-test.conf"
#   pipeline.workers: 2
#   pipeline.batch.size: 3000
# - pipeline.id: log-kafka-prod
#   queue.type: persisted
#   path.config: "/usr/share/logstash/pipeline/log-kafka-prod.conf"
#   pipeline.workers: 8
#   pipeline.batch.size: 3000
- 2.2.3 log-kafka-dev.conf file contents
vim /root/docker-compose/logstash/pipeline/log-kafka-dev.conf
input {
  kafka {
    bootstrap_servers => "192.168.1.14:9092"   # Kafka address
    auto_offset_reset => "earliest"            # where to start reading messages
    topics => ["log_kafka_dev"]                # Kafka topic name; remember to create this topic
    group_id => "logstash-7.9.1"               # defaults to "logstash"
    codec => "json"                            # must match the shipper's output configuration
    consumer_threads => 3                      # number of consumer threads
    max_poll_records => "2000"
    decorate_events => true                    # attach Kafka metadata to each event: message size, source topic, and consumer group
  }
}
filter {
  # add fields for the Kafka partition, offset, and timestamp
  mutate {
    add_field => {
      kafkaPartition => "%{[@metadata][kafka][partition]}"
      kafkaOffset => "%{[@metadata][kafka][offset]}"
      kafkaTime => "%{[@metadata][kafka][timestamp]}"
    }
  }
  # convert partition and offset to numeric types ("integer" here also covers Java's long)
  mutate {
    convert => ["kafkaPartition", "integer"]
    convert => ["kafkaOffset", "integer"]
  }
}
output {
  # print events to the console
  stdout {
    codec => rubydebug
  }
}
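The input above assumes the log_kafka_dev topic already exists. A hedged sketch for creating it and pushing a test message (partition and replication counts are assumptions; adjust the script paths to your Kafka installation, and note that --bootstrap-server on kafka-topics.sh requires Kafka 2.2+):

# create the topic; 3 partitions match consumer_threads => 3 above
kafka-topics.sh --create --bootstrap-server 192.168.1.14:9092 --topic log_kafka_dev --partitions 3 --replication-factor 1
# produce one JSON test message; the pipeline's codec => "json" will parse it
echo '{"message":"hello from kafka"}' | kafka-console-producer.sh --broker-list 192.168.1.14:9092 --topic log_kafka_dev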
- 2.3 Start logstash
# change into the directory containing docker-compose.yml
[[email protected] ~]# cd /root/docker-compose/logstash
# start logstash
[[email protected] logstash]# docker-compose up -d
Creating network "logstash_default" with the default driver
Creating logstash ... done
# check the startup status with docker ps
[[email protected] logstash]# docker ps |grep logstash
ddce8cdc35aa logstash:7.9.1 "/usr/local/bin/do..." 32 seconds ago Up 30 seconds 5044/tcp, 9600/tcp logstash
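To confirm the pipeline actually loaded, you can also query logstash's monitoring API on port 9600 from inside the container (a suggested check, assuming curl is available in the image, which it is in the official CentOS-based 7.x images):

docker exec logstash curl -s "http://localhost:9600/_node/pipelines?pretty"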
3. Parsing logs with logstash
3.1 Start the logback-kafka-springboot project
Run the main method of the LogbackKafkaSpringbootApplication class to start the project (see the logback-kafka-springboot project introduction for background). It simulates writing logs to the log_kafka_dev topic in Kafka; a sketch of the logback side follows below.
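For context, here is a minimal sketch of what the project's logback configuration might look like, assuming it uses the logback-kafka-appender and logstash-logback-encoder libraries (the class names come from those libraries; whether the actual project is built this way is an assumption):

<!-- logback-spring.xml: hypothetical sketch, not the project's actual file -->
<configuration>
  <appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <!-- encode each event as JSON, matching the pipeline's codec => "json" -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
    <topic>log_kafka_dev</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
    <producerConfig>bootstrap.servers=192.168.1.14:9092</producerConfig>
  </appender>
  <root level="INFO">
    <appender-ref ref="kafka"/>
  </root>
</configuration>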
3.2 Watch log parsing through the container logs
docker logs -f logstash
In a browser, open http://localhost:8080/log/1 to generate a new log entry and watch it being parsed.
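You can trigger the same endpoint from a shell, and below the command is a rough illustration of the rubydebug event you should see on the console (all field values here are made up for illustration):

curl http://localhost:8080/log/1

{
        "@timestamp" => 2020-09-21T08:00:00.123Z,
          "@version" => "1",
           "message" => "log message 1",
    "kafkaPartition" => 0,
       "kafkaOffset" => 42,
         "kafkaTime" => "1600675200123"
}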
As the output shows, once a log entry is generated it appears on the console within seconds.