Deploying ELK with docker-compose
- Base environment installation
  - docker-compose installation
  - git installation
- Command completion installation
  - System command completion
  - docker-compose command completion
- Deploying ELK with docker-compose
  - Cloning the GitHub project
  - docker-compose configuration
  - docker-compose up (builds images and creates the docker network)
  - docker-compose down (removes containers and the docker network)
  - docker-compose stop (keeps containers and the docker network)
  - docker-compose start (restarts from the stopped state)
  - Problems that may appear after startup
- Logstash port test
Base environment installation
- docker-compose installation
- git installation
- Command completion installation
docker-compose installation
curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
It is recommended to place the binary in /usr/local/bin/, the conventional location for locally installed third-party software, as opposed to /usr/bin/, where the system's own packages live. Remember to make it executable afterwards: chmod +x /usr/local/bin/docker-compose
git installation
yum install git -y
Command completion installation
System command completion
Tab completion depends on the bash-completion package; if it is missing, install it manually:
yum -y install bash-completion
After a successful install, the file /usr/share/bash-completion/bash_completion exists; if it does not, the package is not installed on this system.
docker-compose command completion
Run:
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
If you see curl: (7) Failed connect to raw.githubusercontent.com:443; Connection refused,
DNS resolution for this domain is being interfered with in your region. The workaround is to add the real address of raw.githubusercontent.com to the hosts file for local resolution.
1. Look up the real address at https://www.ipaddress.com/

2. Edit the local hosts file to add the entry:
vim /etc/hosts
Add the local resolution entry:
199.232.28.133 raw.githubusercontent.com
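As a quick sanity check that the entry actually made it into the hosts file, a small parser can scan hosts-style text for a hostname. The helper below is purely illustrative (it is not part of the deployment):

```python
def has_hosts_entry(hosts_text: str, hostname: str) -> bool:
    """Return True if any non-comment line in /etc/hosts-style text maps hostname."""
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0]  # strip inline comments
        fields = line.split()
        # fields[0] is the IP address, the rest are hostnames/aliases
        if len(fields) >= 2 and hostname in fields[1:]:
            return True
    return False

# Usage: check the live file
# with open("/etc/hosts") as f:
#     print(has_hosts_entry(f.read(), "raw.githubusercontent.com"))
```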
Deploying ELK with docker-compose
Cloning the GitHub project
git clone https://github.com/deviantony/docker-elk.git
After cloning completes, the project directory looks as follows (screenshot omitted).
docker-compose configuration
Modify:
elasticsearch/config/elasticsearch.yml
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
node.master: true
http.cors.enabled: true
http.cors.allow-origin: "*"
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
kibana/config/kibana.yml
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch_IP:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
logstash/config/logstash.yml
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch_IP:9200" ]
## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
logstash/pipeline/logstash.conf
- Pitfall: in my testing, when a single logstash.conf declares both tcp and udp inputs and tags them with type, the tcp output incorrectly receives logs from the UDP input as well. I do not know whether this is a bug in ELK or a mistake on my side.
input {
  beats {
    port => 5044
  }
  tcp {
    port => 5000
    type => "tcp"
  }
  udp {
    port => 5140
    type => "udp"
  }
}

## Add your filters / logstash plugins configuration here

output {
  if [type] == "tcp" {
    elasticsearch {
      hosts => "IP:9200"
      user => "xxx"
      password => "xxx"
      ecs_compatibility => disabled
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "udp" {
    elasticsearch {
      hosts => "IP:9200"
      user => "xxx"
      password => "xxx"
      ecs_compatibility => disabled
      index => "udp_syslog-%{+YYYY.MM.dd}"
    }
  }
}
With the configuration above, when TCP data arrives on port 5000 only the syslog index receives it, but when UDP data arrives on port 5140 both the syslog and udp_syslog indices receive it.
Workaround:
Create two separate conf files in the logstash/pipeline/ directory, one for the TCP port and one for the UDP port.
docker-compose up (builds images and creates the docker network)
[[email protected] docker-elk]# docker-compose up -d
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating docker-elk_elasticsearch_1 ... done
Creating docker-elk_logstash_1 ... done
Creating docker-elk_kibana_1 ... done
docker-compose down (removes containers and the docker network)
[[email protected] docker-elk]# docker-compose down
Stopping docker-elk_kibana_1 ... done
Stopping docker-elk_logstash_1 ... done
Stopping docker-elk_elasticsearch_1 ... done
Removing docker-elk_kibana_1 ... done
Removing docker-elk_logstash_1 ... done
Removing docker-elk_elasticsearch_1 ... done
Removing network docker-elk_elk
docker-compose stop (keeps containers and the docker network)
[[email protected] docker-elk]# docker-compose stop
Stopping docker-elk_logstash_1 ... done
Stopping docker-elk_kibana_1 ... done
Stopping docker-elk_elasticsearch_1 ... done
docker-compose start (restarts from the stopped state)
[[email protected] docker-elk]# docker-compose start
Starting elasticsearch ... done
Starting logstash ... done
Starting kibana ... done
up, down, stop, and start can each be followed by a service name to operate on a single service.
Problems that may appear after startup
The "Kibana server is not ready yet" error
This "not ready" message from Kibana is generally caused by an Elasticsearch index problem.
Workarounds:
- Inspecting docker-compose.yml shows that a volume is created at startup and the E, L, and K bind mounts all go through it, so stale cached data in that volume was the suspect. After removing it with docker volume rm and running up again, the problem was resolved.
- The problem resurfaced by accident during further testing. Looking up other solutions, the consensus is that stale Elasticsearch indices are the cause; the following approach also fixes it and is gentler than bluntly deleting the volume:
curl -u elastic:changeme 'localhost:9200/_cat/indices?v'   # note: with the X-Pack plugin enabled, credentials are required
After deleting the Kibana-related indices, the management page loads successfully.
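The index listing and cleanup above can also be scripted against the Elasticsearch REST API. This is a minimal sketch using only the standard library, assuming the stack is reachable on localhost:9200 with the elastic/changeme credentials from the configs above; deleting .kibana* indices is destructive, so list first:

```python
import base64
import urllib.request

def basic_auth(user: str, password: str) -> str:
    """Build an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return "Basic " + token

def es_request(method: str, path: str, es: str = "http://localhost:9200") -> str:
    """Send an authenticated request to Elasticsearch and return the response body."""
    req = urllib.request.Request(
        es + path,
        method=method,
        headers={"Authorization": basic_auth("elastic", "changeme")},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Usage (against a live stack):
#   print(es_request("GET", "/_cat/indices?v"))   # list indices first
#   print(es_request("DELETE", "/.kibana*"))      # destructive: drops Kibana saved objects
```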
Logstash port test
Logstash conf
logstash/pipeline/logstash_tcp.conf (TCP port 5000)
input {
  tcp {
    port => 5000
    type => "tcp"
  }
}

## Add your filters / logstash plugins configuration here

output {
  if [type] == "tcp" {
    elasticsearch {
      hosts => "192.168.6.151:9200"
      user => "elastic"
      password => "changeme"
      ecs_compatibility => disabled
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
logstash/pipeline/logstash_udp.conf (UDP port 5140)
input {
  udp {
    port => 5140
    type => "udp"
  }
}

## Add your filters / logstash plugins configuration here

output {
  if [type] == "udp" {
    elasticsearch {
      hosts => "192.168.6.151:9200"
      user => "elastic"
      password => "changeme"
      ecs_compatibility => disabled
      index => "udp_syslog-%{+YYYY.MM.dd}"
    }
  }
}
docker-compose.yml (note: add the UDP 5140 port mapping to the logstash service)
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "5140:5140/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  #network_mode: bridge   # use the default docker0 bridge instead
  depends_on:
    - elasticsearch
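Once the stack is up, the TCP port mappings above can be probed with a short script (UDP cannot be verified this way, since UDP is connectionless). A minimal sketch, assuming the stack runs on localhost:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage against a running stack (TCP ports from the compose snippet above):
# for p in (5044, 5000, 9600):
#     print(p, tcp_port_open("localhost", p))
```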
With the configuration above in place, start the containers.
- Elastic setup
- After testing TCP port 5000 with telnet, creating an index pattern shows the index generated for port 5000; note that there is no udp 5140 index yet.
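The telnet check of TCP port 5000 can also be scripted. The helper below is illustrative; the host, port, and message are assumptions matching the test setup in this article:

```python
import socket

def send_tcp_log(host: str, port: int, message: str) -> int:
    """Open a TCP connection (like telnet) and send one newline-terminated log line.

    Returns the number of bytes sent."""
    data = (message + "\n").encode()
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(data)
    return len(data)

# Usage against the stack:
# send_tcp_log("192.168.6.151", 5000, "hello logstash tcp")
```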
- Test UDP port 5140 (simulate a syslog message)
A Python test script:
import logging
import logging.handlers  # handlers must be imported separately
logger = logging.getLogger()
fh = logging.handlers.SysLogHandler(('192.168.6.151', 5140), logging.handlers.SysLogHandler.LOG_AUTH)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.warning("msg4")
logger.error("msg4")
- The udp_syslog index now appears in the index list.
- Viewing the actual log entries for tcp 5000 and udp 5140
- The telnet log entry from TCP 5000
- The simulated syslog entry from UDP 5140
Next up: grok parsing and dissect splitting in the logstash.conf filter (collecting and analyzing logs from Huawei switches).