
Installing the ELK stack components (first edition)

OS:centos6.6 64bit

java:jdk-7u79-linux-x64.rpm

E:elasticsearch-1.7.3.tar.gz

L:logstash-1.5.6-1.noarch.rpm

K:kibana-4.1.2-linux-x64.tar.gz

Newer releases of the same stack, for reference:

elasticsearch-2.1.1

kibana-4.3.1

logstash-2.1.1-1

filebeat-1.0.1

https://www.elastic.co/downloads/past-releases/logstash-1-5-6 

https://www.elastic.co/downloads/past-releases/kibana-4-1-4

https://www.elastic.co/downloads/past-releases/elasticsearch-1-7-4
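Elasticsearch and Logstash both need a JDK, so it is worth putting Java in place first. A minimal sketch using the jdk rpm from the list above (if your JDK instead lives under /usr/local/java, the ln -sv step near the end of this post handles the PATH):

# rpm -ivh jdk-7u79-linux-x64.rpm

# java -version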

# rpm -ivh elasticsearch-1.7.4.noarch.rpm

# rpm -ivh logstash-1.5.6-1.noarch.rpm

# tar zxvf kibana-4.1.4-linux-x64.tar.gz

//Kibana is ready to use straight out of the box; we just move it somewhere sensible

# mv kibana-4.1.4-linux-x64/ /usr/local/kibana

# ln -sv /usr/local/kibana/bin/kibana /usr/bin/kibana
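Kibana 4 reads its settings from config/kibana.yml. By default it listens on port 5601 and talks to Elasticsearch on localhost:9200, so on a single host nothing needs changing; the two settings worth knowing about look roughly like this (a minimal sketch, values assumed):

# vi /usr/local/kibana/config/kibana.yml

port: 5601
elasticsearch_url: "http://127.0.0.1:9200"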

# Configure Redis (reference: http://mo0on.blog.51cto.com/10522787/1729618)

# tar zxvf redis-3.0.6.tar.gz

# cd redis-3.0.6

# make PREFIX=/usr/local/redis install

//One thing to note here: if you do not pass a PREFIX, Redis will by default build the bin files inside the directory you extracted the source into

# ln -sv /usr/local/redis/bin/redis-server /usr/bin/redis-server

# ln -sv /usr/local/redis/bin/redis-cli /usr/bin/redis-cli
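With the binaries linked, Redis can be started and checked with redis-cli. A minimal sketch, assuming the redis.conf from the source tree is used and that its requirepass is set to the same password used in indexer.conf below:

# cp redis-3.0.6/redis.conf /usr/local/redis/

# redis-server /usr/local/redis/redis.conf &

# redis-cli -a 8ff35947f8efe8db806622f6a98a1ea3 ping
PONG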

Elasticsearch startup script (installed by the rpm): /usr/share/elasticsearch/bin/elasticsearch

Logstash config directory: /etc/logstash/conf.d/

Logstash configuration

 Shipper->Broker->Indexer->ES

Here the Shipper can be thought of as the collection node: it sends all the messages (logs) produced by the (distributed) business services to the Broker; the Indexer then reads the data from the Broker and pushes it into ES, and from there we can pull whatever we want out of Kibana.
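A quick way to see whether events are actually flowing through the Broker is to look at the Redis list itself (key_count in the shipper/indexer configs below); if the Indexer is keeping up, the list stays short:

# redis-cli -a 8ff35947f8efe8db806622f6a98a1ea3 llen key_count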

A quick note on the file input configuration:

The file plugin only initializes a FileWatch object once, during the registration phase when the process starts; it does not support fluentd-style paths such as path => "/path/to/%{+yyyy/MM/dd/hh}.log".

indexer.conf

input {
    redis {
        host => "127.0.0.1"
        port => 6379
        password => "8ff35947f8efe8db806622f6a98a1ea3"
        type => "redis-input"
        data_type => "list"
        key => "key_count"
    }
}

output {
    stdout {}
    elasticsearch {
        cluster => "elasticsearch"
        codec => "json"
        protocol => "http"
    }
}

shipper.conf

input {
    file {
        path => ["/var/log/*.log", "/var/log/messages"]
        type => "system"
        start_position => "beginning"
    }
}

output {
    stdout {}
    redis {
        # same Redis instance, password and list key as in indexer.conf
        host => "127.0.0.1"
        port => 6379
        password => "8ff35947f8efe8db806622f6a98a1ea3"
        data_type => "list"
        key => "key_count"
    }
}
service logstash start

/usr/bin/kibana &

//If java was installed under /usr/local/java rather than via the rpm, put it on the PATH:

# ln -sv /usr/local/java/bin/java /usr/bin/java

Messages logged at startup (these are informational, not fatal errors):

{"name":"Kibana","hostname":"localhost","pid":18106,"level":30,"msg":"No existing kibana index found","time":"2016-10-14T08:01:33.983Z","v":0}

{"name":"Kibana","hostname":"localhost","pid":18106,"level":30,"msg":"Listening on 0.0.0.0:5601","time":"2016-10-14T08:01:34.002Z","v":0}

http://bbotte.com/logs-service/use-elk-processing-logs-basic-installation-and-config/ 

ES plugins

http://xx:9200/_plugin/kopf/#!/cluster
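kopf is a site plugin; on Elasticsearch 1.x it can be installed with the bundled plugin script (a sketch; pinning a specific plugin version may be preferable):

# /usr/share/elasticsearch/bin/plugin --install lmenezes/elasticsearch-kopf

After that it is reachable in a browser at the _plugin/kopf URL above.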

Startup

nohup /usr/bin/kibana &

logstash -f /etc/logstash/conf.d/logstash-indexer.conf    (or: service logstash start)

nohup /usr/share/elasticsearch/bin/elasticsearch &
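Once all three are running, a quick sanity check from the shell confirms that ES answers and the cluster is at least yellow:

# curl http://127.0.0.1:9200/

# curl 'http://127.0.0.1:9200/_cluster/health?pretty'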

This post is reproduced from liqius's 51CTO blog; original: http://mo0on.blog.51cto.com/10522787/1729263. Please contact the original author for reprint permission.