
Start to use ELK

  • Preparations
  • Install ES
  • Install Kibana
  • Install Logstash
  • Configure Spring Boot project

ELK = Elasticsearch + Logstash + Kibana

This note is NOT going to use Filebeat, but maybe a later note will.

Preparations

Docker installed (if not, just follow the official install commands; quick and easy)

JDK 8 installed

your CentOS machine should have at least 2 GB of free memory (a quick sanity check is shown below)
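A quick way to sanity-check all three (exact output will vary with your setup):

# ==== check preparations ====
docker -v       # prints the installed Docker version
java -version   # should report a 1.8.x JDK
free -h         # the free/available column should show at least 2G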

Install ES

This is the same as in the previous note; if you have already installed ES or read that note, please skip this step.

# ==== es ====
docker pull elasticsearch:6.5.4
docker create --name my_elasticsearch --net host -e "discovery.type=single-node" -e "network.host=172.17.0.xx" elasticsearch:6.5.4
docker start my_elasticsearch
docker logs my_elasticsearch
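To confirm the node is up before moving on, a quick check against port 9200 (using the network.host address from the create command; adjust to your host):

# ==== verify es ====
curl http://172.17.0.xx:9200                 # expect JSON with cluster_name and version
curl http://172.17.0.xx:9200/_cat/health?v   # status should be green or yellow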
           

Install Kibana

# ==== kibana ====
docker pull kibana:6.5.4
docker run --name my_kibana -e ELASTICSEARCH_URL=http://129.211.136.xx:9200 -p 5601:5601 -d kibana:6.5.4
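Kibana takes a moment to come up. One way to watch it, plus a status probe (assuming the same host and port as in the run command; /api/status is Kibana's built-in status endpoint):

# ==== verify kibana ====
docker logs -f my_kibana                       # wait for the server-ready log line
curl -s http://129.211.136.xx:5601/api/status  # returns JSON once Kibana is up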
           

Install Logstash

# ==== logstash ====
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
tar -zxvf logstash-6.5.4.tar.gz
cd logstash-6.5.4
bin/logstash -e 'input{stdin{}}output{stdout{codec=>rubydebug}}'
# test by typing hello world and waiting for the rubydebug output...
cp config/logstash-sample.conf config/test.conf
# you will need to edit config/test.conf
nohup bin/logstash -f config/test.conf &
# ==== edit test.conf ====
# port 8888 receives the logs; you can change it, just make sure it matches the port defined in the Spring Boot project

input {
  tcp {
    port => 8888
    mode => "server"
    ssl_enable => false
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["http://129.211.136.xx:9200"]
    index => "%{logenv}-%{appname}"
    manage_template => false

    #index => "test_index"
    #document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
}
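Before touching Spring Boot, you can push one hand-written event through the TCP input. A minimal sketch, assuming nc (netcat) is available and Logstash runs locally; with the plain json codec the event may only flush once nc closes the connection:

# ==== test the tcp input by hand ====
echo '{"logenv":"test","appname":"manual-check","message":"hello logstash"}' | nc 127.0.0.1 8888
# logenv/appname feed the %{logenv}-%{appname} index pattern, so a test-manual-check index should appear:
curl 'http://129.211.136.xx:9200/_cat/indices?v'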

           

Configure Spring Boot project

important:

Logstash must be up BEFORE the Spring Boot apps. So do NOT restart it, or you will need to restart ALL of your Spring Boot projects.

I have heard there is a way of solving this; you can check this article

  1. logback-spring.xml

    Add

    logback-spring.xml

    under src/main/resources of the module that contains your pom.xml (Spring Boot picks it up from the classpath). Its content could be as below.
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <!-- note: base.xml already registers Spring Boot's default console and file appenders -->
    <!-- log to standard output -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <!-- the default encoder is PatternLayoutEncoder -->
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
            </pattern>
        </encoder>
    </appender>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>129.211.136.xx:8888</destination>
        <!-- an encoder must be configured; LogstashEncoder sends events as JSON -->
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- logenv and appname feed the %{logenv}-%{appname} index name in test.conf; every field here is added to each log doc -->
            <customFields>{"logenv":"test","service":"lff-store-provider-mission","appname":"lff-store-logs"}</customFields>
        </encoder>
    </appender>
    <!-- roll the log file daily -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">

        <file>/opt/devlop/logs/lff-store-provider-mission.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- log output file name -->
            <FileNamePattern>/opt/devlop/logs/lff-store-provider-mission.%d{yyyy-MM-dd}.log</FileNamePattern>
            <MaxHistory>30</MaxHistory>
        </rollingPolicy>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <!-- format: %d is the date, %thread the thread name, %-5level pads the level to 5 characters, %msg the log message, %n a newline -->
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n
            </pattern>
        </layout>
        <!-- max size of the log file -->
        <!-- <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy"> <MaxFileSize>10MB</MaxFileSize> </triggeringPolicy> -->
    </appender>

    <!-- define the level of one package or class, and whether it also hands logs up to root -->
    <!-- additivity: whether to pass the log up to the parent logger; default is true -->
    <!-- a <logger> may hold zero or more <appender-ref> elements; each referenced appender is attached to this logger -->
    <!-- name: the class or package this logger applies to -->
    <!-- level: the log level, case-insensitive: TRACE, DEBUG, INFO, WARN, ERROR, ALL, OFF, plus the special INHERITED (or its synonym NULL), which forces the level to be inherited from the parent. If level is not set, the logger inherits it from its parent anyway. -->
    <logger name="jdbc.sqltiming" level="DEBUG" />
    <logger name="com.ibatis" level="DEBUG" />
    <logger name="com.ibatis.common.jdbc.SimpleDataSource" level="DEBUG" />
    <logger name="com.ibatis.common.jdbc.ScriptRunner" level="DEBUG" />
    <logger name="com.ibatis.sqlmap.engine.impl.SqlMapClientDelegate" level="DEBUG" />
    <logger name="java.sql.Connection" level="DEBUG" />
    <logger name="java.sql.Statement" level="DEBUG" />
    <logger name="java.sql.PreparedStatement" level="DEBUG" />
    <!-- <root> is also a <logger> element, but it is the root logger; it takes only a level attribute, because its name is already "root" -->
    <!-- use a single <root>: with multiple <root> elements logback accumulates the appender-refs and the last level wins -->
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="STDOUT" />
        <appender-ref ref="FILE" />
    </root>
</configuration>
           
  2. add dependency

    Add

    logstash-logback-encoder

    to your pom.xml.
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- previously 4.11 -->
    <version>5.3</version>
</dependency>

           

You should now be able to create index patterns and check the logs in Kibana… (a quick command-line check is sketched below)
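If Kibana is not handy yet, the same can be confirmed from the shell; the index name test-lff-store-logs follows from the logenv and appname custom fields above:

# ==== check the index from the shell ====
curl 'http://129.211.136.xx:9200/_cat/indices?v'                             # the new index should be listed
curl 'http://129.211.136.xx:9200/test-lff-store-logs/_search?size=1&pretty'  # one sample doc with the custom fields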

I know there should be more explanations, maybe later… Good luck!
