
ELK Logstash Output Plugin: Elasticsearch

Output plugins (Output)

Output stage: push the processed log events to remote storage.

Plugins:

  • file
  • Elasticsearch

Elasticsearch

If you plan to use the Kibana web interface to analyze data transformed by Logstash, use the Elasticsearch output plugin to get your data into Elasticsearch.

Writing to different indices: best practices

You cannot use dynamic variable substitution when `ilm_enabled` is `true` and when using `ilm_rollover_alias`.

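As an illustration of that note, the sketch below pins the rollover alias to a static string; the host, alias name, and pattern are assumptions for the example, not values from the original.

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    ilm_enabled => true
    # The rollover alias must be a fixed string; "%{field}" substitution is not supported here.
    ilm_rollover_alias => "my-logs"
    ilm_pattern => "000001"
  }
}
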
If you’re sending events to the same Elasticsearch cluster, but you’re targeting different indices you can:

  • use different Elasticsearch outputs, each one with a different value for the `index` parameter (see the sketch after this list)
  • use one Elasticsearch output and use the dynamic variable substitution for the `index` parameter

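For the first approach, a minimal sketch, assuming a `log_type` field and illustrative host and index names that are not from the original:

output {
  if [log_type] == "audit" {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      index => "audit-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://127.0.0.1:9200"]
      index => "app-%{+YYYY.MM.dd}"
    }
  }
}
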
Each Elasticsearch output is a new client connected to the cluster:

  • it has to initialize the client and connect to Elasticsearch (restart time is longer if you have more clients)
  • it has an associated connection pool

In order to minimize the number of open connections to Elasticsearch, maximize the bulk size and reduce the number of "small" bulk requests (which could easily fill up the queue), it is usually more efficient to have a single Elasticsearch output.

Example:

output {
  elasticsearch {
    index => "%{[some_field][sub_field]}-%{+YYYY.MM.dd}"
  }
}

What to do in case there is no field in the event containing the destination index prefix?

You can use the `mutate` filter and conditionals to add a `[@metadata]` field (see https://www.elastic.co/guide/en/logstash/7.9/event-dependent-configuration.html#metadata) to set the destination index for each event. The `[@metadata]` fields will not be sent to Elasticsearch.

Example:

filter {
  if [log_type] in ["test", "staging"] {
    mutate { add_field => { "[@metadata][target_index]" => "test-%{+YYYY.MM}" } }
  } else if [log_type] == "production" {
    mutate { add_field => { "[@metadata][target_index]" => "prod-%{+YYYY.MM.dd}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "unknown-%{+YYYY}" } }
  }
}

output {
  elasticsearch {
    index => "%{[@metadata][target_index]}"
  }
}

`hosts`

  • Value type is `uri`
  • Default value is `[//127.0.0.1]`

Sets the host(s) of the remote instance. If given an array it will load balance requests across the hosts specified in the `hosts` parameter. Remember the `http` protocol uses the http address (eg. 9200, not 9300).

Examples:

`"127.0.0.1"`
`["127.0.0.1:9200","127.0.0.2:9200"]`
`["http://127.0.0.1"]`
`["https://127.0.0.1:9200"]`
`["https://127.0.0.1:9200/mypath"]` (If using a proxy on a subpath)      
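
As a sketch of the array form, the configuration below spreads bulk requests across two nodes; the addresses are illustrative:

output {
  elasticsearch {
    # Requests are load balanced across the listed HTTP hosts (port 9200, not the 9300 transport port).
    hosts => ["http://127.0.0.1:9200", "http://127.0.0.2:9200"]
  }
}
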
`index`

  • Value type is `string`
  • Default value depends on whether `ecs_compatibility` is enabled:
      • ECS Compatibility disabled: `"logstash-%{+yyyy.MM.dd}"`
      • ECS Compatibility enabled: `"ecs-logstash-%{+yyyy.MM.dd}"`

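A minimal sketch of how the default interacts with an explicit setting; the host and index name are illustrative. Leaving `index` unset writes to `logstash-%{+yyyy.MM.dd}` (or `ecs-logstash-%{+yyyy.MM.dd}` with ECS compatibility enabled):

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    # Setting index explicitly overrides the ecs_compatibility-dependent default.
    index => "myapp-%{+yyyy.MM.dd}"
  }
}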