[Notes] Logstash conf configuration files: startup, conf template, examples, the input section, common input options, the file/stdin/redis inputs, and an experiment

Version

logstash-7.13.1

官网参考

https://www.elastic.co/guide/en/logstash/current/config-examples.html

https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html

https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html#conditionals

https://www.elastic.co/guide/en/logstash/current/input-plugins.html


Starting Logstash with a conf file

bin/logstash -f logstash-simple.conf
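Two related startup options are worth knowing (standard Logstash command-line flags; logstash-simple.conf is the file from the command above):

```
# Validate the configuration file and exit without starting the pipeline
bin/logstash -f logstash-simple.conf --config.test_and_exit

# Start normally, and reload the pipeline automatically whenever the file changes
bin/logstash -f logstash-simple.conf --config.reload.automatic
```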
           

conf template

# This is a comment. You should use comments to describe
# parts of your configuration.
input {
  ...
}

filter {
  ...
}

output {
  ...
}
           

Example 1:

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }

  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}
           

Example 2

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
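To exercise Example 2 end to end, pipe a sample Apache combined-format line into Logstash on stdin (the log line below is fabricated, and the elasticsearch output assumes a cluster at localhost:9200):

```
echo '127.0.0.1 - - [11/Dec/2021:10:15:30 +0800] "GET /index.html HTTP/1.1" 200 1024 "-" "curl/7.68.0"' \
  | bin/logstash -f logstash-simple.conf
```

Because the input is stdin, Logstash processes the line and shuts down when it reaches end-of-file.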
           

Example 3

input {
  file {
    path => "/tmp/access_log"
    start_position => "beginning"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
           

Example 4

input {
  file {
    path => "/tmp/*_log"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
           

Introduction to input

The input section can pull data from many different sources:

An input plugin enables a specific source of events to be read by Logstash.

azure_event_hubs Receives events from Azure Event Hubs azure_event_hubs
beats Receives events from the Elastic Beats framework logstash-input-beats
cloudwatch Pulls events from the Amazon Web Services CloudWatch API logstash-input-cloudwatch
couchdb_changes Streams events from CouchDB's _changes URI logstash-input-couchdb_changes
dead_letter_queue Reads events from Logstash's dead letter queue logstash-input-dead_letter_queue
elasticsearch Reads query results from an Elasticsearch cluster logstash-input-elasticsearch
exec Captures the output of a shell command as an event logstash-input-exec
file Streams events from files logstash-input-file
ganglia Reads Ganglia packets over UDP logstash-input-ganglia
gelf Reads GELF-format messages from Graylog2 as events logstash-input-gelf
generator Generates random log events for test purposes logstash-input-generator
github Reads events from a GitHub webhook logstash-input-github
google_cloud_storage Extract events from files in a Google Cloud Storage bucket logstash-input-google_cloud_storage
google_pubsub Consume events from a Google Cloud PubSub service logstash-input-google_pubsub
graphite Reads metrics from the graphite tool logstash-input-graphite
heartbeat Generates heartbeat events for testing logstash-input-heartbeat
http Receives events over HTTP or HTTPS logstash-input-http
http_poller Decodes the output of an HTTP API into events logstash-input-http_poller
imap Reads mail from an IMAP server logstash-input-imap
irc Reads events from an IRC server logstash-input-irc
java_generator Generates synthetic log events core plugin
java_stdin Reads events from standard input core plugin
jdbc Creates events from JDBC data logstash-integration-jdbc
jms Reads events from a Jms Broker logstash-input-jms
jmx Retrieves metrics from remote Java applications over JMX logstash-input-jmx
kafka Reads events from a Kafka topic logstash-integration-kafka
kinesis Receives events through an AWS Kinesis stream logstash-input-kinesis
log4j Reads events over a TCP socket from a Log4j SocketAppender object logstash-input-log4j
lumberjack Receives events using the Lumberjack protocol logstash-input-lumberjack
meetup Captures the output of command line tools as an event logstash-input-meetup
pipe Streams events from a long-running command pipe logstash-input-pipe
puppet_facter Receives facts from a Puppet server logstash-input-puppet_facter
rabbitmq Pulls events from a RabbitMQ exchange logstash-integration-rabbitmq
redis Reads events from a Redis instance logstash-input-redis
relp Receives RELP events over a TCP socket logstash-input-relp
rss Captures the output of command line tools as an event logstash-input-rss
s3 Streams events from files in a S3 bucket logstash-input-s3
s3-sns-sqs Reads logs from AWS S3 buckets using sqs logstash-input-s3-sns-sqs
salesforce Creates events based on a Salesforce SOQL query logstash-input-salesforce
snmp Polls network devices using Simple Network Management Protocol (SNMP) logstash-input-snmp
snmptrap Creates events based on SNMP trap messages logstash-input-snmptrap
sqlite Creates events based on rows in an SQLite database logstash-input-sqlite
sqs Pulls events from an Amazon Web Services Simple Queue Service queue logstash-input-sqs
stdin Reads events from standard input logstash-input-stdin
stomp Creates events received with the STOMP protocol logstash-input-stomp
syslog Reads syslog messages as events logstash-input-syslog
tcp Reads events from a TCP socket logstash-input-tcp
twitter Reads events from the Twitter Streaming API logstash-input-twitter
udp Reads events over UDP logstash-input-udp
unix Reads events over a UNIX socket logstash-input-unix
varnishlog Reads from the varnish cache shared memory log logstash-input-varnishlog
websocket Reads events from a websocket logstash-input-websocket
wmi Creates events based on the results of a WMI query logstash-input-wmi
xmpp Receives events over the XMPP/Jabber protocol logstash-input-xmpp

Common input options

The following configuration options are supported by all input plugins:

Setting         Input type   Required
add_field       hash         No
codec           codec        No
enable_metric   boolean      No
id              string       No
tags            array        No
type            string       No

Details

add_field

  • Value type is hash
  • Default value is {}

Add a field to an event

codec

  • Value type is codec
  • Default value is "json"

The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 redis inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

input {
  redis {
    id => "my_plugin_id"
  }
}
           

Variable substitution in the id field only supports environment variables and does not support the use of values from the secret store.

tags

  • Value type is array
  • There is no default value for this setting.

Add any number of arbitrary tags to your event.

This can help with processing later.
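A tag set on the input can then be tested in a conditional later in the pipeline (a minimal sketch; the tag and field names are made up):

```
input {
  stdin {
    tags => ["dev", "raw"]
  }
}

filter {
  if "dev" in [tags] {
    mutate { add_field => { "environment" => "development" } }
  }
}
```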

type

  • Value type is string
  • There is no default value for this setting.

Add a type field to all events handled by this input.

Types are used mainly for filter activation.

The file input

file Streams events from files logstash-input-file

The stdin input

stdin Reads events from standard input logstash-input-stdin

The redis input

redis Reads events from a Redis instance logstash-input-redis

Its options are as follows:

Setting       Input type                                             Required
batch_count   number                                                 No
command_map   hash                                                   No
data_type     string, one of ["list", "channel", "pattern_channel"]  Yes
db            number                                                 No
host          string                                                 No
path          string                                                 No
key           string                                                 Yes
password      password                                               No
port          number                                                 No
ssl           boolean                                                No
threads       number                                                 No
timeout       number                                                 No
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-redis.html
           

data_type

  • This is a required setting.
  • Value can be any of: list, channel, pattern_channel
  • There is no default value for this setting.

Specify either list or channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key.
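In list mode you can feed Logstash test events directly with redis-cli, since Logstash simply BLPOPs the key; the key name logstash_service matches the experiment below, and the JSON body assumes the redis input's default json codec:

```
# Push a JSON event onto the list that Logstash BLPOPs
redis-cli RPUSH logstash_service '{"message":"hello from redis"}'

# In channel mode, publish instead (Logstash SUBSCRIBEs to the key)
redis-cli PUBLISH logstash_service '{"message":"hello via pub/sub"}'
```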

db

  • Value type is number
  • Default value is 0

The Redis database number.

host

  • Value type is string
  • Default value is "127.0.0.1"

The hostname of your Redis server.

path

  • Value type is string
  • There is no default value for this setting.
  • Path will override Host configuration if both specified.

The unix socket path of your Redis server.

key

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

The name of a Redis list or channel.

password

  • Value type is password
  • There is no default value for this setting.

Password to authenticate with. There is no authentication by default.

port

  • Value type is number
  • Default value is 6379

The port to connect on.

Experiment

Machine 1: redis + springboot + logstash (192.168.1.100)

The input side is configured as follows:

input {
  file {
    path => "/home/lxp/logs/*.log"
    codec => multiline {
      pattern => "^(\[%{TIMESTAMP_ISO8601}\])"
      negate => true
      what => "previous"
    }
    type => "springboot"
    start_position => "beginning"
    # "NUL" disables sincedb tracking on Windows; on Linux use "/dev/null" instead
    sincedb_path => "NUL"
  }
}
 
filter {
}

output {
  if [type] == "springboot" {
    redis {
      data_type => "list"
      host => "192.168.1.110"
      db => "0"
      port => "6379"
      key => "logstash_service"
    }

    stdout {
      codec => rubydebug
    }
  }
}
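With machine 1 running, you can check from the redis side that events are actually being queued (host and key taken from the output block above):

```
# Number of events waiting in the list; it drains once machine 2 starts consuming
redis-cli -h 192.168.1.110 LLEN logstash_service

# Peek at the oldest queued event without removing it
redis-cli -h 192.168.1.110 LRANGE logstash_service 0 0
```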
           

Machine 2: Logstash + kibana + es

input {
  redis {
    key => "logstash_service"
    host => "192.168.1.110"
    port => 6379
    db => "0"
    data_type => "list"
    type => "springboot"
  }
}

output {
  if [type] == "springboot" {
    elasticsearch {
      hosts => ["192.168.1.101:9200"]
      index => "spring-%{+YYYY.MM.dd}"
    }
  }
  stdout {
    codec => rubydebug
  }
}
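Once both machines are up, you can confirm that the daily index is being created and filled (host taken from the elasticsearch output above; the index name follows the spring-%{+YYYY.MM.dd} pattern):

```
# List the spring-* indices with document counts
curl -s "http://192.168.1.101:9200/_cat/indices/spring-*?v"

# Fetch a few indexed log events
curl -s "http://192.168.1.101:9200/spring-*/_search?size=3&pretty"
```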
           
