
Threat hunting: ELK-based log monitoring

Author: Hetian Cyber Security Laboratory

#0x0 Overview

The ELK Stack, now officially known as the Elastic Stack, is a portfolio of free and open-source software developed by Elastic for centralized log management. It allows you to search, analyze, and visualize logs from different sources.

To install and configure ELK Stack on ubuntu, you need the following prerequisites:

  • Ubuntu 20.04
  • It is best to configure it with root privileges

#0x1 Content catalog

  • ELK Stack components
  • Install Java and all dependencies
  • Install and configure Elasticsearch
  • Install and configure Logstash
  • Install and configure Kibana
  • Install and configure Nginx
  • Install and configure Filebeat
  • Configure Linux logs to Elasticsearch
  • Create a log dashboard in Kibana
  • Monitor SSH events

#0x2 ELK Stack components

1. Elasticsearch: Elasticsearch is an open-source search engine based on Apache Lucene(TM), which uses RESTful APIs to store and retrieve data.

2. Logstash: Logstash is an open-source data collection engine that can collect data from different data sources and send it to Elasticsearch

3. Kibana: A web visualization platform for analyzing and visualizing logs

4. Filebeat: A lightweight log collection and forwarder that can forward data collection to Logstash or Elasticsearch
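
Putting these components together, the pipeline built in this guide flows as follows:

```
rsyslog / Filebeat  ->  Logstash (receive, filter)  ->  Elasticsearch (store, index)  ->  Kibana (search, visualize)
```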


#0x3 Install Java and all dependencies

Elasticsearch is written in Java, so the JDK is required. You can install OpenJDK and the other required packages with the following command:

sudo apt install -y openjdk-14-jdk wget apt-transport-https curl
           

Then import Elasticsearch's public GPG key and add the apt repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
           

Add the Elastic package repository

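A typical command for this step (this assumes the 7.x package line, matching the Filebeat 7.10 documentation referenced later; adjust the version to your needs) is:

```shell
# Add the Elastic 7.x apt repository (assumed version line)
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
```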

#0x4 Install and configure Elasticsearch

Update the software repository

sudo apt update
           

Then install it (installation from within China can be slow; please be patient)

sudo apt-get install elasticsearch
           

After the installation is complete, configure Elasticsearch.

By default, Elasticsearch listens on port 9200. For security, restrict access from external networks so that outside hosts cannot reach your data or the cluster via the REST API. This is done in Elasticsearch's configuration file, elasticsearch.yml.

Open the configuration file

sudo gedit  /etc/elasticsearch/elasticsearch.yml
           

Find the network host and HTTP port settings, remove the leading comment symbol #, and modify them.
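A minimal sketch of the relevant elasticsearch.yml settings, assuming Elasticsearch should only be reachable from the local machine:

```yaml
# Bind to the loopback interface only so external hosts cannot reach the REST API
network.host: localhost
http.port: 9200
```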

Save it, and then start the Elasticsearch service

sudo systemctl start elasticsearch
           

Check the service status and verify that it has started

sudo systemctl status elasticsearch
           
curl -X GET localhost:9200
           

If the command returns a JSON response with the node name and version details, Elasticsearch has started successfully.

You can also open http://localhost:9200 in your browser to view the same output


#0x5 Install and configure Logstash

First, make sure that openssl is installed on the system, and then install Logstash

openssl version -a
sudo apt install logstash -y
           

Create an SSL certificate to ensure the security of Rsyslog and Filebeat data transmission to Logstash.

Create an SSL directory in the configuration file directory of Logstash and generate a certificate

sudo mkdir -p /etc/logstash/ssl
cd /etc/logstash
sudo openssl req -subj '/CN=elkmaster/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout ssl/logstash-forwarder.key -out ssl/logstash-forwarder.crt
           

To simplify the subsequent configuration, we can edit the /etc/hosts file and map the host's IP address to a hostname.

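For example (the IP address here is a placeholder; use your server's actual address, and note the hostname matches the CN used in the certificate above):

```
192.168.1.10    elkmaster
```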

Then we need to configure three files, which are filebeat-input.conf for receiving data from filebeat, syslog-filter.conf for filtering syslogs, and output-elasticsearch.conf for outputting data to elasticsearch.

Create a filebeat-input.conf file in the logstash configuration directory

cd /etc/logstash/
sudo gedit conf.d/filebeat-input.conf
           

Add the following:

input {
  beats {
    port => 5443
    type => syslog
    ssl => true
    ssl_certificate => "/etc/logstash/ssl/logstash-forwarder.crt"
    ssl_key => "/etc/logstash/ssl/logstash-forwarder.key"
  }
}
           

Then create the filter configuration file syslog-filter.conf, which uses the grok filter to let Logstash extract structured fields from the log data according to the given patterns.

sudo gedit conf.d/syslog-filter.conf
           

Enter the following:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
           

Then create an output-elasticsearch.conf configuration file to transfer data to Elasticsearch.

sudo gedit conf.d/output-elasticsearch.conf
           

It reads as follows:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
           

After the configuration file is ready, start the logstash service to see if it is normal.

sudo systemctl start logstash
sudo systemctl status logstash
           

If no error is reported, the service starts normally.

#0x6 Install and configure Kibana

Installing Kibana can also be done via apt

sudo apt install kibana
           

Once the installation is complete, let's set up the Kibana configuration file

sudo gedit /etc/kibana/kibana.yml
           

You can modify the listening port and address, as well as the Elasticsearch address

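A minimal sketch of the relevant kibana.yml settings, assuming Kibana listens locally on its default port and talks to the local Elasticsearch:

```yaml
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
```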

Save and then launch the Kibana service

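The service is started the same way as the others:

```shell
sudo systemctl start kibana
```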

Then you can access Kibana directly in your browser (by default at http://localhost:5601)


#0x7 Install and configure Nginx

Nginx is installed here mainly to act as a reverse proxy for Kibana.

First install Nginx and apache2-utils

sudo apt install nginx apache2-utils -y
           

Once the installation is complete, create a Kibana virtual host configuration file

sudo gedit /etc/nginx/sites-available/kibana
           

It reads as follows:

server {
    listen 80;
    server_name localhost;
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.kibana-user;
    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
           

Create a connection to the configuration file

sudo ln -s /etc/nginx/sites-available/kibana /etc/nginx/sites-enabled/
           

Then, configure a basic authentication for accessing the Kibana Dashboard

sudo htpasswd -c /etc/nginx/.kibana-user elastic
           

Then test the Nginx configuration file and start the service

sudo nginx -t
sudo systemctl restart nginx
           

#0x8 Install and configure Filebeat

Download filebeat and then install it

Download link: https://www.elastic.co/cn/downloads/beats/filebeat


You can download it according to your needs

Since we are installing on Ubuntu here, choose the DEB package. Alternatively, you can install directly with apt, provided that you added the Elastic repository earlier. See the official guide for adding the repository: https://www.elastic.co/guide/en/beats/filebeat/7.10/setup-repositories.html#_apt

sudo apt install filebeat -y
           

Then edit the configuration of the filebeat, the path of the configuration file:

/etc/filebeat/filebeat.yml
           

First, enable the log input by setting enabled: true in the filebeat.inputs section

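A sketch of the filebeat.yml inputs section, assuming you want to ship the standard Ubuntu system log files (adjust the paths to your needs):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/auth.log
```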

Then modify the Elasticsearch output section


Modify the configuration as follows (set it according to your actual situation):

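Since this guide ships events through Logstash's SSL-enabled beats input on port 5443, a plausible sketch is to comment out the Elasticsearch output and enable the Logstash output instead (the hostname elkmaster comes from the /etc/hosts entry made earlier; adapt it to your setup):

```yaml
# Disable direct output to Elasticsearch
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# Send events to the SSL-enabled Logstash beats input instead
output.logstash:
  hosts: ["elkmaster:5443"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash-forwarder.crt"]
```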

Modify the Kibana configuration section:

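A sketch of the setup.kibana section, assuming Kibana is running on its default local port:

```yaml
setup.kibana:
  host: "localhost:5601"
```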

Save it when you're done modifying.

Then initialize Filebeat

sudo filebeat setup
           

Copy the logstash-forwarder.crt certificate to the /etc/filebeat directory

sudo cp /etc/logstash/ssl/logstash-forwarder.crt /etc/filebeat/
           

Then start the filebeat service

sudo systemctl start filebeat
           

#0x9 Configure Linux logs to Elasticsearch

Once rsyslog is configured to send logs to Logstash, they will be transferred on to Elasticsearch automatically.

Before configuring rsyslog, we need to set up the log forwarding between Logstash and Elasticsearch. Create a configuration file in the /etc/logstash/conf.d directory for this.

cd /etc/logstash/conf.d/
sudo gedit logstash.conf
           

The contents of the configuration file are as follows:

input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

# The filter pipeline stays empty here; no formatting is done.
filter { }

# Every log will be forwarded to Elasticsearch. If you are using another port, specify it here.
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}

The configuration file is composed of three parts: input (where the logs come from), filter (how the logs are processed), and output (where the logs are sent).

Then let's restart the logstash service

sudo systemctl restart logstash
           

Then configure log forwarding from rsyslog to Logstash; rsyslog can use a template to reformat the logs before forwarding them.

To have rsyslog forward logs, create a 70-output.conf configuration file in the /etc/rsyslog.d directory.

cd /etc/rsyslog.d/
sudo gedit 70-output.conf
           

Add the following:

*.*                         @127.0.0.1:10514;json-template
           

This means all logs are sent to 127.0.0.1:10514 and converted to JSON using the json-template template

Next, we need to create the template file that formats each log record as JSON

sudo gedit 01-json-template.conf
           

It reads as follows:

template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
           

Then restart the rsyslog service so the new configuration takes effect

sudo systemctl restart rsyslog
           

Check whether the logstash listening port is normal.

ss -na | grep 10514
           

If the listener fails to come up and an error is reported in the Logstash logs, the cause is usually a syntax error in a configuration file; the ELK components have strict syntax requirements for their configuration files, so check them carefully.

#0x10 Create a log dashboard in Kibana

Open the Kibana interface in your browser

The first thing you need to do is create an index pattern

Then go to Stack Management, and find Index Patterns under the Kibana section


Then click Create index pattern


Type logstash-* and click Next step


Then, for the time filter, select @timestamp


Then click Create index pattern


After the addition is successful, it looks like this:


Go back to Kibana's Discover page, where you can search your data


#0x11 Monitor SSH events

In the filter bar, set the filter to programname:sshd*


This will allow you to see the SSHD program-related events.
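From there you can narrow things down further. For example, the following KQL queries (the programname field comes from the json-template defined earlier; the exact message text depends on your sshd version) are a reasonable starting point for spotting failed and successful logins:

```
programname:sshd* and message:*Failed password*
programname:sshd* and message:*Accepted*
```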

#0x12 More references

Configure SSL, TLS, and HTTPS to secure Elasticsearch, Kibana, Beats, and Logstash Elastic Blog https://www.elastic.co/cn/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash

How to use the Elastic Stack to monitor Nginx web server | Elastic Blog https://www.elastic.co/cn/blog/how-to-monitor-nginx-web-servers-with-the-elastic-stack

This article was originally written by the Hetian Cyber Security Laboratory; please indicate the source when reprinting.

#0x13 About the Hetian Cybersecurity Laboratory

Hetian Cyber Security Lab www.hetianlab.com - a leading practical online network security education platform in China

Real environments and hands-on online labs for learning cyber security. The content covers system security, software security, network security, web security, mobile security, CTF, forensic analysis, penetration testing, security awareness education, and more.