
Checking Linux Logs with Python Scripts

Author: Not Bald Programmer

Overview

Log management is a critical task in modern IT infrastructure. In the Linux environment in particular, thanks to its openness and flexibility, log files contain a wealth of information about the system and the applications running on it. This information is the basis on which system administrators understand the system's operating status, diagnose problems, optimize performance, and ensure security. However, because log files are numerous and updated frequently, manual log management is inefficient and error-prone. Automated log inspection scripts therefore play an important role in the Linux environment.

First, log inspection scripts improve efficiency. By writing scripts, we can automate tedious tasks such as collecting, analyzing, and archiving log files, greatly reducing the system administrator's workload. Scripts can also run at regular intervals, or automatically when specific events occur, ensuring that log management stays timely.

Second, log inspection scripts improve the accuracy of problem diagnosis. Based on predefined rules, scripts can automatically detect errors and anomalies in log files, avoiding the missed errors and false positives that manual inspection is prone to. Beyond that, by collecting statistics on the log data, scripts can help us find the system's performance bottlenecks and optimize its configuration.
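As a small sketch of this kind of rule-based detection, the snippet below counts how often each predefined pattern appears in a batch of log lines. The pattern names and sample lines are invented for illustration; in practice the patterns would come from a configuration file, as in the full script later in this article.

```python
import re
from collections import Counter

# Hypothetical error patterns; in practice these would be loaded from a config file
patterns = {
    "error": re.compile(r"\bERROR\b"),
    "oom": re.compile(r"Out of memory"),
    "auth_failure": re.compile(r"authentication failure"),
}

def count_matches(lines):
    """Count how many lines match each named pattern."""
    counts = Counter()
    for line in lines:
        for name, pattern in patterns.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

sample = [
    "Jan 01 00:00:01 host kernel: Out of memory: Kill process 1234",
    "Jan 01 00:00:02 host sshd[99]: authentication failure; user=root",
    "Jan 01 00:00:03 host app: ERROR failed to open /var/data",
]
print(count_matches(sample))
# → Counter({'oom': 1, 'auth_failure': 1, 'error': 1})
```

Summaries like this make it easy to spot which error category dominates before digging into individual lines.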

Finally, log inspection scripts help improve system security. By analyzing log files, scripts can detect security threats to the system in a timely manner, such as unauthorized access, malware activity, and more. Once these threats are discovered, scripts can automatically send alerts and even execute predefined response strategies to prevent threats from causing more damage to the system.

Example

import os
import glob
import logging
import smtplib
import configparser
import re
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from concurrent.futures import ThreadPoolExecutor

# Configure logging so detected errors are recorded to a file
logging.basicConfig(filename='log_check.log', level=logging.ERROR,
                    format='%(asctime)s %(levelname)s %(message)s')

# Read the configuration file
config = configparser.ConfigParser()
config.read('config.ini')

# Location of the log files to inspect
log_dir = config.get('DEFAULT', 'log_dir')

# Error patterns to check for (comma-separated regular expressions)
error_patterns = [re.compile(pattern) for pattern in config.get('DEFAULT', 'error_patterns').split(',')]

# Email (SMTP) server settings
smtp_server = config.get('SMTP', 'server')
smtp_port = config.getint('SMTP', 'port')
smtp_username = config.get('SMTP', 'username')
smtp_password = config.get('SMTP', 'password')

# Email recipient
email_recipient = config.get('SMTP', 'recipient')


def check_log(file):
    # Read the log file line by line to avoid loading it all into memory
    with open(file, 'r', errors='replace') as f:
        for line in f:
            # Check whether this line matches any error pattern
            for pattern in error_patterns:
                if pattern.search(line):
                    # On a match, record the error and send an email alert
                    logging.error(f"Error found in file {file}: {line.strip()}")
                    send_email(f"Error found in file {file}: {line.strip()}")


def check_logs():
    # Collect all .log files in the log directory
    log_files = glob.glob(os.path.join(log_dir, "*.log"))

    # Use a thread pool to check multiple files concurrently
    with ThreadPoolExecutor() as executor:
        executor.map(check_log, log_files)


def send_email(message):
    # Build the email message
    msg = MIMEMultipart()
    msg['From'] = smtp_username
    msg['To'] = email_recipient
    msg['Subject'] = "Log inspection found an error"
    msg.attach(MIMEText(message, 'plain'))

    # Connect to the SMTP server and send the email
    server = smtplib.SMTP(smtp_server, smtp_port)
    server.starttls()
    server.login(smtp_username, smtp_password)
    server.send_message(msg)
    server.quit()


if __name__ == "__main__":
    check_logs()

This script uses Python's re module to match error messages. You can define your own error patterns in the configuration file; each should be a valid regular expression. When the script finds a line that matches an error pattern, it records the error in its own log and sends an email containing the error message.
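For reference, a minimal config.ini matching the keys the script reads might look like the following. All values here are placeholders; substitute your own directory, patterns, and mail settings.

```ini
[DEFAULT]
; Directory containing the .log files to inspect
log_dir = /var/log/
; Comma-separated regular expressions to match against each line
error_patterns = ERROR,CRITICAL,Out of memory

[SMTP]
server = smtp.example.com
port = 587
username = alerts@example.com
password = changeme
recipient = admin@example.com
```

Note that because the patterns are split on commas, an individual regular expression cannot itself contain a comma with this simple scheme.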

Note that smtplib, email, concurrent.futures, and re are all part of Python's standard library, so no additional installation is required. Only if you extend the script with third-party packages would you need to install anything with pip install.

We can also continue to optimize this script, for example by adding automatic archiving of log files, or by using a database to store and query log data. However, these features make the script more complex and may require additional dependencies and configuration. In practice, decide whether you need them based on your specific requirements and environment.
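As a sketch of the archiving idea, assuming logs older than a cutoff should be compressed in place, something like the function below could be bolted onto the script. The seven-day default and the in-place .gz naming are arbitrary choices for illustration, not part of the original script.

```python
import glob
import gzip
import os
import shutil
import time

def archive_old_logs(log_dir, max_age_days=7):
    """Gzip-compress .log files not modified within max_age_days, then remove the originals."""
    cutoff = time.time() - max_age_days * 86400
    for path in glob.glob(os.path.join(log_dir, "*.log")):
        # Compare the file's last-modified time against the cutoff
        if os.path.getmtime(path) < cutoff:
            with open(path, 'rb') as src, gzip.open(path + '.gz', 'wb') as dst:
                shutil.copyfileobj(src, dst)
            os.remove(path)
```

Calling `archive_old_logs(log_dir)` at the end of `check_logs()` would keep the log directory from growing without bound, at the cost of the script needing write permission on it.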

In addition, for large-scale log analysis tasks, you may want to consider using specialized log analysis tools or services, such as the ELK (Elasticsearch, Logstash, Kibana) stack, or a log service provided by a cloud service provider. These tools and services often provide powerful log collection, storage, analysis, and visualization capabilities that can greatly simplify log management.
