
HTTP Error 500.0 - Internal Server Error: HTTP header information disclosure

Vulnerability description: the server discloses information about its software (e.g. the Server header) in the HTTP response headers it returns.

Test method: inspect the response headers with Burp Suite, Fiddler, or the browser's F12 developer tools.
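As a quick illustration of what to look for, a captured response header block (the sample text below is made up, not from a real host) can be scanned for the Server line like this:

```python
# Sketch: extract the Server header from a raw HTTP response header block,
# as seen in Burp Suite, Fiddler, or the F12 Network tab.
# The sample response below is illustrative only.
raw = (
    "HTTP/1.1 500 Internal Server Error\r\n"
    "Content-Type: text/html\r\n"
    "Server: Microsoft-IIS/7.5\r\n"
    "X-Powered-By: ASP.NET\r\n"
)

def server_header(raw_headers):
    # Header names are case-insensitive, so compare in lowercase
    for line in raw_headers.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "(not disclosed)"

print(server_header(raw))  # Microsoft-IIS/7.5
```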


Checking a single site this way is fine, but it gets tedious once there are many. With 100 sites, would you really check 100 times? Some people would; I'm lazy, so I wrote a script to query them in batch.

The script for checking a single site is below; pass the URL as a command-line argument:

import sys
import requests


# Get the target URL from the command line
url = sys.argv[1]
# Custom User-Agent (here spoofing Baiduspider)
header = {"User-Agent": "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"}


# Query and print a URL's Server header
def get_url_server(url):
    try:
        # 0.5 s is an aggressive timeout; raise it for slow hosts
        url_html = requests.get(url, headers=header, timeout=0.5)
        # .get() avoids a KeyError when the Server header is absent
        url_server = url_html.headers.get("Server", "(not disclosed)")
        print(url + "\tServer:\t" + url_server)
    except IOError:
        # requests' RequestException subclasses IOError, so this also
        # catches timeouts and connection errors
        print("Error requesting " + url + ", please try again")

if __name__ == "__main__":
    get_url_server(url)

Batch query

Put the URLs to query into a text file, one per line, e.g. url.txt.
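For example, url.txt might look like this (the hosts below are placeholders):

```
http://example.com
http://example.org
https://example.net
```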

The code is as follows:

import requests


# Custom User-Agent (here spoofing Baiduspider)
header = {"User-Agent": "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"}


# Query and print a URL's Server header
def get_url_server(url):
    try:
        # 0.5 s is an aggressive timeout; raise it for slow hosts
        url_html = requests.get(url, headers=header, timeout=0.5)
        # .get() avoids a KeyError when the Server header is absent
        url_server = url_html.headers.get("Server", "(not disclosed)")
        print(url + "\tServer:\t" + url_server)
    except IOError:
        # requests' RequestException subclasses IOError
        print("Error requesting " + url + ", please try again")

if __name__ == "__main__":
    with open("url.txt", "r") as f:
        for line in f:
            # strip() removes the trailing newline; unlike [:-1] it does
            # not eat the last character of a final line with no newline
            get_url_server(line.strip())
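One small robustness tweak, sketched here under the assumption that url.txt may contain blank lines or # comment lines: filter those out before querying, instead of treating every line as a URL.

```python
def load_urls(text):
    """Return non-empty, non-comment lines from url.txt-style content."""
    urls = []
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and lines commented out with '#'
        if line and not line.startswith("#"):
            urls.append(line)
    return urls

sample = "http://example.com\n\n# internal, skip\nhttp://example.org\n"
print(load_urls(sample))  # ['http://example.com', 'http://example.org']
```

Feeding the file contents through load_urls before the for loop keeps the batch script from wasting a request (and an error line) on every stray blank line.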
           