
HTTP Error 500.0 - Internal Server Error: HTTP Header Information Disclosure

Vulnerability description: the server discloses information about its software (e.g. the Server header) in its HTTP response headers.

How to test: inspect the response headers with Burp Suite, Fiddler, or the browser's F12 developer tools.
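The same check can be done programmatically. A minimal sketch (the header list and the `find_leaky_headers` helper are my own illustration, not part of the original post) that flags common information-disclosure headers in a response-headers dict:

```python
# Headers that commonly reveal server/framework details (illustrative, not exhaustive)
DISCLOSURE_HEADERS = ["Server", "X-Powered-By", "X-AspNet-Version"]

def find_leaky_headers(headers):
    """Return the subset of response headers that disclose server information."""
    return {h: headers[h] for h in DISCLOSURE_HEADERS if h in headers}

# Example response headers, as Burp Suite or the F12 tools might show them
sample = {
    "Content-Type": "text/html",
    "Server": "Microsoft-IIS/7.5",
    "X-Powered-By": "ASP.NET",
}
print(find_leaky_headers(sample))
```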


Checking a single site this way is fine, but it becomes tedious with many sites: with 100 sites, you would have to repeat the check 100 times. Some people may be willing to do that; I'm lazy, so I wrote a script to query them in bulk.

Script for checking a single site; the code and usage are as follows:

import sys
import requests


# Read the target URL from the command line
url = sys.argv[1]
# Custom User-Agent
header = {"User-Agent": "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"}


# Query the Server header of a URL
def get_url_server(url):
    try:
        url_html = requests.get(url, headers=header, timeout=0.5)
        url_server = url_html.headers.get("Server", "not disclosed")
        print(url + "\tServer:\t" + url_server)
    except requests.exceptions.RequestException:
        print("Error, please try again")

if __name__ == "__main__":
    get_url_server(url)
           

Batch query

Put the URLs to be queried into a text file, e.g. url.txt.
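Lines read from url.txt carry trailing newlines and may lack a scheme. A small cleanup helper can handle both before requesting; `normalize_url` below is a hypothetical helper of my own, and defaulting to `http://` is an assumption:

```python
def normalize_url(line):
    """Strip surrounding whitespace and prepend http:// when no scheme is present
    (assumption: plain hostnames in url.txt should be treated as HTTP)."""
    url = line.strip()
    if url and not url.startswith(("http://", "https://")):
        url = "http://" + url
    return url

print(normalize_url("  example.com\n"))  # -> http://example.com
```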

The code is as follows:

import requests


# Custom User-Agent
header = {"User-Agent": "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"}


# Query the Server header of a URL
def get_url_server(url):
    try:
        url_html = requests.get(url, headers=header, timeout=0.5)
        url_server = url_html.headers.get("Server", "not disclosed")
        print(url + "\tServer:\t" + url_server)
    except requests.exceptions.RequestException:
        print("Error, please try again")

if __name__ == "__main__":
    with open("url.txt", "r") as f:
        for line in f:
            url = line.strip()
            if url:
                get_url_server(url)
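With hundreds of URLs, the sequential loop above is slow, since each request can block for up to its 0.5 s timeout. One way to speed it up is a thread pool. The sketch below is my own variant (the names `fetch_server` and `scan` are not from the original); the header getter is passed in as a parameter so the fetch step can be swapped for `lambda u: requests.get(u, timeout=0.5).headers` in real use:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_server(url, get_headers):
    """Return (url, Server header value). get_headers(url) should return a
    mapping of response headers; exceptions are reported as 'error'."""
    try:
        headers = get_headers(url)
        return url, headers.get("Server", "not disclosed")
    except Exception:
        return url, "error"

def scan(urls, get_headers, workers=10):
    """Fetch the Server header of every URL concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda u: fetch_server(u, get_headers), urls))
```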
           