
The urllib Module in Python 3

Introduction

  urllib is Python's package for fetching URLs (Uniform Resource Locators); it can be used to retrieve data from remote servers.

Common Methods

(1) urlopen

  urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)

urllib.request.urlopen() fetches a page. The page content comes back as bytes and must be converted to str with decode().

Parameters:

  • url : the URL to open
  • data : the request body, usually built from a dict. When data is None (the default) the request is a GET; when data is not None, urlopen() submits a POST. Note that POST data must first be converted to bytes;
  • timeout : timeout in seconds for the connection (a short sketch of data and timeout follows the examples below)
from urllib import request
response = request.urlopen("http://members.3322.org/dyndns/getip")
# <http.client.HTTPResponse object at 0x031F63B0>
page = response.read()
# b'106.37.169.186\n'
page = page.decode("utf-8")
# '106.37.169.186\n'

# Using a with statement
with request.urlopen("http://members.3322.org/dyndns/getip") as response:
    page = response.read()
    print(page.decode("utf-8"))
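
As a minimal sketch of the data and timeout parameters (httpbin.org is used here purely as a stand-in test endpoint; it is not part of the original example):

from urllib import request, parse

# Passing data switches the request to POST; the dict must be
# urlencoded and then encoded to bytes (see parse.urlencode below).
payload = parse.urlencode({'kd': 'Python'}).encode('utf-8')
with request.urlopen('http://httpbin.org/post', data=payload, timeout=5) as response:
    print(response.read().decode('utf-8'))  # httpbin echoes the submitted form data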
      

  Note: urllib.request uses the same interface to handle all types of URL, for example:

req = urllib.request.urlopen('ftp://example.com/')
      

  

  Methods provided by the object urlopen returns (a short sketch follows this list):

  • read(), readline(), readlines(), fileno(), close() : operate on the HTTPResponse data
  • info() : returns an HTTPMessage object holding the header information the remote server sent back
  • getcode() : returns the HTTP status code; for HTTP requests, 200 means the request completed successfully and 404 means the page was not found
  • geturl() : returns the URL that was requested
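
A quick sketch of these accessors, reusing the URL from the first example:

from urllib import request

with request.urlopen("http://members.3322.org/dyndns/getip") as response:
    print(response.geturl())   # the URL that was actually retrieved
    print(response.getcode())  # 200 on success
    print(response.info())     # the response headers as an HTTPMessage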

(2) Request

  urllib.request.Request(url, data=None, headers={}, method=None)

from urllib import request

url = r'http://www.lagou.com/zhaopin/Python/?labelWords=label'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
req = request.Request(url, headers=headers)
page = request.urlopen(req).read()
page = page.decode('utf-8')
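
The method argument in the signature above selects the HTTP verb explicitly (Python 3.3+). A minimal sketch using HEAD to fetch headers only; note that the site may well reject such requests:

from urllib import request

req = request.Request(r'http://www.lagou.com/zhaopin/Python/?labelWords=label', method='HEAD')
with request.urlopen(req) as response:
    print(response.getcode())  # status code
    print(response.info())     # headers only; HEAD transfers no body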
      

(3) parse.urlencode

  urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None)

The main job of urlencode() is to encode the data to be submitted into a query string that can be attached to the URL or sent as the request body.

from urllib import request, parse
url = r'http://www.lagou.com/jobs/positionAjax.json?'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
data = parse.urlencode(data).encode('utf-8')
# data is now bytes: b'first=true&pn=1&kd=Python'. POST data must be bytes or
# an iterable of bytes, not str, hence the encode() call.
# The URL finally submitted is: http://www.lagou.com/jobs/positionAjax.json?first=true&pn=1&kd=Python
req = request.Request(url, headers=headers, data=data)
# req is now: <urllib.request.Request object at 0x02F52A30>
page = request.urlopen(req).read()
# page is bytes: b'{"success":false,"msg":"\xe6\x82\xa8\xe6\x93\x8d\xe4\xbd\x9c\xe5\xa4\xaa\xe9\xa2\x91\xe7\xb9\x81,\xe8\xaf\xb7\xe7\xa8\x8d\xe5\x90\x8e\xe5\x86\x8d\xe8\xae\xbf\xe9\x97\xae","clientIp":"106.37.169.186"}\n'
page = page.decode('utf-8')
# page is now a str: '{"success":false,"msg":"您操作太频繁,请稍后再访问","clientIp":"106.37.169.186"}'
# (the msg means "you are operating too frequently, please try again later")
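
The doseq flag in the signature above controls how sequence values are encoded; a quick interactive-style sketch:

from urllib import parse

print(parse.urlencode({'kd': 'Python', 'pn': 1}))
# kd=Python&pn=1
print(parse.urlencode({'kd': ['Python', 'Java']}, doseq=True))
# kd=Python&kd=Java  (each element becomes its own key=value pair)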
      

(4) Proxies: request.ProxyHandler(proxies=None)

When the site you want to scrape restricts direct access, a proxy can be used to fetch the data.

from urllib import request, parse

url = r'http://www.lagou.com/jobs/positionAjax.json?'
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
proxy = request.ProxyHandler({'http': '5.22.195.215:80'})  # configure the proxy
opener = request.build_opener(proxy)  # build an opener that uses it
request.install_opener(opener)  # install the opener globally
data = parse.urlencode(data).encode('utf-8')
page = opener.open(url, data).read()
page = page.decode('utf-8')
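
Note that once install_opener() has been called, every subsequent request.urlopen() call also goes through the proxy; calling opener.open() directly, as above, would use the proxy even without installing it globally.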
      

(5) Exception Handling

  urlopen raises URLError when it cannot handle a response. HTTPError is the subclass of URLError raised in the special case of HTTP URLs. Both exception classes live in the urllib.error module.

URLError:

  Typically, URLError is raised because there is no network connection (no route to the specified server), or the specified server does not exist. In that case, the raised exception has a 'reason' attribute containing the cause of the failure, e.g. an error code and an error message.

import urllib.request
import urllib.error

req = urllib.request.Request('http://www.pretend_server.org')
try:
    urllib.request.urlopen(req)
except urllib.error.URLError as e:
    print(e.reason)
# Output (the exact message varies by platform), e.g.:
(4, 'getaddrinfo failed')
      

HTTPError:

  Every HTTP response from the server carries a numeric "status code". Sometimes the status code indicates that the server was unable to fulfil the request. The default handlers deal with some of these responses for you (for example, if the response is a "redirect" asking the client to fetch the document from a different URL, urllib handles that automatically). For responses it cannot handle, urlopen raises an HTTPError.

Typical errors include '404' (page not found), '403' (request forbidden), and '401' (authentication required).

# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols',
          'Switching to new protocol; obey Upgrade header'),

    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted',
          'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),

    300: ('Multiple Choices',
          'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified',
          'Document has not changed since given time'),
    305: ('Use Proxy',
          'You must use proxy specified in Location to access this '
          'resource.'),
    307: ('Temporary Redirect',
          'Object moved temporarily -- see URI list'),

    400: ('Bad Request',
          'Bad request syntax or unsupported method'),
    401: ('Unauthorized',
          'No permission -- see authorization schemes'),
    402: ('Payment Required',
          'No payment -- see charging schemes'),
    403: ('Forbidden',
          'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed',
          'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required', 'You must authenticate with '
          'this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone',
          'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable',
          'Cannot satisfy request range.'),
    417: ('Expectation Failed',
          'Expect condition could not be satisfied.'),

    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented',
          'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable',
          'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout',
          'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
    }      

Error codes

  Handling the exception:

import urllib.request
import urllib.error

req = urllib.request.Request('http://www.python.org/fish.html')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.info())
    print(e.geturl())
    print(e.read())
      

  Or:

from urllib.request import Request, urlopen
from urllib.error import URLError
req = Request(someurl)
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print('The server couldn\'t fulfill the request.')
        print('Error code: ', e.code)	
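
Since HTTPError is a subclass of URLError, the two cases can equally be separated into two except clauses, catching the subclass first (someurl is a placeholder, as above):

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request(someurl)
try:
    response = urlopen(req)
except HTTPError as e:
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)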
      
