
Python Crawler from Beginner to Advanced (Part 12)

In earlier articles we covered the re module and the lxml module for writing crawlers. In this article we look at another option: the bs4 module.

Like lxml, Beautiful Soup is an HTML/XML parser; its main job is likewise parsing and extracting HTML/XML data.

lxml only traverses the document locally, whereas Beautiful Soup is based on the HTML DOM: it loads the whole document and parses the entire DOM tree, so its time and memory overhead are much higher and its performance is lower than lxml's.

BeautifulSoup is fairly simple to use for parsing HTML and has a very friendly API. It supports CSS selectors, the HTML parser from Python's standard library, and lxml's XML parser.
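
As a quick illustration of those parser choices (a minimal sketch, not from the original article; the snippet and variable names are made up): the second argument to the constructor selects the parser. "html.parser" is the standard-library parser and needs no extra install; "lxml" and "xml" both require the lxml package.

from bs4 import BeautifulSoup

snippet = "<div><p>hello</p></div>"

# Standard-library HTML parser: no extra dependency
soup_std = BeautifulSoup(snippet, "html.parser")

# lxml HTML parser: faster, requires `pip install lxml`
soup_lxml = BeautifulSoup(snippet, "lxml")

# lxml XML parser, for XML documents
soup_xml = BeautifulSoup("<root><item>1</item></root>", "xml")

print(soup_std.p.string)    # hello
print(soup_lxml.p.string)   # hello
print(soup_xml.item.string) # 1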

Beautiful Soup 3 is no longer developed; Beautiful Soup 4 is recommended for current projects. Install it with pip:

pip install beautifulsoup4

Official documentation: http://beautifulsoup.readthedocs.io/zh_CN/v4.4.0
Scraping tool        Speed     Ease of use   Ease of installation
Regular expressions  Fastest   Hard          None (built in)
BeautifulSoup        Slow      Easiest       Easy
lxml                 Fast      Easy          Moderate

First, the bs4 library must be imported:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

# The object can also be created from a local HTML file
# soup = BeautifulSoup(open('index.html'), "lxml")

# Pretty-print the contents of the soup object
print(soup.prettify())

Output:

<html>
 <body>
  <div>
   <ul>
    <li class="item-0">
     <a href="link1.html">
      first item
     </a>
    </li>
    <li class="item-1">
     <a href="link2.html">
      second item
     </a>
    </li>
    <li class="item-inactive">
     <a href="link3.html">
      <span class="bold">
       third item
      </span>
     </a>
    </li>
    <li class="item-1">
     <a href="link4.html">
      fourth item
     </a>
    </li>
    <li class="item-0">
     <a href="link5.html">
      fifth item
     </a>
    </li>
   </ul>
  </div>
 </body>
</html>

The four kinds of objects

Beautiful Soup将複雜HTML文檔轉換成一個複雜的樹形結構,每個節點都是Python對象,所有對象可以歸納為4種:

  • Tag
  • NavigableString
  • BeautifulSoup
  • Comment

1. Tag

A Tag is, roughly speaking, one of the tags in the HTML document. For example:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li)  # <li class="item-0"><a href="link1.html">first item</a></li>
print(soup.a)  # <a href="link1.html">first item</a>
print(soup.span)  # <span class="bold">third item</span>
print(soup.p)  # None
print(type(soup.li))  # <class 'bs4.element.Tag'>

We can easily get these tags by appending the tag name to soup; the resulting objects are of type bs4.element.Tag. Note, however, that this returns only the first matching tag in the whole document. How to query for all matching tags is covered later.

A Tag has two important attributes: name and attrs.

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.attrs)  # {'class': ['item-0']}
print(soup.li["class"])  # ['item-0']
print(soup.li.get('class'))  # ['item-0']

print(soup.li)  # <li class="item-0"><a href="link1.html">first item</a></li>
soup.li["class"] = "newClass"  # attributes (and content) can be modified
print(soup.li)  # <li class="newClass"><a href="link1.html">first item</a></li>

del soup.li['class']  # an attribute can also be deleted
print(soup.li)  # <li><a href="link1.html">first item</a></li>

2. NavigableString

Now that we have a tag, the next question is how to get the text inside it. That is simple: just use .string, for example:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.string)  # first item
print(soup.a.string)  # first item
print(soup.span.string)  # third item
# print(soup.p.string)  # AttributeError: 'NoneType' object has no attribute 'string'
print(type(soup.li.string))  # <class 'bs4.element.NavigableString'>
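
One caveat the example above does not show: .string only returns text when a tag has exactly one child. For a tag with several children, such as the ul here, it returns None, and .strings or .stripped_strings can be used to walk all of the text instead. A small sketch against the same soup object:

print(soup.ul.string)  # None, because ul has several li children

# Iterate over every text fragment, with surrounding whitespace stripped
for s in soup.ul.stripped_strings:
    print(s)
# first item
# second item
# third item
# fourth item
# fifth item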

3. BeautifulSoup

The BeautifulSoup object represents the content of the document as a whole. Most of the time it can be treated as a Tag object; it is a special kind of Tag. We can look at its type, name, and attributes to get a feel for it:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.name)  # [document]
print(soup.attrs)  # {}, the document itself has no attributes
print(type(soup.name))  # <class 'str'>

4. Comment

A Comment object is a special kind of NavigableString object; when printed, its content does not include the comment markers.

from bs4 import BeautifulSoup

html = """
<div>
   <a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.a)  # <a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>
print(soup.a.string)  # Elsie
print(type(soup.a.string))  # <class 'bs4.element.Comment'>

The content inside the a tag is actually a comment, but when we output it with .string, the comment markers have already been stripped.
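
Because the markers disappear, a comment is easy to mistake for ordinary text. A cautious pattern (a minimal sketch building on the soup object above) is to check the type before using the value:

from bs4 import Comment

text = soup.a.string
if isinstance(text, Comment):
    print("comment:", text)
else:
    print("text:", text)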

Traversing the document tree

1. Direct children: the .contents and .children attributes

.contents

A tag's .contents attribute returns the tag's direct children as a list, so we can use list indexing to pick out any one of them.

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.li.contents)  # [<a href="link1.html">first item</a>]
print(soup.li.contents[0])  # <a href="link1.html">first item</a>

.children

It does not return a list, but we can iterate over it to reach every child node. If we print .children, we can see that it is a list iterator object:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")

print(soup.ul.children)  # <list_iterator object at 0x106388a20>
for child in soup.ul.children:
    print(child)

Output:

<li class="item-0"><a href="link1.html">first item</a></li>


<li class="item-1"><a href="link2.html">second item</a></li>


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>


<li class="item-1"><a href="link4.html">fourth item</a></li>


<li class="item-0"><a href="link5.html">fifth item</a></li>
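
The blank lines in the output are themselves children: they are the whitespace text nodes sitting between the li elements in the source markup. Printing the repr of each child makes this visible (a small sketch on the same soup object):

for child in soup.ul.children:
    print(repr(child))
# whitespace strings such as '\n         ' appear between the <li> tags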

2. All descendants: the .descendants attribute

The .contents and .children attributes include only a tag's direct children. The .descendants attribute recursively walks all of a tag's descendants; like .children, it has to be iterated over to get at its contents.

for child in soup.ul.descendants:
    print(child)

Output:

<li class="item-0"><a href="link1.html">first item</a></li>
<a href="link1.html">first item</a>
first item


<li class="item-1"><a href="link2.html">second item</a></li>
<a href="link2.html">second item</a>
second item


<li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
<a href="link3.html"><span class="bold">third item</span></a>
<span class="bold">third item</span>
third item


<li class="item-1"><a href="link4.html">fourth item</a></li>
<a href="link4.html">fourth item</a>
fourth item


<li class="item-0"><a href="link5.html">fifth item</a></li>
<a href="link5.html">fifth item</a>
fifth item

Searching the document tree

1. find_all(name, attrs, recursive, text, **kwargs)

1) The name parameter

The name parameter finds all tags with that name; string objects (plain text nodes) are ignored automatically.

A. Passing a string

The simplest filter is a string. When you pass a string to a search method, Beautiful Soup looks for content that matches the string exactly. The following example finds all of the document's <span> tags:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all('span'))  # [<span class="bold">third item</span>]

B. Passing a regular expression

If you pass in a regular expression object, Beautiful Soup filters tag names against it (using the pattern's search() method). The following example finds all tags whose names begin with s, which means the <span> tag should be found:

from bs4 import BeautifulSoup
import re

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
for tag in soup.find_all(re.compile("^s")):
    print(tag)
# <span class="bold">third item</span>

C. Passing a list

If you pass in a list, Beautiful Soup returns everything that matches any element of the list. The following code finds all of the document's <a> tags and <span> tags:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(["a", "span"]))
# [<a href="link1.html">first item</a>, <a href="link2.html">second item</a>, <a href="link3.html"><span class="bold">third item</span></a>, <span class="bold">third item</span>, <a href="link4.html">fourth item</a>, <a href="link5.html">fifth item</a>]

2) Keyword arguments

A keyword argument whose name is not one of find_all's own parameters is treated as a filter on a tag attribute of that name; for example, passing href searches every tag's href attribute:

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(href='link1.html'))  # [<a href="link1.html">first item</a>]
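
One attribute needs special handling, which the example above does not show: class is a reserved word in Python, so filtering by CSS class uses the class_ keyword instead. A small sketch against the same soup object:

print(soup.find_all("li", class_="item-0"))
# [<li class="item-0"><a href="link1.html">first item</a></li>,
#  <li class="item-0"><a href="link5.html">fifth item</a></li>]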

3) The text parameter

The text parameter searches the string content of the document. Like the name parameter, text accepts a string, a regular expression, or a list:

from bs4 import BeautifulSoup
import re

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(text="first item"))  # ['first item']
print(soup.find_all(text=["first item", "second item"]))  # ['first item', 'second item']
print(soup.find_all(text=re.compile("item")))  # ['first item', 'second item', 'third item', 'fourth item', 'fifth item']
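
From Beautiful Soup 4.4.0 onward the same filter is also available under the name string, which the current documentation prefers; text remains an alias. A minimal sketch against the same soup object:

print(soup.find_all(string=re.compile("item")))
# ['first item', 'second item', 'third item', 'fourth item', 'fifth item']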

CSS selectors

This is another way of searching, very much in the same spirit as the find_all method.

  • When writing CSS, tag names take no prefix, class names are prefixed with ., and id names are prefixed with #.
  • Here we can filter elements in the same way, using the soup.select() method, whose return type is a list.
(1) Searching by tag name

from bs4 import BeautifulSoup

html = """
<div>
    <ul>
         <li class="item-0"><a href="link1.html">first item</a></li>
         <li class="item-1"><a href="link2.html">second item</a></li>
         <li class="item-inactive"><a href="link3.html"><span class="bold">third item</span></a></li>
         <li class="item-1"><a href="link4.html">fourth item</a></li>
         <li class="item-0"><a href="link5.html">fifth item</a></li>
     </ul>
 </div>
"""

# Create a Beautiful Soup object
soup = BeautifulSoup(html, "lxml")
print(soup.select('span'))  # [<span class="bold">third item</span>]

(2) Searching by class name

print(soup.select('.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]

(3) Searching by id

print(soup.select('#item-0'))  # [], no element in this document has id "item-0" (item-0 is a class, not an id)
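
An id selector does match when an element actually carries that id. The markup from the Comment example above has id="link1", so against that snippet (a small sketch; html2 and soup2 are just second, illustrative names):

html2 = '<div><a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a></div>'
soup2 = BeautifulSoup(html2, "lxml")
print(soup2.select('#link1'))
# [<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]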

(4) Compound selectors

For example, li.item-0 selects li tags with the class item-0, and li.item-0 > a selects a tags that are direct children of such li tags:

print(soup.select('li.item-0'))
# [<li class="item-0"><a href="link1.html">first item</a></li>, <li class="item-0"><a href="link5.html">fifth item</a></li>]
print(soup.select('li.item-0>a'))
# [<a href="link1.html">first item</a>, <a href="link5.html">fifth item</a>]

(5) Searching by attribute

print(soup.select('a[href="link1.html"]'))  # [<a href="link1.html">first item</a>]

(6) Getting the text content

for text in soup.select('li'):
    print(text.get_text())
"""
first item
second item
third item
fourth item
fifth item
"""