
Python Crawler Practice 4: Hands-on with the Scrapy Framework

Preface:

Study notes on the Scrapy crawler framework; as practice, we scrape Qiushibaike (糗事百科).

Setup

Install with conda:

conda install -c conda-forge scrapy

or with pip:

pip install Scrapy

Scrapy depends on a few related libraries:

lxml

parsel

w3lib

twisted

cryptography and pyOpenSSL

Generate the initial project files:

Open a terminal (cmd) and change to the target directory:

d:

cd Python/

scrapy startproject qiushibaike
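If the command succeeds, Scrapy generates a project skeleton from its default template, roughly like the following (exact files may vary slightly between Scrapy versions):

```
qiushibaike/
├── scrapy.cfg            # deploy/run configuration
└── qiushibaike/          # the project's Python module
    ├── __init__.py
    ├── items.py          # item definitions
    ├── middlewares.py    # spider and downloader middlewares
    ├── pipelines.py      # item pipelines
    ├── settings.py       # project settings
    └── spiders/          # spider code goes here
        └── __init__.py
```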
      

Implementation:

Analyze the actual page before writing selectors:

the page is clearly different from the one used in the textbook.


Inside the spiders directory:

import scrapy


class QiushiSpider(scrapy.Spider):
    name = "qiushibaike"
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36'
    }

    def start_requests(self):
        urls = [
            'https://www.qiushibaike.com/text/page/12/',
            'https://www.qiushibaike.com/text/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse, headers=self.headers)

    def parse(self, response):
        content_left_div = response.xpath('//div[@class="article block untagged mb15 typs_long"]')
        for content_div in content_left_div:
            # Use a relative path (".//") so the search stays inside the
            # current item's div; an absolute "//" path would re-search the
            # whole page and return the first author for every item.
            yield {
                'author': content_div.xpath('.//div[@class="author clearfix"]/a[2]/h2/text()').get(),
                'content': content_div.xpath('.//a[@class="contentHerf"]/div/span/text()').getall(),
            }
