
Using the Scrapy automatic crawler (CrawlSpider) to scrape jokes from Qiushibaike

1. Install the Scrapy framework.

2. Open cmd and create the project: scrapy startproject qsauto (qsauto is the project name). A sketch of the generated layout follows below.
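For reference, startproject generates a project skeleton roughly like this (exact files vary slightly across Scrapy versions):

qsauto/
    scrapy.cfg            # deploy configuration
    qsauto/
        __init__.py
        items.py          # item container, edited below
        middlewares.py
        pipelines.py      # item processing, edited below
        settings.py       # project settings, edited in steps 4-6
        spiders/
            __init__.py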

3. Enter scrapy genspider -t crawl weisuen qiushibaike.com (crawl selects the automatic-crawler template, weisuen is the spider name, qiushibaike.com is the domain). The generated skeleton is sketched below.
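The -t crawl option generates weisuen.py from Scrapy's CrawlSpider template. The skeleton looks roughly like this (details vary by Scrapy version); the finished spider shown later is an edit of this file:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class WeisuenSpider(CrawlSpider):
    name = 'weisuen'
    allowed_domains = ['qiushibaike.com']
    start_urls = ['http://qiushibaike.com/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        i = {}
        return i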

4. In settings.py, uncomment the code on line 22 and change True to False:

ROBOTSTXT_OBEY = False
           

5. In settings.py, also uncomment the code on lines 12, 14, and 15:

BOT_NAME = 'qsauto'

SPIDER_MODULES = ['qsauto.spiders']
NEWSPIDER_MODULE = 'qsauto.spiders'
           

6. In settings.py, uncomment the code on line 19 and fill in a User-Agent string to simulate a real browser visit; this lays the groundwork for keeping the crawler running later:

USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"
           

The complete code for the remaining files is as follows:

items.py (the item container)

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class QsautoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    content = scrapy.Field()  # the joke text
    link = scrapy.Field()     # the page's canonical link
           

weisuen.py (the spider file)

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.http import Request
from qsauto.items import QsautoItem

class WeisuenSpider(CrawlSpider):
    name = 'weisuen'
    allowed_domains = ['qiushibaike.com']
    # start_urls is commented out because start_requests() below issues the
    # first request with a browser User-Agent header instead.
    '''
    start_urls = ['http://qiushibaike.com/']
    '''
    rules = (
        Rule(LinkExtractor(allow=r'article'), callback='parse_item', follow=True),
    )
    # callback names the parse method for matched pages; follow controls whether
    # to keep following links from them; allow is the URL pattern to crawl,
    # chosen from the site's URL structure.

    def start_requests(self):
        header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'}
        yield Request("http://qiushibaike.com/", headers=header)

    def parse_item(self, response):  # callback that parses each matched article page
        i = QsautoItem()
        # Extract the joke text and the page's canonical link.
        i["content"] = response.xpath('//div[@class="content"]/text()').extract()
        i["link"] = response.xpath('//link[@rel="canonical"]/@href').extract()
        print(i["content"])
        print(i["link"])
        return i
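
To see what the allow pattern in the Rule actually matches, here is a minimal standalone sketch; the HTML and URLs are made up for illustration only:

# Sketch: LinkExtractor keeps only the links whose URL matches the allow pattern.
from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

html = b'<a href="/article/119238234">a joke</a> <a href="/hot/">hot page</a>'
response = HtmlResponse(url='https://www.qiushibaike.com/', body=html, encoding='utf-8')

links = LinkExtractor(allow=r'article').extract_links(response)
print([l.url for l in links])  # only the /article/... link is returned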
           

pipelines.py (item processing)

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class QsautoPipeline(object):
    def process_item(self, item, spider):
        # item["content"] is a list of strings (from .extract()), so join it;
        # open in append mode so each item adds to the file instead of overwriting it.
        with open('qiushibaike.txt', 'a', encoding='utf-8') as f:
            f.write("\n".join(item["content"]) + "\n")
        return item
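
As the generated comment at the top of the file says, this pipeline only runs once it is registered in settings.py; a minimal sketch (300 is an arbitrary priority between 0 and 1000):

ITEM_PIPELINES = {
    'qsauto.pipelines.QsautoPipeline': 300,
}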
           

Finally, run the crawler with scrapy crawl weisuen, and you're done.
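
If you also want the scraped items in a structured file, Scrapy's built-in feed export can write them directly without a pipeline, for example (items.json is just an example filename):

scrapy crawl weisuen -o items.json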