
Downloading Baidu images to local disk by keyword with scrapy ImagesPipeline

  • The scrapy framework
  • I. Image downloading in scrapy: ImagesPipeline
  • II. Downloading Baidu images by keyword
    • 1. Building the Baidu image request and parsing image URLs
    • 2. Downloading images to local disk with ImagesPipeline
  • Summary

The scrapy framework

scrapy is an asynchronous (Twisted-based) crawler framework that integrates requesting, parsing, and storage in one place. For an introduction to the framework and its main components, see:

Scrapy project structure for beginners: batch-downloading Baidu images with Python

The rest of this article uses downloading Baidu images to local disk as an example of fetching images with scrapy.

I. Image downloading in scrapy: ImagesPipeline

ImagesPipeline is the image-download class that scrapy provides. We can define a pipeline that subclasses ImagesPipeline to implement custom image downloading.

In the scrapy source (https://github.com/scrapy/scrapy/tree/master/scrapy), the pipelines folder contains three Python files: files.py, images.py, and media.py. When we use ImagesPipeline to handle images, it is mainly the methods defined in these three files that do the work.


The request and local-storage logic for images lives in these three files, and several methods in images.py are intended to be overridden. (A further detail: ImagesPipeline itself subclasses FilesPipeline, as images.py shows; this will be covered in the next article.)
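
For quick orientation, here is a minimal sketch of the three hooks that a custom pipeline most often overrides; the signatures match recent scrapy releases (newer versions additionally pass a keyword-only item argument to file_path). The project's concrete pipeline follows in the next section.

from scrapy.pipelines.images import ImagesPipeline


class MyImagesPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # turn the item's URL fields into download Requests
        ...

    def file_path(self, request, response=None, info=None):
        # return the storage path, relative to IMAGES_STORE, for one file
        ...

    def item_completed(self, results, item, info):
        # post-process the item; results is a list of (success, file_info) tuples
        ...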

II. Downloading Baidu images by keyword

1. Building the Baidu image request and parsing image URLs

Start by defining the spider's name, allowed_domains, and the search keyword:

image_spider.py

import scrapy
import json
from baidu_crawler.items import BaiduCrawlerItem


class ImageSpiderSpider(scrapy.Spider):
    name = 'image_spider'
    allowed_domains = ['image.baidu.com']
    key = r'猫'  # search keyword ('cat')
    # request headers used by parse() below; a browser User-Agent keeps
    # Baidu from rejecting the acjson requests
    default_headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    }
           

Build the initial request to find out how many images Baidu can provide, which the later paging requests depend on. Request takes three arguments here: url is the address to fetch, callback is the function that parses the response (parse() below), and dont_filter=False means duplicate URLs are filtered out. The full parameter list of Request is defined in

https://github.com/scrapy/scrapy/blob/be655b855da3f5643b004e9f2d5b9161266c17f4/scrapy/http/request/__init__.py

image_spider.py

def start_requests(self):
    # first request with pn=0, used only to read the total result count
    url = 'http://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&queryWord={word}&word={word}&pn=0'.format(word=self.key)
    yield scrapy.Request(url=url, callback=self.parse, dont_filter=False)
           

Then parse the response to read the total image count, and page through all available results; the pn offset advances in steps of 30, matching the batch size the acjson endpoint returns:

image_spider.py

def parse(self, response):
    # total number of images Baidu can provide for this keyword
    baidu_page_num = json.loads(response.body)['listNum']
    start_urls = [
        'http://image.baidu.com/search/acjson?tn=resultjson_com&ipn=rj&queryWord={word}&word={word}&pn={pn}'.format(
            word=self.key, pn=str(i)) for i in range(0, baidu_page_num, 30)]

    for url in start_urls:
        yield scrapy.Request(url=url, headers=self.default_headers, callback=self.parse_two, dont_filter=False)
           

For each paging response above, parse the result, collect all image URLs, and store them in the item defined in items.py:

image_spider.py

def parse_two(self, response):
    item = BaiduCrawlerItem()
    image_urls = []
    image_data = json.loads(response.body)['data']
    for image in image_data:
        # the data array may contain empty entries (typically the last one),
        # so only keep entries that actually carry a thumbnail URL
        if 'thumbURL' in image:
            image_urls.append(image['thumbURL'])

    item['image_urls'] = image_urls
    yield item
           

where BaiduCrawlerItem is defined as follows (the pipeline later assigns item['image_paths'], so that field is declared as well):

items.py

import scrapy


class BaiduCrawlerItem(scrapy.Item):
    image_urls = scrapy.Field()
    # filled in by the pipeline's item_completed() with the stored file paths
    image_paths = scrapy.Field()
           

2. Downloading images to local disk with ImagesPipeline

The project pipeline subclasses ImagesPipeline and overrides the download-related methods:

pipelines.py

from scrapy import Request
from scrapy.exceptions import DropItem
from scrapy.pipelines.images import ImagesPipeline


class BaiduCrawlerDownloadPipeline(ImagesPipeline):
    # Issue a Request for every image URL passed over from the spider;
    # ImagesPipeline then downloads and saves the images.
    def get_media_requests(self, item, info):
        for image_url in item['image_urls']:
            yield Request(image_url)

    def item_completed(self, results, item, info):
        # results is a list of (success, file_info_or_failure) tuples
        image_paths = [x['path'] for ok, x in results if ok]
        if not image_paths:
            raise DropItem("Item contains no images")
        item['image_paths'] = image_paths
        return item

    # Rename the downloaded image
    def file_path(self, request, response=None, info=None):
        # Name the file after the last path segment of the image URL, e.g. for
        # https://ss1.bdstatic.com/70cFvXSh_Q1YnxGkpoWK1HF6hhy/it/u=1340038759,2253650778&fm=26&gp=0.jpg
        # the file name becomes u=1340038759,2253650778&fm=26&gp=0.jpg.
        # Any scheme works here; scrapy's default is the SHA1 hash of the image URL.
        # The returned path is interpreted relative to IMAGES_STORE.
        image_name = request.url.split('/')[-1]
        return image_name
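
For comparison, scrapy's own file_path in images.py derives the file name from a SHA1 hash of the request URL; roughly (paraphrased from the scrapy source, details vary across versions):

import hashlib

from scrapy.utils.python import to_bytes


def file_path(self, request, response=None, info=None):
    # scrapy's default: name the file after the SHA1 of the image URL
    image_guid = hashlib.sha1(to_bytes(request.url)).hexdigest()
    return 'full/%s.jpg' % image_guid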

           

Finally, register the pipeline and the middleware in the project configuration. The key settings in settings.py:

settings.py

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

DOWNLOADER_MIDDLEWARES = {
   'baidu_crawler.middlewares.BaiduCrawlerDownloaderMiddleware': 543,
}
# Only the custom pipeline is needed: it already subclasses ImagesPipeline,
# so also enabling the stock ImagesPipeline/FilesPipeline would process
# every item a second time.
ITEM_PIPELINES = {
   'baidu_crawler.pipelines.BaiduCrawlerDownloadPipeline': 100,
}
# directory where images are stored
IMAGES_STORE = './data/'
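
With everything wired up, start the spider from the project root with the standard scrapy command (downloaded images end up under IMAGES_STORE):

scrapy crawl image_spider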
           

If a proxy is needed, set it on the request in middlewares.py:

middlewares.py

from scrapy import signals


class BaiduCrawlerDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        current_ip = 'your proxy ip:port'  # placeholder: fill in your proxy address
        request.meta['proxy'] = 'http://' + current_ip
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
           

Summary

1. Send requests from the spider and parse the responses.

2. Extract the image URLs from the parsed responses.

3. Hand the image URLs to the pipeline through the item defined for the project.

4. Subclass ImagesPipeline in the pipeline and override the request and naming methods as needed.

That is all for this article. In fact the final download step inside ImagesPipeline can also be customized by overriding it: just as when using the requests library directly, scrapy downloads an image by opening a new local file and writing the fetched bytes to it in 'wb' mode. The details will be covered in the next article.
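
As a minimal sketch of that idea (plain Python with the requests library, not scrapy's actual code):

import requests


def save_image(url, path):
    # fetch the image bytes and write them to a local file in binary mode
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    with open(path, 'wb') as f:
        f.write(resp.content)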
