
Solving the 302 redirect problem in a Scrapy CrawlSpider for lagou.com

While running the course's lagou.com crawler, the spider keeps logging: DEBUG: Redirecting (302) to <GET https://passport.lagou.com/login/login.html?msg=validation&uStatus=2&clientIp=202.113.176.54> from <GET https://www.lagou.com/jobs/3574552.html>. Searching for this 302 error turns up a blog post (https://blog.csdn.net/qq_26582987/article/details/79703317) whose fix is to attach cookies and headers to every request. Applied to the author's code, that looks like this:

    def start_requests(self):
        # Log in once via Selenium and reuse the cookies it returns.
        self.cookies = selenium_login.login_lagou()
        print(type(self.cookies))
        print(self.headers)
        yield Request(url=self.start_urls[0],
                      cookies=self.cookies,
                      headers=self.headers,
                      callback=self.parse,
                      dont_filter=True)
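
Even with cookies on this first request, the 302 can come back on the job-detail pages that the CrawlSpider rules extract: Scrapy's cookie middleware (COOKIES_ENABLED is on by default) persists the login cookies across requests, but custom headers are not copied onto rule-generated requests. One way to attach both explicitly is the Rule's process_request hook. A minimal sketch, assuming Scrapy >= 2.0 (where the hook also receives the originating response) and a hypothetical attach_login_state method and parse_job callback:

    # from scrapy.linkextractors import LinkExtractor
    # from scrapy.spiders import CrawlSpider, Rule
    rules = (
        Rule(LinkExtractor(allow=r'jobs/\d+\.html'),
             callback='parse_job',
             process_request='attach_login_state',
             follow=True),
    )

    def attach_login_state(self, request, response):
        # Re-attach the saved cookies and browser-like headers to every
        # request this rule generates, not just the initial one.
        return request.replace(cookies=self.cookies, headers=self.headers)

request.replace() returns a copy of the request with the given fields overridden, so the rule's original request is left untouched.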

Implementing the login inside a CrawlSpider has a further problem: the captcha is sometimes complex, and the page auto-redirects before you finish typing it; and if the crawler runs repeatedly, you have to log in again every time. Instead, save the result of a single login to a JSON file and let later runs just read that file. Code attached:

In the login script (the selenium_login module from the snippet above):

import json

if __name__ == "__main__":
    # Log in once and save the cookie dict; indent=4 pretty-prints it
    # (the default indent=None writes one long line).
    with open("cookies.json", "w", encoding='utf-8') as f:
        f.write(json.dumps(login_lagou(), indent=4))
    # Later: read the file back into a dict to check it.
    with open("cookies.json", "r", encoding='utf-8') as f:
        print(json.loads(f.read()))
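
For completeness, login_lagou() itself is never shown in the post. A hypothetical sketch of what it might look like, assuming Selenium with Chrome and a manually solved captcha (the wait time and flow here are assumptions, not the author's code):

    import time

    from selenium import webdriver

    def login_lagou():
        # Open the login page and leave time to enter credentials and
        # solve the captcha by hand before the page redirects.
        driver = webdriver.Chrome()
        driver.get("https://passport.lagou.com/login/login.html")
        time.sleep(30)  # assumed window for the manual login
        # Selenium returns a list of cookie dicts; flatten it into the
        # {name: value} mapping that scrapy.Request(cookies=...) accepts.
        cookies = {c['name']: c['value'] for c in driver.get_cookies()}
        driver.quit()
        return cookies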

In the CrawlSpider:

# at the top of the spider file: import os, import json, import scrapy

def start_requests(self):
    # Read the cookies saved by the login script; cookies.json is
    # expected to sit next to this spider file.
    with open(os.path.join(os.path.dirname(__file__), "cookies.json"), "r", encoding='utf-8') as f:
        self.cookies = json.loads(f.read())

    # Browser-like headers; lagou.com 302s requests that look automated.
    self.myheaders = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Host': 'www.lagou.com',
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36",
    }
    yield scrapy.Request(url=self.start_urls[0], cookies=self.cookies, headers=self.myheaders, callback=self.parse,
                         dont_filter=True)
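
Two settings are worth double-checking alongside this (standard Scrapy settings, not something the original post covers): keep the cookie middleware enabled so the loaded cookies persist across requests, and consider moving the headers into the project-wide defaults so rule-generated requests carry them too. A sketch for settings.py, with assumed values:

    # settings.py
    COOKIES_ENABLED = True  # the default; required for the login cookies to persist
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
    DEFAULT_REQUEST_HEADERS = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.9',
    }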
