
Scraping All the Campus News

Assignment:

1. Get the details of one piece of news from its url: a dict, anews

2. Get the news urls from an index-page url: list.append(dict), alist

3. Generate the urls of all the index pages and collect every article: list.extend(list), allnews

*Each student crawls the 10 index pages starting from the tail digits of their student id

4. Set a reasonable crawl interval:

import time

import random

time.sleep(random.random()*3)

5. Do simple processing with pandas and save the result

Save to a csv or excel file:

newsdf.to_csv(r'F:\duym\爬蟲\gzccnews.csv')

Save to a database:

import sqlite3

with sqlite3.connect('gzccnewsdb.sqlite') as db:

    newsdf.to_sql('gzccnewsdb',db)

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import re
import pandas as pd
import time
import random
import sqlite3


def click(url):
    newsId = re.findall(r'(\d{1,5})', url)[-1]  # the article id is the last run of digits in the url
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    resClick = requests.get(clickUrl)
    # the api replies with a javascript snippet ending in "...('<count>');"
    # (format assumed from this parsing), so strip everything but the number
    newsClick = int(resClick.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return newsClick
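# quick check of click() on one article (assumes the counter api is reachable)
print(click('http://news.gzcc.cn/html/2005/xiaoyuanxinwen_0710/4.html'))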


def newsdt(showinfo):
    newsDate = showinfo.split()[0].split(':')[1]
    newsTime = showinfo.split()[1]
    newsDT = newsDate + ' ' + newsTime
    dt = datetime.strptime(newsDT, '%Y-%m-%d %H:%M:%S')
    return dt
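# quick check of newsdt() on a sample show-info string; the
# "发布时间:YYYY-MM-DD HH:MM:SS 作者:..." layout is assumed from the parsing above
print(newsdt('发布时间:2019-04-01 11:57:11 作者:张三'))  # -> 2019-04-01 11:57:11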


def anews(url):  # get the details of one piece of news from its url: a dict, anews
    newsDetail = {}
    res = requests.get(url)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsDetail['newsTitle'] = soup.select('.show-title')[0].text
    showinfo = soup.select('.show-info')[0].text
    newsDetail['newsDT'] = newsdt(showinfo)
    newsDetail['newsClick'] = click(url)  # use the parameter, not a global
    return newsDetail


def alist(url):  # get the news urls from an index-page url: list.append(dict), alist
    res = requests.get(url)  # use the parameter, not a global
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    newsList = []
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:  # skip non-article list items
            newsUrl = news.select('a')[0]['href']
            newsDesc = news.select('.news-list-description')[0].text
            newsDict = anews(newsUrl)
            newsDict['description'] = newsDesc
            newsList.append(newsDict)
    return newsList


newsUrl = 'http://news.gzcc.cn/html/2005/xiaoyuanxinwen_0710/4.html'
listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
print(alist(listUrl))  # all articles on the index page
print(anews(newsUrl))  # details of a single article
# manual version of alist: walk the index page and print each article
res = requests.get('http://news.gzcc.cn/html/xiaoyuanxinwen/')
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

for news in soup.select('li'):
    if len(news.select('.news-list-title')) > 0:
        newsUrl = news.select('a')[0]['href']
        print(anews(newsUrl))

allnews = []
for i in range(97, 107):  # the 10 index pages starting at my student-id tail
    listUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(i)
    allnews.extend(alist(listUrl))  # extend keeps allnews a flat list of dicts
    time.sleep(random.random() * 3)  # reasonable random crawl interval

print("allnewsLength={}".format(len(allnews)))
print(allnews)

s1 = pd.Series([100, 23, 'bugingcode'])  # quick pandas Series sanity check
print(s1)
newsdf = pd.DataFrame(allnews)  # one row per article dict
print(newsdf)
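# a taste of the "simple processing" asked for in step 5: most-clicked first;
# the column names are the dict keys built in anews above
print(newsdf.sort_values(by='newsClick', ascending=False).head())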

newsdf.to_csv(r'D:\Download\gzcc.csv', encoding='utf_8_sig')  # save as csv; utf_8_sig keeps Chinese readable in Excel
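# read the csv back to verify the save (same assumed path as above)
print(pd.read_csv(r'D:\Download\gzcc.csv', encoding='utf_8_sig').head())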

with sqlite3.connect(r'D:\Download\gzccnewsdb2.sqlite') as db:  # save to a sqlite database
    newsdf.to_sql('gzccnewsdb2', db, if_exists='replace')  # replace the table on re-runs
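# read the table back to confirm the write (a sketch, same assumed path)
with sqlite3.connect(r'D:\Download\gzccnewsdb2.sqlite') as db:
    print(pd.read_sql('select * from gzccnewsdb2', con=db).shape)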

Results

(screenshot: output of the crawl)

The saved file

(screenshot: the saved file)

Result of opening the csv

(screenshot: the csv contents)