Many people like to check Douban reviews before deciding whether to watch a film. Douban is the most authoritative film-rating site in China, and while it has its share of astroturfers and biased reviewers, its TOP 250 chart is solid and worth a look.
Scraping Target
This article scrapes the Douban Movie TOP 250 chart — each film's title, year, leading cast, and rating — and saves the results as an Excel file.
Analysis
Opening the Douban Movie TOP 250 page, we see that the chart lists each film's title, leading cast, release year, and rating.
Inspecting the page source, we find that each film's title sits inside a <div class="hd"> block, its rating inside a <span class="rating_num">, and the remaining details inside a <div class="bd">.
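As a quick sanity check on that structure, here is a minimal sketch. The HTML fragment is a simplified assumption modeled on the selectors used in the code below, not Douban's verbatim markup, and it assumes BeautifulSoup is installed:

```python
import bs4

# Simplified fragment mirroring the structure described above (an assumption,
# not a verbatim copy of Douban's page source).
sample = '''
<div class="item">
  <div class="hd"><a href="#"><span class="title">肖申克的救贖</span></a></div>
  <span class="rating_num">9.7</span>
  <div class="bd"><p>
導演: Frank Darabont
1994 / 美國 / 犯罪 劇情
  </p></div>
</div>
'''

soup = bs4.BeautifulSoup(sample, 'html.parser')
title = soup.find('div', class_='hd').a.span.text     # film title
rating = soup.find('span', class_='rating_num').text  # rating
print(title, rating)  # 肖申克的救贖 9.7
```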
Extracting the First Page
def find_movies(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')

    # Film titles
    movies = []
    targets = soup.find_all("div", class_="hd")
    for each in targets:
        movies.append(each.a.span.text)

    # Ratings
    ranks = []
    targets = soup.find_all("span", class_="rating_num")
    for each in targets:
        ranks.append(each.text)

    # Details (director, cast, year, country, genre)
    messages = []
    targets = soup.find_all("div", class_="bd")
    for each in targets:
        try:
            messages.append(each.p.text.split('\n')[1].strip()
                            + each.p.text.split('\n')[2].strip())
        except (AttributeError, IndexError):
            # Some "bd" blocks on the page are not film entries; skip them
            continue

    result = []
    for i in range(len(movies)):
        result.append([movies[i], ranks[i], messages[i]])
    return result
Paginated Scraping
We need all 250 films, so we have to fetch every page of the chart. First we work out how many pages there are:
def find_depth(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    # The last page number sits just before the "next" link
    depth = soup.find('span', class_='next').previous_sibling.previous_sibling.text
    return int(depth)
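Each page shows 25 films, and the `start` query parameter is the zero-based offset of the first film on that page. The page URLs used later can be sketched as:

```python
# Build one URL per page: start=0, 25, 50, ... (25 films per page).
host = "https://movie.douban.com/top250"
depth = 10  # 250 films / 25 per page; in the real script this comes from find_depth()

urls = [host + '/?start=' + str(25 * i) for i in range(depth)]
print(urls[0])   # https://movie.douban.com/top250/?start=0
print(urls[-1])  # https://movie.douban.com/top250/?start=225
```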
Writing to a File
def save_to_excel(result):
    wb = openpyxl.Workbook()
    ws = wb.active
    ws['A1'] = "電影名稱"
    ws['B1'] = "評分"
    ws['C1'] = "資料"
    for each in result:
        ws.append(each)
    wb.save("豆瓣TOP250電影.xlsx")
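To see what the saved workbook looks like, here is a small sketch (assuming openpyxl is installed, with a made-up one-row result and a demo filename) that writes a sheet the same way and reads it back with load_workbook:

```python
import openpyxl

# Write a header row plus one made-up data row, mirroring save_to_excel.
wb = openpyxl.Workbook()
ws = wb.active
ws['A1'] = "電影名稱"
ws['B1'] = "評分"
ws['C1'] = "資料"
ws.append(["肖申克的救贖", "9.7", "1994 / 美國 / 犯罪 劇情"])
wb.save("demo.xlsx")

# Read it back to confirm the layout.
rows = list(openpyxl.load_workbook("demo.xlsx").active.values)
print(rows[0])     # ('電影名稱', '評分', '資料')
print(rows[1][0])  # 肖申克的救贖
```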
Putting It All Together
import requests
import bs4
import openpyxl


def open_url(url):
    # To route through a proxy, uncomment these lines:
    # proxies = {"http": "127.0.0.1:1080", "https": "127.0.0.1:1080"}
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36'}
    # res = requests.get(url, headers=headers, proxies=proxies)
    res = requests.get(url, headers=headers)
    return res


def find_movies(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')

    # Film titles
    movies = []
    targets = soup.find_all("div", class_="hd")
    for each in targets:
        movies.append(each.a.span.text)

    # Ratings
    ranks = []
    targets = soup.find_all("span", class_="rating_num")
    for each in targets:
        ranks.append(each.text)

    # Details (director, cast, year, country, genre)
    messages = []
    targets = soup.find_all("div", class_="bd")
    for each in targets:
        try:
            messages.append(each.p.text.split('\n')[1].strip()
                            + each.p.text.split('\n')[2].strip())
        except (AttributeError, IndexError):
            # Some "bd" blocks on the page are not film entries; skip them
            continue

    result = []
    for i in range(len(movies)):
        result.append([movies[i], ranks[i], messages[i]])
    return result


# Find how many pages there are in total
def find_depth(res):
    soup = bs4.BeautifulSoup(res.text, 'html.parser')
    depth = soup.find('span', class_='next').previous_sibling.previous_sibling.text
    return int(depth)


def save_to_excel(result):
    wb = openpyxl.Workbook()
    ws = wb.active
    ws['A1'] = "電影名稱"
    ws['B1'] = "評分"
    ws['C1'] = "資料"
    for each in result:
        ws.append(each)
    wb.save("豆瓣TOP250電影.xlsx")


def main():
    host = "https://movie.douban.com/top250"
    res = open_url(host)
    depth = find_depth(res)

    result = []
    for i in range(depth):
        url = host + '/?start=' + str(25 * i)
        res = open_url(url)
        result.extend(find_movies(res))

    save_to_excel(result)


if __name__ == '__main__':
    main()
Freebies and Next Post Preview
DM me the keyword "python" to get a full set of Python learning materials. In the next post I will demonstrate how to scrape Baidu Wenku VIP articles, as well as image galleries (I have already saved several GB of pictures).