A hands-on requests project – scraping the Maoyan movie chart
- January 21, 2020
- Notes
Target URL: https://maoyan.com/board/4?offset=0
Extract the movie titles, starring actors, release dates, ratings, and poster images of Maoyan's TOP 100 movies, and save the extracted results as plain text.
Environment: install the requests library and lxml (for XPath parsing)
pip3 install requests
pip3 install lxml
Crawl analysis:
offset is the page offset. There are 10 pages with 10 movies per page; offset=90 is the last page, and adding 10 to offset each time yields the next page's URL.
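The pagination rule above can be sketched directly: stepping offset from 0 to 90 in increments of 10 produces all ten page URLs.

```python
# Sketch: build the 10 page URLs for the TOP 100 chart.
# offset goes 0, 10, 20, ..., 90 - one value per page of 10 movies.
BASE = 'https://maoyan.com/board/4?offset={}'

urls = [BASE.format(offset) for offset in range(0, 100, 10)]

print(urls[0])   # first page  (offset=0)
print(urls[-1])  # last page   (offset=90)
```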

Extracting content with XPath:
Get all movie titles on a page:
//p[@class='name']/a/text()

Get all starring actors on a page:
//p[@class='star']/text()

Get all release dates on a page:
//p[@class='releasetime']/text()

Get all movie ratings on a page:
//p[@class='score']/i/text()

Get all poster image URLs on a page:
//img[@class='board-img']/@src
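The expressions above can be tried out locally before touching the live site. The fragment below is a hand-made sample mimicking one `<dd>` entry of the board page (the exact markup is an assumption based on the XPath expressions); it also shows why normalize-space() is useful for stripping surrounding whitespace.

```python
from lxml import etree

# Hand-made fragment imitating one movie entry on the Maoyan board page
# (assumed markup, for illustration only).
html = '''
<dl class="board-wrapper">
  <dd>
    <a href="#"><img class="board-img" src="poster.jpg"/></a>
    <p class="name"><a> 霸王别姬 </a></p>
    <p class="star">
      主演:张国荣,张丰毅,巩俐
    </p>
    <p class="releasetime">上映时间:1993-01-01</p>
    <p class="score"><i class="integer">9.</i><i class="fraction">5</i></p>
  </dd>
</dl>
'''

tree = etree.HTML(html)
# Raw text() keeps leading/trailing whitespace; normalize-space() trims it.
print(tree.xpath("//p[@class='name']/a/text()"))
print(tree.xpath("normalize-space(//p[@class='star']/text())"))
# The rating is split across two <i> tags (integer part + fraction part).
print(tree.xpath("//p[@class='score']/i/text()"))
print(tree.xpath("//img[@class='board-img']/@src"))
```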

Complete code:
```python
#!/usr/bin/env python
# coding: utf-8
import requests
from lxml import etree
import time
import json


class Item:
    movie_name = None       # movie title
    to_star = None          # starring actors
    release_time = None     # release date
    score = None            # rating
    picture_address = None  # poster image URL


class GetMaoYan:
    def get_html(self, url):
        try:
            headers = {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                              'AppleWebKit/537.36 (KHTML, like Gecko) '
                              'Chrome/73.0.3683.103 Safari/537.36'
            }
            response = requests.get(url=url, headers=headers)
            if response.status_code == 200:
                return response.text
            return None
        except Exception:
            return None

    def get_content(self, html):
        items = []
        # normalize-space strips whitespace and newlines around the text
        content = etree.HTML(html)
        all_list = content.xpath("//dl[@class='board-wrapper']/dd")
        for i in all_list:
            item = Item()
            item.movie_name = i.xpath("normalize-space(.//p[@class='name']/a/text())")
            item.to_star = i.xpath("normalize-space(.//p[@class='star']/text())")
            item.release_time = i.xpath("normalize-space(.//p[@class='releasetime']/text())")
            # the rating is split into an integer part and a fraction part
            x, y = i.xpath(".//p[@class='score']/i/text()")
            item.score = x + y
            item.picture_address = i.xpath("normalize-space(./a/img[@class='board-img']/@data-src)")
            items.append(item)
        return items

    def write_to_txt(self, items):
        content_dict = {
            'movie_name': None,
            'to_star': None,
            'release_time': None,
            'score': None,
            'picture_address': None
        }
        with open('result.txt', 'a', encoding='utf-8') as f:
            for item in items:
                content_dict['movie_name'] = item.movie_name
                content_dict['to_star'] = item.to_star
                content_dict['release_time'] = item.release_time
                content_dict['score'] = item.score
                content_dict['picture_address'] = item.picture_address
                print(content_dict)
                f.write(json.dumps(content_dict, ensure_ascii=False) + '\n')

    def main(self, offset):
        url = 'https://maoyan.com/board/4?offset=' + str(offset)
        html = self.get_html(url)
        items = self.get_content(html)
        self.write_to_txt(items)


if __name__ == '__main__':
    st = GetMaoYan()
    for i in range(10):
        st.main(offset=i * 10)
        time.sleep(1)
```
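write_to_txt serializes each movie as one JSON object per line, with ensure_ascii=False so Chinese characters are written as-is rather than as \uXXXX escapes. A minimal sketch of the resulting line format (the field values here are made up for illustration):

```python
import json

# Hypothetical record showing the one-JSON-object-per-line format
# that write_to_txt appends to result.txt (values are made up).
record = {
    'movie_name': '霸王别姬',
    'to_star': '主演:张国荣,张丰毅,巩俐',
    'release_time': '上映时间:1993-01-01',
    'score': '9.5',
    'picture_address': 'https://example.com/poster.jpg',
}

# ensure_ascii=False keeps the Chinese text readable in the output file.
line = json.dumps(record, ensure_ascii=False) + '\n'
print(line, end='')
```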
Run output:

Text output:
