Let's Browse Books on Douban!

  • October 6, 2019
  • Notes

Douban's book catalog has always been fairly comprehensive, and recently some friends wanted to look up IT-related books there. So, no sooner said than done: Douban, here we come!

First, let's look at the site we are going to scrape:

https://www.douban.com

Here are the computer-related books:

And the books related to deep learning:

OK, enough talk. Let's get started!

Preparation: here are the packages we need to import (if any are missing, just pip install them!):

import importlib
import sys
import time
import urllib.parse    # importing the submodules explicitly; plain `import urllib`
import urllib.request  # does not expose urllib.request in Python 3
import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

We use urllib here rather than requests because, with the requests package, your IP gets banned more easily.
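
For comparison, a requests-based fetch would look roughly like this (a minimal sketch with a single hypothetical User-Agent; it mirrors the commented-out alternative in the full code at the end):

import requests  # pip install requests

# Sketch of the requests variant this post avoids; the URL is the
# machine-learning tag page shown later in the post.
url = 'https://www.douban.com/tag/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/book?start=0'
resp = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})  # hypothetical header
plain_text = resp.text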

First, one important piece of preparation: gather several headers. Where do headers come from?

Open the browser's developer tools, switch to the Network tab, and select one of the requests:

Four steps; let's take them one at a time:

We need multiple User-Agent strings to defeat the anti-scraping checks, so we put them all into a header list:

hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
       {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
       {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]

Now let's start fetching the book information:

One note: for each page we crawl, we sleep for a random interval to keep the anti-scraping measures at bay.

Let's first take a look at the URL:

https://www.douban.com/tag/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0/book?start=0

The fixed parts of the URL are https://www.douban.com/tag/ and /book?start=; the URL-encoded tag goes between them, and the number after start= is the offset of the first book on the page (15 books per page).
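
To see the encoding at work, here is a quick check (a minimal sketch; the tag is the Simplified-Chinese string percent-encoded in the URL above):

from urllib import parse

tag = '机器学习'  # the tag embedded in the URL above
print(parse.quote(tag))  # -> %E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0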

Now let's assemble the URL:

url = ('http://www.douban.com/tag/'
       + urllib.parse.quote(book_tag)
       + '/book?start=' + str(page_num * 15))
print(url)

Then we go fetch the data from the site:

# Random sleep to dodge anti-scraping checks
time.sleep(np.random.rand() * 3)
req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
source_code = urllib.request.urlopen(req).read()
plain_text = str(source_code)

Once we have the data, we use bs4 to extract what we need. Note the retry logic: if a page comes back without the book-list div, we retry it, giving up after 200 consecutive failures or once the list is exhausted:

soup = BeautifulSoup(plain_text, features="lxml")
list_soup = soup.find('div', {'class': 'mod book-list'})

try_times += 1
if list_soup is None and try_times < 200:
    continue
elif list_soup is None or len(list_soup) <= 1:
    break
# Walk the matched set and pull out the details
for book_info in list_soup.findAll('dd'):
    title = book_info.find('a', {'class': 'title'}).string.strip()
    desc = book_info.find('div', {'class': 'desc'}).string.strip()
    desc_list = desc.split('/')
    book_url = book_info.find('a', {'class': 'title'}).get('href')

    try:
        author_info = '作者/譯者: ' + '/'.join(desc_list[0:-3])
        pub_info = '出版資訊: ' + '/'.join(desc_list[-3:])
        rating = book_info.find('span', {'class': 'rating_nums'}).string.strip()
        people_num = get_num(book_url)
        people_num = people_num.strip('人評價')
    except:
        author_info = '作者/譯者: 暫無'
        pub_info = '出版資訊: 暫無'
        rating = '0.0'
        people_num = '0'
        print('detail info has some error!')

    book_list.append([title, rating, people_num, author_info, pub_info])
    try_times = 0
page_num += 1
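
To make the desc parsing concrete: the desc line of a listing entry has the form author / translator / publisher / year / price, so splitting on '/' and taking the last three fields yields the publication info. A small check with a made-up desc string in that format:

# Hypothetical desc string in the listing-page format
desc = 'Ian Goodfellow / 趙申劍 / 人民郵電出版社 / 2017-8 / 168.00元'
desc_list = desc.split('/')
print('作者/譯者: ' + '/'.join(desc_list[0:-3]))  # author/translator fields
print('出版資訊: ' + '/'.join(desc_list[-3:]))    # publisher / year / price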

We also fetch the number of ratings from each book's page (if you don't want this field, just comment out people_num):

try:
    req = urllib.request.Request(url,
            headers=hds[np.random.randint(0, len(hds))])
    source_code = urllib.request.urlopen(req).read()
    plain_text = str(source_code)
except:
    print('http error!')
soup = BeautifulSoup(plain_text, features="lxml")
people_num = soup.find('div',
                       {'class': 'rating_sum'}).findAll(
                        'span')[1].string.strip()
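
One subtlety worth a quick check: the scraped count looks like '12345人評價', and str.strip('人評價') treats its argument as a set of characters to peel from both ends, which here leaves just the digits:

# strip() takes a character set, not a suffix; the trailing
# '人評價' characters are removed, the leading digits stay
print('12345人評價'.strip('人評價'))  # -> 12345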

Now fetch all the books for a given list of tags:

book_lists = []
book_tag_lists = ['電腦',
                  '機器學習',
                  'linux',
                  'android',
                  '資料庫',
                  '互聯網']
for book_tag in book_tag_lists:
    book_list = book_info(book_tag)
    # Sort numerically by rating; a plain string sort would rank '9.5' above '10.0'
    book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)
    book_lists.append(book_list)

The last step: save the book information we collected to an Excel file:

wb = Workbook(write_only=True)  # current openpyxl spells the old optimized_write as write_only
ws = []
for i in range(len(book_tag_lists)):
    ws.append(wb.create_sheet(title=book_tag_lists[i]))  # Python 3 strings are already Unicode
for i in range(len(book_tag_lists)):
    ws[i].append(['序號', '書名', '評分', '評價人數', '作者', '出版社'])
    count = 1
    for bl in book_lists[i]:
        ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
        count += 1
save_path = 'book_list'
for i in range(len(book_tag_lists)):
    save_path += ('-' + book_tag_lists[i])
save_path += '.xlsx'
wb.save(save_path)

And with that we're done. Let's check the result:

Opening the .xlsx file:

OK, everything came through perfectly.
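
You can also spot-check the output from Python (a minimal sketch; the file name matches the save_path that the code above builds for these six tags):

from openpyxl import load_workbook

# File name produced by save_path for the six tags used in this post
wb = load_workbook('book_list-電腦-機器學習-linux-android-資料庫-互聯網.xlsx')
for name in wb.sheetnames:
    print(name, wb[name].max_row - 1, 'books')  # subtract the header row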

Here is the complete code; you can also get it via the "Read original" link.

import importlib
import sys
import time
import urllib.parse    # importing the submodules explicitly; plain `import urllib`
import urllib.request  # does not expose urllib.request in Python 3
import numpy as np
from bs4 import BeautifulSoup
from openpyxl import Workbook

importlib.reload(sys)  # legacy from the Python 2 version; harmless in Python 3

# Several User-Agents to rotate through, against anti-scraping checks
hds = [{'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'},
       {'User-Agent': 'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11'},
       {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}]


def book_info(book_tag):
    page_num = 0
    book_list = []
    try_times = 0
    while True:
        # Assemble the URL: fixed prefix + URL-encoded tag + page offset
        url = ('http://www.douban.com/tag/'
               + urllib.parse.quote(book_tag)
               + '/book?start=' + str(page_num * 15))
        print(url)
        # Random sleep to dodge anti-scraping checks
        time.sleep(np.random.rand() * 3)
        req = urllib.request.Request(url, headers=hds[page_num % len(hds)])
        source_code = urllib.request.urlopen(req).read()
        plain_text = str(source_code)

        # With the requests package the IP gets banned more easily:
        # source_code = requests.get(url)
        # plain_text = source_code.text

        # Build the bs4 object
        soup = BeautifulSoup(plain_text, features="lxml")
        list_soup = soup.find('div', {'class': 'mod book-list'})

        try_times += 1
        if list_soup is None and try_times < 200:
            continue
        elif list_soup is None or len(list_soup) <= 1:
            break
        # Walk the matched set and pull out the details
        for book_info in list_soup.findAll('dd'):
            title = book_info.find('a', {'class': 'title'}).string.strip()
            desc = book_info.find('div', {'class': 'desc'}).string.strip()
            desc_list = desc.split('/')
            book_url = book_info.find('a', {'class': 'title'}).get('href')

            try:
                author_info = '作者/譯者: ' + '/'.join(desc_list[0:-3])
                pub_info = '出版資訊: ' + '/'.join(desc_list[-3:])
                rating = book_info.find('span', {'class': 'rating_nums'}).string.strip()
                people_num = get_num(book_url)
                people_num = people_num.strip('人評價')
            except:
                author_info = '作者/譯者: 暫無'
                pub_info = '出版資訊: 暫無'
                rating = '0.0'
                people_num = '0'
                print('detail info has some error!')

            book_list.append([title, rating, people_num, author_info, pub_info])
            try_times = 0
        page_num += 1
        print('Downloading Information From Page %d' % page_num)
    return book_list


def get_num(url):
    try:
        req = urllib.request.Request(url, headers=hds[np.random.randint(0, len(hds))])
        source_code = urllib.request.urlopen(req).read()
        plain_text = str(source_code)
    except:
        print('http error!')
    soup = BeautifulSoup(plain_text, features="lxml")
    people_num = soup.find('div',
                           {'class': 'rating_sum'}).findAll(
                            'span')[1].string.strip()
    return people_num


def get_books(book_tag_lists):
    book_lists = []
    for book_tag in book_tag_lists:
        book_list = book_info(book_tag)
        # Sort numerically by rating; a plain string sort would rank '9.5' above '10.0'
        book_list = sorted(book_list, key=lambda x: float(x[1]), reverse=True)
        book_lists.append(book_list)
    return book_lists


def print_book_lists_excel(book_lists, book_tag_lists):
    wb = Workbook(write_only=True)  # current openpyxl spells optimized_write as write_only
    ws = []
    for i in range(len(book_tag_lists)):
        ws.append(wb.create_sheet(title=book_tag_lists[i]))  # Python 3 strings are already Unicode
    for i in range(len(book_tag_lists)):
        ws[i].append(['序號', '書名', '評分', '評價人數', '作者', '出版社'])
        count = 1
        for bl in book_lists[i]:
            ws[i].append([count, bl[0], float(bl[1]), int(bl[2]), bl[3], bl[4]])
            count += 1
    save_path = 'book_list'
    for i in range(len(book_tag_lists)):
        save_path += ('-' + book_tag_lists[i])
    save_path += '.xlsx'
    wb.save(save_path)


if __name__ == '__main__':
    book_tag_lists = ['電腦', '機器學習', 'linux', 'android', '資料庫', '互聯網']
    book_lists = get_books(book_tag_lists)
    print_book_lists_excel(book_lists, book_tag_lists)

Code address:

https://www.bytelang.com/o/s/c/7QXO_UAlsLU=