Use Anaconda3-2018.12-Windows-x86_64.exe
Download link: [https://pan.baidu.com/s/1Are0SRyxZJ4LfsJl8dKMwA]
Extraction code: 5eq3

Import the libraries

import os
import requests
from bs4 import BeautifulSoup
Target URL

url = 'http://www.biquw.com/book/94/'

Fetch the page

r = requests.get(url)
html = r.text
Advanced crawler: scraping a novel

Extract the content we need, parsing the page with lxml:

soup = BeautifulSoup(html, 'lxml')

Find the ul tag on the page, which holds the chapter list:

data_list = soup.find('ul')
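To see what find('ul') and find_all('a') return, here is a minimal, self-contained sketch run against a hypothetical HTML snippet shaped like the site's chapter list (it uses the built-in 'html.parser' backend so it also works where lxml is not installed):

```python
from bs4 import BeautifulSoup

# A miniature chapter list in the same shape as the target page (hypothetical HTML).
html = '''
<ul>
  <li><a href="1.html">Chapter 1</a></li>
  <li><a href="2.html">Chapter 2</a></li>
</ul>
'''
soup = BeautifulSoup(html, 'html.parser')
data_list = soup.find('ul')                 # the first <ul> on the page
links = [(a.text, a['href']) for a in data_list.find_all('a')]
print(links)  # [('Chapter 1', '1.html'), ('Chapter 2', '2.html')]
```

Each `a` tag yields the chapter title via `.text` and the relative link via `['href']`, which is exactly what the download loop below relies on.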

Loop over the captured links and download each chapter:

for book_data in data_list.find_all('a'):
    url = 'http://www.biquw.com/book/94/' + book_data['href']
    detail_html = requests.get(url).text
    soup = BeautifulSoup(detail_html, 'lxml')
    book_content = soup.find('div', {'id': 'htmlContent'}).text
    if not os.path.exists('小说'):
        os.mkdir('小说')
    with open('小说/' + book_data.text + '.txt', 'a', encoding='UTF-8') as f:
        f.write(book_content)
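One caveat: the chapter title goes straight into the file name, and on Windows characters such as `:` or `?` in a title would make open() fail. A small helper (a hypothetical addition, not part of the original script) can sanitize names before writing:

```python
import re

def safe_filename(name: str) -> str:
    # Replace characters that Windows forbids in file names with underscores.
    return re.sub(r'[\\/:*?"<>|]', '_', name).strip()

print(safe_filename('Chapter 1: Who am I?'))  # Chapter 1_ Who am I_
```

In the loop above you would then write to '小说/' + safe_filename(book_data.text) + '.txt'.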

Screenshot of the execution result:

(screenshot: advanced crawler scraping the novel)

Complete code:

import os
import requests
from bs4 import BeautifulSoup
# 1. Send the request

url = 'http://www.biquw.com/book/94/'
r = requests.get(url)
html = r.text

# 2. Parse and extract the data
soup = BeautifulSoup(html, 'lxml')
data_list = soup.find('ul')
for book_data in data_list.find_all('a'):
    url = 'http://www.biquw.com/book/94/' + book_data['href']
    detail_html = requests.get(url).text
    soup = BeautifulSoup(detail_html, 'lxml')
    book_content = soup.find('div', {'id': 'htmlContent'}).text
    if not os.path.exists('小说'):
        os.mkdir('小说')
    with open('小说/' + book_data.text + '.txt', 'a', encoding='UTF-8') as f:
        f.write(book_content)
