I've been working on some crawlers recently, and certain sites require request headers. Copying a site's headers one by one into a dict by hand is tedious. Then I noticed that Chrome can generate something called a cURL command, which already contains the page's headers, so I wrote a function that extracts them automatically and returns them as a dict.

First, how to obtain a page's cURL command:

[Image: crawler — the page headers generated by Chrome]
Following those steps produces something like this:

curl "https://www.baidu.com/" -H "Connection: keep-alive" -H "Cache-Control: max-age=0" -H "Upgrade-Insecure-Requests: 1" -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" -H "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8" -H "Accept-Encoding: gzip, deflate, br" -H "Accept-Language: zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7" -H "Cookie: BIDUPSID=69190AC2F9371D83695F00687C3AB96D; PSTM=1537321028; BD_UPN=12314753; BAIDUID=F6C3199E0D95B688A7B4A126DB21CF65:FG=1; __cfduid=d980a4ba128a4742f240b9a792da30f1e1540616515; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; MCITY=-125^%^3A; BDRCVFR^[PaHiFN6tims^]=9xWipS8B-FspA7EnHc1QhPEUf; delPer=0; BD_CK_SAM=1; PSINO=6; BD_HOME=0; H_PS_645EC=4e59hoONzQuutKCtXCPV6g0wJR542dDzWVAf8xeRV9zRC1YqNm24lgBZVnaFvrG3DTCl7KqX; BDRCVFR^[PGnakqNNAQT^]=9xWipS8B-FspA7EnHc1QhPEUf; H_PS_PSSID=1458_21120_27401_27376_26350" --compressed
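Each header in the command above lives in a `-H "Name: Value"` pair. As a quick aside, a quoting-aware way to pull those pairs out is the standard-library shlex module, which tokenizes the command the way a shell would, so colons and spaces inside quoted values survive intact. This is a sketch of that alternative (the function name `parse_curl_headers` and the sample string are my own, not from the article):

```python
import shlex

def parse_curl_headers(curl_cmd):
    # Tokenize with shell quoting rules, so "User-Agent: Mozilla/5.0 ..."
    # comes back as one token despite its internal spaces and colons.
    tokens = shlex.split(curl_cmd)
    headers = {}
    for i, tok in enumerate(tokens):
        if tok == '-H' and i + 1 < len(tokens):
            # Split "Name: Value" at the first colon only.
            name, _, value = tokens[i + 1].partition(':')
            headers[name.strip()] = value.strip()
    return headers

sample = 'curl "https://example.com/" -H "Connection: keep-alive" -H "User-Agent: Mozilla/5.0"'
print(parse_curl_headers(sample))
```

The regex-free approach sidesteps having to strip the quotes first, at the cost of assuming the command is valid shell syntax.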

Here is the processing code:

import re
import json

def curl_to_headers(curl):
    # Drop the trailing --compressed flag and all double quotes,
    # then split on the -H flags; the first chunk (curl + URL) is skipped.
    curl = re.sub('--compressed', '', curl)
    curl = re.sub('"', '', curl)
    parts = curl.split('-H')
    headers = {}
    for part in parts[1:]:
        # Split each "Name: Value" pair at the first colon only, so colons
        # inside the value (e.g. in the Cookie header) are preserved.
        s = re.match('(.*?):(.*)', part.strip())
        if s:
            headers[s.group(1).strip()] = s.group(2).strip()

    print(json.dumps(headers, indent=4))
    return headers


if __name__ == '__main__':
    curl = input('Paste the cURL command to convert: ')
    print('\n', '*' * 50, '\n')
    curl_to_headers(curl)

You can run this file directly to generate the headers, or import it and call the function from other code.
The generated headers look like this:

{
    "Connection": "keep-alive",
    "Cache-Control": "max-age=0",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7",
    "Cookie": "BIDUPSID=69190AC2F9371D83695F00687C3AB96D; PSTM=1537321028; BD_UPN=12314753; BAIDUID=F6C3199E0D95B688A7B4A126DB21CF65:FG=1; __cfduid=d980a4ba128a4742f240b9a792da30f1e1540616515; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; MCITY=-125^%^3A; BDRCVFR^[PaHiFN6tims^]=9xWipS8B-FspA7EnHc1QhPEUf; delPer=0; BD_CK_SAM=1; PSINO=6; BD_HOME=0; H_PS_645EC=4e59hoONzQuutKCtXCPV6g0wJR542dDzWVAf8xeRV9zRC1YqNm24lgBZVnaFvrG3DTCl7KqX; BDRCVFR^[PGnakqNNAQT^]=9xWipS8B-FspA7EnHc1QhPEUf; H_PS_PSSID=1458_21120_27401_27376_26350"
}
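The dict produced above can be passed straight to an HTTP request. A minimal usage sketch with the standard-library urllib (the article itself doesn't name an HTTP library, and the headers below are a trimmed subset of the output above):

```python
import urllib.request

# Subset of the generated headers; in practice you would use the
# full dict returned by curl_to_headers().
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36",
    "Accept-Language": "zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7",
}

# Attach the headers when building the request object.
req = urllib.request.Request("https://www.baidu.com/", headers=headers)

# Fetch the page (commented out to avoid a network round-trip here):
# with urllib.request.urlopen(req) as resp:
#     html = resp.read()
```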
