First, let's check whether a proxy IP actually works:
1. Checking a free proxy
# Free proxy, or a proxy that needs no password
import requests

url = 'http://httpbin.org/get'
proxy = '127.0.0.0:8000'
proxies = {
    'http': 'http://' + proxy,
    'https': 'https://' + proxy,
}
response = requests.get(url, proxies=proxies, verify=False)
print(response.text)
Note: replace proxy here with the IP you want to test.
In the response, "origin": "127.0.0.0" is your proxy IP, which means the proxy works.
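To check this automatically rather than by eye, you can parse httpbin's JSON body and compare the reported origin with your proxy host. A minimal sketch; the helper name is_proxy_effective is my own, not from the original post:

```python
import json

def is_proxy_effective(response_text, proxy_host):
    """Return True if httpbin reports the request came from proxy_host.

    response_text: the JSON body returned by http://httpbin.org/get
    proxy_host: the IP part of the proxy the request was routed through
    (helper name and logic are illustrative, not from the original post)
    """
    data = json.loads(response_text)
    # httpbin may report several comma-separated IPs; accept a match on any
    origins = [ip.strip() for ip in data.get("origin", "").split(",")]
    return proxy_host in origins

# Example with a canned response body instead of a live request:
sample = '{"origin": "127.0.0.0", "url": "http://httpbin.org/get"}'
print(is_proxy_effective(sample, "127.0.0.0"))  # True
```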
2. Checking a paid proxy:
# Test a paid / password-protected proxy
import requests

url = 'http://httpbin.org/get'
proxy_host = '127.0.0.0'
proxy_port = '8000'
proxy_user = 'root'
proxy_pass = 'root'
proxy_meta = 'http://%(user)s:%(pass)s@%(host)s:%(port)s' % {
    'host': proxy_host,
    'port': proxy_port,
    'user': proxy_user,
    'pass': proxy_pass,
}
proxies = {
    'http': proxy_meta,
    'https': proxy_meta,
}
response = requests.get(url, proxies=proxies)
print(response.text)
Replace the IP and account details above with your own (see the sample code your provider, e.g. Abuyun, gives for testing paid proxies).
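If the username or password contains characters such as '@' or ':', embedding them raw in the URL breaks it. One way to guard against that is to URL-encode the credentials; a small sketch (the helper build_proxy_url is mine, not any provider's API):

```python
from urllib.parse import quote

def build_proxy_url(host, port, user=None, password=None):
    """Build an http proxy URL, with optional basic-auth credentials.

    URL-encodes the credentials so special characters (e.g. '@') don't
    break the URL.  Helper name is illustrative, not from the post.
    """
    if user and password:
        return 'http://%s:%s@%s:%s' % (
            quote(user, safe=''), quote(password, safe=''), host, port)
    return 'http://%s:%s' % (host, port)

print(build_proxy_url('127.0.0.0', '8000', 'root', 'root'))
# http://root:root@127.0.0.0:8000
```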
Next, let's use a proxy in the Scrapy framework:
Scrapy offers two ways to do this:
1. Write the proxy directly into the spider code.
2. Use a downloader middleware in middlewares.py.
Below I describe each approach in detail.
First, we need a working proxy IP.
Method 1: attach the proxy to each request through its meta parameter.
In the spider code:
import scrapy

class ProxySpider(scrapy.Spider):
    name = 'proxy'
    allowed_domains = ["httpbin.org"]

    def start_requests(self):
        url = 'http://httpbin.org/get'
        proxy = '127.0.0.0:8000'
        proxies = ""
        if url.startswith("http://"):
            proxies = "http://" + str(proxy)
        elif url.startswith("https://"):
            proxies = "https://" + str(proxy)
        # Note: in meta={'proxy': proxies}, the key must be exactly 'proxy',
        # nothing else works; and the value must be a string, no other type.
        yield scrapy.Request(url, callback=self.parse, meta={'proxy': proxies})

    def parse(self, response):
        print(response.text)
(So many pitfalls here; I nearly cried stepping on them while writing this code.)
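Method 1 also extends naturally to rotating over several proxies: pick one at random per request and prefix the scheme exactly as the if/elif branch above does. A sketch, assuming you have a small pool of verified IPs (the pool contents and the pick_proxy name are illustrative, not from the post):

```python
import random

# Hypothetical pool; replace with proxies you have verified as working
PROXY_POOL = ['127.0.0.0:8000', '127.0.0.1:8001', '127.0.0.2:8002']

def pick_proxy(url, pool=PROXY_POOL):
    """Pick a random proxy and give it the scheme matching the target URL,
    mirroring the scheme check in start_requests above."""
    proxy = random.choice(pool)
    scheme = 'https://' if url.startswith('https://') else 'http://'
    return scheme + proxy

# In the spider you would then yield:
#   scrapy.Request(url, callback=self.parse, meta={'proxy': pick_proxy(url)})
print(pick_proxy('http://httpbin.org/get'))
```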
Method 2: use a downloader middleware.
1. Add the following code to middlewares.py:
# Configure the proxy
class ProxyMiddleware(object):
    def process_request(self, request, spider):
        if request.url.startswith("http://"):
            request.meta['proxy'] = "http://" + '127.0.0.0:8000'   # HTTP proxy
        elif request.url.startswith("https://"):
            request.meta['proxy'] = "https://" + '127.0.0.0:8000'  # HTTPS proxy
2. Add the configuration to settings.py:
# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    # Replace 'biquge' with your own project name (mine scrapes the Biquge novel site)
    'biquge.middlewares.ProxyMiddleware': 100,
}
3. Write the spider code as usual; no modifications or additions are needed:
import scrapy

class ProxySpider(scrapy.Spider):
    name = 'proxy'
    allowed_domains = ["httpbin.org"]
    # start_urls = ['http://httpbin.org/get']

    def start_requests(self):
        url = 'http://httpbin.org/get'
        yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        print(response.text)
Using an IP proxy pool: https://blog.csdn.net/u013421629/article/details/77884245
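Tying the two ideas together, the fixed-IP middleware from step 1 can be extended into a minimal pool-backed one. A sketch only: the class name RandomProxyMiddleware and the hard-coded pool are my own, and a real pool would be loaded from a file or a proxy-pool service like the one linked above.

```python
import random

class RandomProxyMiddleware(object):
    """Like ProxyMiddleware above, but rotates over a small pool."""

    # Hypothetical pool; replace with your own verified proxies
    PROXIES = ['127.0.0.0:8000', '127.0.0.1:8001']

    def process_request(self, request, spider):
        proxy = random.choice(self.PROXIES)
        if request.url.startswith("http://"):
            request.meta['proxy'] = "http://" + proxy
        elif request.url.startswith("https://"):
            request.meta['proxy'] = "https://" + proxy
```

Register it in DOWNLOADER_MIDDLEWARES in settings.py the same way as ProxyMiddleware in step 2.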