How Scrapy request deduplication works
Scrapy ships with a built-in duplicate filter used by the scheduler:
  in the Scrapy source it is implemented in dupefilters.py (the RFPDupeFilter class)
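The filter is pluggable: the scheduler instantiates whatever class the DUPEFILTER_CLASS setting names. A minimal settings.py fragment showing the default (swap in your own subclass to change the behavior):

```python
# settings.py
# The default duplicate filter; replace with a custom subclass
# (e.g. one backed by Redis) to change dedup behavior.
DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'

# Optional: log every filtered duplicate request for debugging.
DUPEFILTER_DEBUG = True
```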
 
The dedup fingerprint algorithm from the source (scrapy/utils/request.py)
# RFPDupeFilter stores each returned fingerprint in a set;
# a repeated fingerprint marks the request as a duplicate

import hashlib
import weakref

from w3lib.url import canonicalize_url
from scrapy.utils.python import to_bytes

# Per-request cache of computed fingerprints, keyed weakly on the request
_fingerprint_cache = weakref.WeakKeyDictionary()


def request_fingerprint(request, include_headers=None):
    if include_headers:
        include_headers = tuple(to_bytes(h.lower())
                                for h in sorted(include_headers))
    cache = _fingerprint_cache.setdefault(request, {})
    if include_headers not in cache:
        # SHA1 over method + canonical URL + body, so that e.g.
        # ?a=1&b=2 and ?b=2&a=1 hash to the same fingerprint
        fp = hashlib.sha1()
        fp.update(to_bytes(request.method))
        fp.update(to_bytes(canonicalize_url(request.url)))
        fp.update(request.body or b'')
        if include_headers:
            for hdr in include_headers:
                if hdr in request.headers:
                    fp.update(hdr)
                    for v in request.headers.getlist(hdr):
                        fp.update(v)
        cache[include_headers] = fp.hexdigest()
    return cache[include_headers]
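The set-based dedup built on top of this fingerprint can be sketched without Scrapy installed. Below is a simplified, self-contained stand-in (no URL canonicalization, no header handling; the names fingerprint and request_seen are illustrative, though request_seen mirrors the real method name on RFPDupeFilter):

```python
import hashlib

def fingerprint(method, url, body=b""):
    """Simplified stand-in for Scrapy's request_fingerprint:
    SHA1 over method + URL + body."""
    fp = hashlib.sha1()
    fp.update(method.encode())
    fp.update(url.encode())
    fp.update(body)
    return fp.hexdigest()

seen = set()  # mirrors RFPDupeFilter's in-memory fingerprint set

def request_seen(method, url, body=b""):
    """Return True if an equivalent request was already scheduled."""
    fp = fingerprint(method, url, body)
    if fp in seen:
        return True   # duplicate -> Scrapy would drop the request
    seen.add(fp)
    return False

print(request_seen("GET", "http://example.com/"))  # False: first time
print(request_seen("GET", "http://example.com/"))  # True: duplicate
```

Because only method, URL, and body feed the hash, two requests that differ solely in headers produce the same fingerprint unless include_headers is passed to the real function.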

 
