Basic Usage of the Requests Library
I. GET
1. The requests library provides a get method; call it with the target URL and you get the page's source code back.
2. A URL can carry query parameters. To add two parameters, say name is germey and age is 25, the URL can be written as: http://httpbin.org/get?name=germey&age=25
3. You can also pass the parameters as a dictionary to the get method's params argument. The response shows that the request URL was constructed automatically as: http://httpbin.org/get?name=germey&age=25
```python
# -*- coding: utf-8 -*-
"""
Created on Fri Apr 17 22:34:59 2020

@author: ZKYAAA
"""

import requests

print('----------------------- Example 1 -----------------')
# Send a GET request
r1 = requests.get("https://home.cnblogs.com/u/ZKYAAA/")
print(r1.text)

print('----------------------- Example 2 -----------------')
# httpbin echoes the request headers, URL, origin IP, etc.
r2 = requests.get("http://httpbin.org/get")
print(r2.text)

print('----------------------- Example 3 -----------------')
# Query parameters can be appended directly to the URL
r3 = requests.get("http://httpbin.org/get?name=germey&age=25")
print(r3.text)

print('----------------------- Example 4 -----------------')
# Pass the parameters as a dict via the params argument;
# the response shows the request URL was built automatically
data = {
    'name': 'germey',
    'age': '25'
}
r4 = requests.get("http://httpbin.org/get", params=data)
print(r4.text)
```
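How params gets folded into the final URL can be checked without any network traffic, using requests' own Request/PreparedRequest machinery (a minimal sketch; the URL and parameters are the ones from the examples above):

```python
import requests

# Build (but do not send) a GET request with query parameters
data = {'name': 'germey', 'age': '25'}
req = requests.Request('GET', 'http://httpbin.org/get', params=data)
prepared = req.prepare()

# The params dict has been url-encoded into the query string
print(prepared.url)  # http://httpbin.org/get?name=germey&age=25
```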
II. JSON
1. The response body is actually of type str, but it is special: it is in JSON format. So if you want to parse the result directly into structured JSON data, you can call the json method.
2. Calling the json method converts a JSON-formatted response string into a dictionary.
3. If the response is not JSON, parsing fails and a json.decoder.JSONDecodeError exception is raised.
```python
# -*- coding: utf-8 -*-
"""
Created on Fri Apr 17 23:33:17 2020

@author: ZKYAAA
"""

import requests
import re

print('----------------------- Example 1 -----------------')
# Call json() to parse a JSON response directly into a dict.
# The blog page below does NOT return JSON, so json() raises
# json.decoder.JSONDecodeError here.
r = requests.get("https://home.cnblogs.com/u/ZKYAAA/")
print(type(r.text))
print(r.json())
print(type(r.json()))

print('----------------------- Example 2 -----------------')
# Extract titles with a regular expression
r = requests.get("https://static1.scrape.cuiqingcai.com/")
pattern = re.compile('<h2.*?>(.*?)</h2>', re.S)
titles = re.findall(pattern, r.text)
print(titles)
```
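The conversion that json() performs is essentially json.loads on the response text, so both the success and failure cases can be sketched offline with the standard json module (the payload string below is hypothetical):

```python
import json

# A JSON-formatted string, like an API such as httpbin would return
payload = '{"name": "germey", "age": 25}'
data = json.loads(payload)  # str -> dict, same conversion as r.json()
print(type(data))           # <class 'dict'>

# A non-JSON body (e.g. an HTML page) raises JSONDecodeError
try:
    json.loads('<html>not json</html>')
except json.decoder.JSONDecodeError as e:
    print('parse failed:', e)
```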
III. Fetching Binary Data
1. Print two attributes of the Response object: text and content. In the output, the first lines are the result of r.text and the last line is r.content. The former appears garbled, while the latter is prefixed with a b, meaning it is bytes data.
2. To save the fetched data, use the open method: its first argument is the file name, and the second argument opens the file in binary mode, so binary data can be written into it.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 00:07:43 2020

@author: ZKYAAA
"""

import requests

print('----------------------- Example 1 -----------------')
# text garbles the binary file; content holds the raw bytes
r = requests.get('https://github.com/favicon.ico')
print(r.text)
print(r.content)

print('----------------------- Example 2 -----------------')
# Open a file in binary-write mode and save the image
r = requests.get('https://github.com/favicon.ico')
with open('favicon.ico', 'wb') as f:
    f.write(r.content)
```
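The bytes-in, bytes-out round trip of 'wb' mode can be illustrated without a network call; the hypothetical byte string below stands in for r.content:

```python
# Stand-in for r.content: raw bytes of a binary file
data = b'\x00\x00\x01\x00fake-icon-bytes'

# 'wb' opens the file for binary writing, so the bytes go in unchanged
with open('favicon_copy.ico', 'wb') as f:
    f.write(data)

# Reading back in binary ('rb') mode returns the identical bytes
with open('favicon_copy.ico', 'rb') as f:
    saved = f.read()
print(saved == data)  # True
```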
IV. Adding Headers
This sets the request headers. If you do not set them, some sites will detect that the request was not issued by a normal browser and may return an abnormal result, causing the scrape to fail.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 11:51:44 2020

@author: ZKYAAA
"""

import requests

headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)'
}
# Send the custom User-Agent along with the request
r = requests.get('https://static1.scrape.cuiqingcai.com/', headers=headers)
print(r.text)
```
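Whether the header actually gets attached can be verified offline by preparing the request instead of sending it (a sketch; the User-Agent string here is an arbitrary made-up value):

```python
import requests

headers = {'User-Agent': 'my-crawler/0.1'}  # hypothetical UA string
req = requests.Request('GET', 'http://httpbin.org/get', headers=headers)
prepared = req.prepare()

# The custom header is now part of the outgoing request
print(prepared.headers['User-Agent'])  # my-crawler/0.1
```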
V. POST Requests
1. Request http://httpbin.org/post; this endpoint checks that the request used the POST method and echoes the request information back.
2. If the call succeeds, the form part of the response contains the submitted data, which proves the POST request was sent successfully.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 12:06:00 2020

@author: ZKYAAA
"""

import requests

data = {'name': 'germey', 'age': '25'}
r = requests.post('http://httpbin.org/post', data=data)
print(r.text)
```
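What requests actually puts on the wire for a form POST can be inspected offline by preparing the request (a sketch using the same data dict as above):

```python
import requests

data = {'name': 'germey', 'age': '25'}
req = requests.Request('POST', 'http://httpbin.org/post', data=data)
prepared = req.prepare()

# The dict is url-encoded into the request body, and the matching
# Content-Type header is set automatically
print(prepared.body)                     # name=germey&age=25
print(prepared.headers['Content-Type'])  # application/x-www-form-urlencoded
```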
VI. The Response
1. The code below prints the status_code attribute to get the status code, the headers attribute for the response headers, the cookies attribute for the Cookies, the url attribute for the URL, and the history attribute for the request history.
2. In the output, the headers and cookies attributes are of type CaseInsensitiveDict and RequestsCookieJar, respectively.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 12:15:32 2020

@author: ZKYAAA
"""

import requests

r = requests.get('https://static1.scrape.cuiqingcai.com/')
print(type(r.status_code), r.status_code)
print(type(r.headers), r.headers)
print(type(r.cookies), r.cookies)
print(type(r.url), r.url)
print(type(r.history), r.history)
```
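The CaseInsensitiveDict type mentioned above is importable from requests.structures, and its defining property, that header names match regardless of case, can be shown directly (a minimal sketch with a hypothetical header value):

```python
from requests.structures import CaseInsensitiveDict

h = CaseInsensitiveDict({'Content-Type': 'text/html; charset=utf-8'})

# Lookups ignore case, just like real HTTP header names
print(h['content-type'])  # text/html; charset=utf-8
print(h['CONTENT-TYPE'])  # text/html; charset=utf-8
```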
3. requests also provides a built-in status-code lookup object, requests.codes.
4. By comparing the returned code against the built-in success code, we can confirm the request got a normal response and print a success message; otherwise the program exits. Here requests.codes.ok resolves to the success code 200, so we never have to hard-code numeric status codes in the program: representing status codes by name is much more readable.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 12:29:30 2020

@author: ZKYAAA
"""

import requests

r = requests.get('https://static1.scrape.cuiqingcai.com/')
if r.status_code != requests.codes.ok:
    exit()
print('Request Successfully')
```
5. The status codes and their corresponding lookup names are listed below.
For example, to test whether the result is a 404, compare against requests.codes.not_found.
```python
# Informational status codes
100: ('continue',),
101: ('switching_protocols',),
102: ('processing',),
103: ('checkpoint',),
122: ('uri_too_long', 'request_uri_too_long'),

# Success status codes
200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\o/', '✓'),
201: ('created',),
202: ('accepted',),
203: ('non_authoritative_info', 'non_authoritative_information'),
204: ('no_content',),
205: ('reset_content', 'reset'),
206: ('partial_content', 'partial'),
207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),
208: ('already_reported',),
226: ('im_used',),

# Redirection status codes
300: ('multiple_choices',),
301: ('moved_permanently', 'moved', '\\o-'),
302: ('found',),
303: ('see_other', 'other'),
304: ('not_modified',),
305: ('use_proxy',),
306: ('switch_proxy',),
307: ('temporary_redirect', 'temporary_moved', 'temporary'),
308: ('permanent_redirect',
      'resume_incomplete', 'resume',),  # These 2 to be removed in 3.0

# Client error status codes
400: ('bad_request', 'bad'),
401: ('unauthorized',),
402: ('payment_required', 'payment'),
403: ('forbidden',),
404: ('not_found', '-o-'),
405: ('method_not_allowed', 'not_allowed'),
406: ('not_acceptable',),
407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),
408: ('request_timeout', 'timeout'),
409: ('conflict',),
410: ('gone',),
411: ('length_required',),
412: ('precondition_failed', 'precondition'),
413: ('request_entity_too_large',),
414: ('request_uri_too_large',),
415: ('unsupported_media_type', 'unsupported_media', 'media_type'),
416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),
417: ('expectation_failed',),
418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),
421: ('misdirected_request',),
422: ('unprocessable_entity', 'unprocessable'),
423: ('locked',),
424: ('failed_dependency', 'dependency'),
425: ('unordered_collection', 'unordered'),
426: ('upgrade_required', 'upgrade'),
428: ('precondition_required', 'precondition'),
429: ('too_many_requests', 'too_many'),
431: ('header_fields_too_large', 'fields_too_large'),
444: ('no_response', 'none'),
449: ('retry_with', 'retry'),
450: ('blocked_by_windows_parental_controls', 'parental_controls'),
451: ('unavailable_for_legal_reasons', 'legal_reasons'),
499: ('client_closed_request',),

# Server error status codes
500: ('internal_server_error', 'server_error', '/o\\', '✗'),
501: ('not_implemented',),
502: ('bad_gateway',),
503: ('service_unavailable', 'unavailable'),
504: ('gateway_timeout',),
505: ('http_version_not_supported', 'http_version'),
506: ('variant_also_negotiates',),
507: ('insufficient_storage',),
509: ('bandwidth_limit_exceeded', 'bandwidth'),
510: ('not_extended',),
511: ('network_authentication_required', 'network_auth', 'network_authentication')
```
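A few of the lookups above, checked directly against requests.codes (each named attribute resolves to its numeric status code):

```python
import requests

# Named constants resolve to their numeric status codes
assert requests.codes.ok == 200
assert requests.codes.not_found == 404  # the 404 check from the text
assert requests.codes.teapot == 418
print('all lookups match')
```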
VII. File Upload
The file to upload (favicon.ico) must be in the same directory as the script.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 12:43:19 2020

@author: ZKYAAA
"""

import requests

files = {'file': open('favicon.ico', 'rb')}
r = requests.post('http://httpbin.org/post', files=files)
print(r.text)
```
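The multipart encoding that the files parameter triggers can be inspected offline by preparing the request; an in-memory BytesIO with made-up bytes stands in for the real favicon.ico:

```python
import io
import requests

# In-memory stand-in for open('favicon.ico', 'rb')
fake_icon = io.BytesIO(b'\x00\x00\x01\x00fake-icon-bytes')
files = {'file': ('favicon.ico', fake_icon, 'image/x-icon')}

req = requests.Request('POST', 'http://httpbin.org/post', files=files)
prepared = req.prepare()

# requests builds a multipart/form-data body with a boundary,
# embedding the field name, filename, and file bytes
print(prepared.headers['Content-Type'].split(';')[0])  # multipart/form-data
```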
VIII. Cookies
1. Reading the cookies attribute gives you the Cookies; note that it is of type RequestsCookieJar. You can then use the items method to turn it into a list of tuples and loop over each Cookie's name and value, traversing and parsing all the Cookies.
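The items-based traversal can be sketched with a hand-built RequestsCookieJar (the cookie names and values below are made up, since no live response is involved):

```python
import requests

jar = requests.cookies.RequestsCookieJar()
jar.set('session_id', 'abc123')  # hypothetical cookies
jar.set('theme', 'dark')

# items() yields (name, value) tuples, enabling the traversal
for name, value in jar.items():
    print(name + '=' + value)
```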
2. You can also use Cookies directly to maintain a login state. Take GitHub as an example: first log in to GitHub, then copy the Cookie value out of the request Headers, set it into your own Headers, and send the request.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 13:01:08 2020

@author: ZKYAAA
"""

import requests

headers = {
    'Cookie': 'tz=Asia%2FShanghai; _octo=GH1.1.618994212.1583485322; _ga=GA1.2.986116530.1583485413; experiment:homepage_signup_flow=eyJ2ZXJzaW9uIjoiMSIsInJvbGxPdXRQbGFjZW1lbnQiOjEwLjc1NTI3OTI2NjkxNjk3OSwic3ViZ3JvdXAiOiJjb250cm9sIiwiY3JlYXRlZEF0IjoiMjAyMC0wMy0wNlQwOTowNTozMi4zNDRaIiwidXBkYXRlZEF0IjoiMjAyMC0wMy0wNlQwOTowNTozMi4zNDRaIn0=; _device_id=d419ca4206f0ba9c652352f41c912f2a; user_session=13tsGVIeg9nHVFx_6UZcdEAxSylfsbCMIbfcKXaA3MpJHas1; __Host-user_session_same_site=13tsGVIeg9nHVFx_6UZcdEAxSylfsbCMIbfcKXaA3MpJHas1; logged_in=yes; dotcom_user=zhukeyu250712; tz=Asia%2FShanghai; has_recent_activity=1; _gat=1; _gh_sess=z6Vme0mvzH4iTa%2BqdNUDg1qFf20KyLIfPIqnG5gSAhFMP1r%2BfPu7AxogyVWgDueOX4pafiG3nkk2Gu8LMmOy5O2S2H1QX2KkHA5PlytfRVBLNUIqO%2Bu%2BOLRjiVZoR7iBL5D2SnOBLkEF%2FomBtxKN0eBLr3ltKMd1krfOiBnulTxFYK0g3wz4kvtWkXnj7uLEdmGxyn6mXCzixSZPHMAMjjqHXPWR1x5inUcF9j4PHsqFF1mRbn7rag5%2B9OEqKUKEv%2BeQ3mEKGW9Dan1HMQJqOae3GwkzcDKVv1rnY8S5GFVBlHztLL5Jxq9vYtaikgn3%2BG3hrP9W7cXXK8OOOTOvNQilQrF5uVbAFbE0KpPoX76KF6ndydgzL5r1qv9iUorh%2FiDg%2Fw8teqVvZPayhbSkUXynAagU%2BNalZXI6zVpn5UlxP0K60EvyeLugIeXXkv7SYju4BAns0CBnCBrhp1d4CPSwWvoh3lMGMCutzQL2EhK8Be%2Fff35Bbb5SPpk5qzLhbmBhgsnEw5n7M2ZfhhKJRf12HMAU5v9BtH4oPYTKR4dSlP%2B%2BRUuMB%2FulaHPC3WwlUL8pquHQMrqBk%2Bjtwv1bZU%2FvbJRfFdKunyin7cyoSDn6JoHi8aTgMBzq5MU4Gi4czFH07td94KEdvQt7dbs35iRsIYucyS4lKhYs8UxCw7t0kqJ%2ByqBRms0EI4poIGGXu8uJ83y1S9tgH9ODJvdQ1McwTdreSp2ms0rX%2BartoXzQerw2ahC9N7nrfNb5aU5ZWOosRUNzCTWat9h%2FC0eJSoluT4SejAx2kpDBmOvIRKXnpDLvPGnm7NoyqAQINYWVXzyFCEZPYWLy2SLr4HH03%2B%2FwFXisMwFelsP7yLsJ4l%2FUCc0xoDtVhCLBgDtieeHCt4nrzG75h2QQT2XDOcYzzvcR79u3fm%2B9slNDpQBHQvebj518Z1kv18iyl6NK%2FXvVjuOvPfKk1a44TME%2F9KaUVtkZXF922hC4L5e3yI8Ebcu6f%2BtxKCITBuNkbPJsWVSQtBae4MGV7zjOtDmTxJ1%2FpR7LpqbiW9KJ%2BRYUWWr2Vk6ZBTTHxE1mUEpllDbUmq%2FfbpURheR7bgN5WDTCL92kqZfkAw0ETU8tgdsybWO5FS4a%2FP8c4rfjJqjsTtuWkXfGq3BEvgzRnQDCB7FQmtCdIlRd2NSoBSt0WVY6SRfQM0hvmi9IlQptA3HDVMkLOnj68EzPLKo4eF6FesZYmS8h1fgaDGrigIvDENiYGDYtVJDaWRcH%2FhtCidKL6cv4QjPCC662JArUkA%2B%2BPv%2FCJg2JaBgAhyCd0Pa38svtFR8b%2BNtlZVZzRiAUlr5ZrxZffBYEUsLNyuzPol8mHDdYWuWkZTqhOoMkvwW4IQJjZMHahwONUbE52FmTOc7tL%2FvQPzQjZIGcci0f%2FmX7sko6CH8l7Svxsm%2FWNIlkPnrURnh6cQ2u5e18W6N20KWqYKidKdCCLEdk8jjp02etoxI5qk6ydAoOT3PDeJYSTPC8QCJXpCkYPhB1cxsaJ6Bhcp1K%2F7rMPZ%2B2Djg8atAbt7OQE0r2ly7D%2BTWc%2B6TwC%2FjjrFz9cbcYuwuU5Tf5Iqn7iBas6O6PpQe7yN3CSRrixcJCemq%2F7G8h3IVc2GmPU6fwPL6UQkQBW2xvjBFtWtAw6IqDLeDNOHg9PktOj2c2cQ%2FNoSHWlPKu6ABUdWAFPgW%2FQdjgbvnHkOK%2BiSkE3qoaNl9g2Pag0RGEKo4hmdVbQZO6kdJIjCYZoeRioq4hPEaLJkwx4AmEMaj2C7oEMBQ%2FXNHWuExwEOYYRyX9iRgWxifLxtPTar38%2FfoE50nSgmBtkOGtFM46SKve1P3HMu7Ur%2BM6%2Bwg59fIstebabSr3AGXXEEC5yiW6PWVp4k1jLkQ8D4vPtBM5Fz%2FTMQpKlO8rFw57I6WcOJW3qwEFFRdTGfnjrd1aojKvfLuuUZd6--24aBYhYf2Fk9pXzk--WRWJzfrd%2FUPsESZCKgMGcQ%3D%3D',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.3',
}
r = requests.get('https://github.com/', headers=headers)
print(r.text)
```
3. You can also set Cookies through the cookies parameter: construct a RequestsCookieJar object, then process the Cookie string copied earlier and assign its entries to the jar.
4. First create a RequestsCookieJar object, then split the copied cookie string with the split method, next use the set method to store each Cookie's key and value, and finally call requests' get method, passing the jar to the cookies parameter.
```python
# -*- coding: utf-8 -*-
"""
Created on Sat Apr 18 13:18:32 2020

@author: ZKYAAA
"""

import requests

cookies = 'tz=Asia%2FShanghai; _octo=GH1.1.618994212.1583485322; _ga=GA1.2.986116530.1583485413; experiment:homepage_signup_flow=eyJ2ZXJzaW9uIjoiMSIsInJvbGxPdXRQbGFjZW1lbnQiOjEwLjc1NTI3OTI2NjkxNjk3OSwic3ViZ3JvdXAiOiJjb250cm9sIiwiY3JlYXRlZEF0IjoiMjAyMC0wMy0wNlQwOTowNTozMi4zNDRaIiwidXBkYXRlZEF0IjoiMjAyMC0wMy0wNlQwOTowNTozMi4zNDRaIn0=; _device_id=d419ca4206f0ba9c652352f41c912f2a; user_session=13tsGVIeg9nHVFx_6UZcdEAxSylfsbCMIbfcKXaA3MpJHas1; __Host-user_session_same_site=13tsGVIeg9nHVFx_6UZcdEAxSylfsbCMIbfcKXaA3MpJHas1; logged_in=yes; dotcom_user=zhukeyu250712; tz=Asia%2FShanghai; has_recent_activity=1; _gh_sess=lsguByvCLbUj%2FuN8NDSF56S9fDwsCf4%2Fg%2FhZkBq6nNR1qPWey1Rc2uzoS%2BLEyKgGb0QoD2TwumohZbM%2BgcW9TQdSkQpNV8Koi7ByqskL5PE3huQr7cN7yIBF5CSoqVow1xN1rLsFGUBGkPXXwEuIgvf9jUGYuNiyPsIAVv2OGo9O%2BJH2e9BiHE2R5A5HCsYH4aHUIOyiHwXjcvVukd4nkw%2Fz3%2Beq8ZDc84RYIq92ieuupIVCjxdxKjhkX3HEkulLZRRwMRd6rF96jzBdVRx%2BthC4vIukaxwjULZkJpWy4aFb0tN4hwcgMEje9RTczjH49Gbnjz5WGrke3oKplubChSDr1KhQmq9mKktovpFZmqXD78Ooe%2FbY6eYPMSeVcSNO%2Fo0dY2fnAFz9mxLiQBjYAYv%2FnJJTgS539K2bu0aXcv7mP0y06y0xNTS02%2F%2B7objIsQZU7z1cqZB6ZLlCshmekYYURWQfJ11StlagVxoq%2BkTdELg%2BR1J5Rp3tqNZ0IFVEdJLTA%2BA7FjwJxEFp%2FsM1zIhtjqqncjUvYGPLF783bCFUtxAzeo6COXVLk1iN1Xsk5J0xievMOWwoRiyn1CJS39wo2tY1WMzXFvdUVGNng%2Fb5Otsz71fRZwee0mJeiUek7BxT9wjdBRumubrIJJcT13mFFhzlrerAW%2FdV4GEobs16AQyalPm6Hx6itdLup%2Ft1ufqO6wRjSoIC4GZR9KifnVctNl6eLpaOc5tXvrwvYS1bbwKdePLx7AcP4DnTHGOTaJ3fLjzOzDVz2DICsj5PlmexiAbNF45kfG0y4FDCMIH5dHtd0uhhW6FL21Z7iqGOb6tUYUxqgRjT3Wq323ffxpEge%2FoEewdUwhkMDH%2F1TAYsvz6S51rDdjX7YbQcBaF5M6XGgmwP92yc9%2BU6pzovNaqc1bFBJ%2F8ndIhxYc7xRblOlgtOgNY9iX5X3wxzviUXE8JsNq7NO1nLjeDIgzKtwtRbwC1cWRffzcpcZIWNzIlWuJUfK%2BlHQ9bYtbLxo%2FDoj0DrZMO9VdW11SJcW4XAUNDF8rl0zYpz8gVMhZVjrMH6Rx2qh6M4ysKsGJ9ORI3Rnjvl8HuiS8BtBf%2F6G7hOsqFgIO2owUknmkvbrp%2FkEAKQh4wkqxDLR5AJnJQQGjKL3IBM%2B6oX0HnRkRjf1wiR6Na2hZ0huk7vzCkqsITcLnvrKErSDJMoPFuVkA2RSEO5Lmn3Bz8BJt%2Be5BJhlH1yTzEF9a%2F6g8niga8IXly6mhY%2B5a7i%2FgC4CZA%2BcK1%2Fc3Sy7lDX681ws0AbnGBwam7YPKZ4f1oaKRnTg4s9kHQfvMqpAh5Y9uRkBxp72XXGDMYT%2BNf5pPd0huq74AwHaOE4u6cV7N%2Fro%2BitcD%2Be9KJgE8eGYurpvsdKMAfSkIOPiWunsj6t8MxJwG5XDUwyi8tQOQRjf5XH72UsJl2LDcF05vXdhZNSJRTsPpLWUYMAxJ7FBjS3KlW7LRIUJB2uhoYXnsd7SuPln2lCZBkLlRxGaKpNgcd6PBHqGMNEgBsORPVR64SWeOIheaHfRkodlKmJnNB%2BS0RwAct%2B8x7mXsImkhx0qUI7d440rL%2FdizTzOYgEYI%2Fi8BJzM1WH1DQwWbqDa9K0q5MW1IIojGMC2qYlBDXZE86eWNIqPCFjLAU9rq4FURPchPdTIgdXgBFJqVUYFzGYPErRmWZxt7ijZGRmzFBzghjsuWxAZXLWplCjLt7aV1zuTh6TsrWGYbIhNslFiZb49o6cZZNKzPUIWoPvzmkg4i9ZJ37SxXF6ZjGg1VwtTuQxBUlpwXtaxiRyDQXdEpNmedGsTZ5S6%2FTe16i58GQF2TvGwwbfNB8wda4kHQFsuC7gXe5Z3gKfyTnPvaVbRlgqsGznsHKz%2BJ3vcNoS%2ByRMEX23KMstPdG1c0AU%2Bbhepy66aSDM2ruaeAngbzNPRBYOspAcIvSB--HWIJoGR6ZcS1G9V2--XF1Eb9LMIV8ztMW4gW6HoQ%3D%3D'
jar = requests.cookies.RequestsCookieJar()
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.3',
}
# Split the copied Cookie string and store each key/value in the jar
for cookie in cookies.split(';'):
    key, value = cookie.strip().split('=', 1)
    jar.set(key, value)
r = requests.get('http://github.com/', cookies=jar, headers=headers)
print(r.text)
```