bainianminguo

1. Using requests.get

import os
import requests

r = requests.get("http://200.20.3.20:8080/job/Compile/job/aaa/496/artifact/bbb.iso")

# Save the response body next to this script
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "bbb.iso"), "wb") as f:
    f.write(r.content)
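If you download several artifacts this way, the write step can be factored into a small helper. This is a sketch, not part of the original code: the name `save_response_content` is invented here, and it assumes you already have the bytes from a successful `r.content`:

```python
import os

def save_response_content(data, filename, directory=None):
    """Write raw response bytes (e.g. r.content) into `directory`
    (default: this script's directory) and return the full path."""
    base = directory or os.path.dirname(os.path.abspath(__file__))
    path = os.path.join(base, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Passing `directory=` overrides the default script-relative location, which makes the helper easier to test.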

 

2. Using urllib2 (Python 2)

import os
import urllib2  # Python 2 only; use urllib.request in Python 3

print "downloading with urllib2"
url = "http://200.21.1.22:8080/job/Compile/job/aaa/496/artifact/bbb.iso"
f = urllib2.urlopen(url)
data = f.read()
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "bbb.iso"), "wb") as f:
    f.write(data)
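urllib2 exists only in Python 2; under Python 3 the same call lives in `urllib.request`. A minimal sketch of the equivalent (the helper name `fetch` is made up here):

```python
from urllib.request import urlopen

def fetch(url):
    """Read the full response body of `url` as bytes
    (Python 3 replacement for urllib2.urlopen(url).read())."""
    with urlopen(url) as resp:
        return resp.read()
```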

 

3. Downloading large files

When downloading a large file this way, Python often raises a MemoryError: open Task Manager and you will see the Python process's memory usage grow steadily until the program crashes:

 

self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
MemoryError

 

We can solve the large-file download problem by streaming the response with the following code:

import requests

r = requests.get(
    url="http://10.242.255.110/K%3A/SSL/Feature_bridge_mode/alpha/20210809/SDP2.1.7.15_B_Build20210809.run",
    stream=True  # defer the body download instead of loading it all into memory
)

# Read and write the body in 1 KB chunks
with open("bbb123", "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)
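The chunked loop above works on any iterable of byte chunks, so it can be wrapped in a reusable helper; the name `write_chunks` is invented here, and in real use you would pass it `r.iter_content(chunk_size=1024)`:

```python
def write_chunks(chunks, path):
    """Write an iterable of byte chunks to `path`, skipping empty
    keep-alive chunks; return the number of bytes written."""
    total = 0
    with open(path, "wb") as f:
        for chunk in chunks:
            if chunk:
                f.write(chunk)
                total += len(chunk)
    return total
```

Because the helper only sees an iterable, it never holds more than one chunk in memory at a time, which is exactly what avoids the MemoryError shown above.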

  

 
