import os
import time

gpu_device = 0  # index of the GPU to watch
cmd = f"CUDA_VISIBLE_DEVICES={gpu_device} python train.py"


def gpu_info(gpu_index):
    """Return the used memory (MiB) of the given GPU, parsed from nvidia-smi."""
    info = os.popen('nvidia-smi | grep %').read().split('\n')[gpu_index].split('|')
    memory = int(info[2].split('/')[0].strip()[:-3])  # drop the trailing "MiB"
    return memory

# print(gpu_info(0))


while True:
    memory = gpu_info(gpu_device)
    if memory < 1000:
        break
    print("waiting | gpu", gpu_device, "used memory is", memory, "MiB")
    time.sleep(30)


os.system(cmd)  # GPU is free enough; launch training


Poll the memory usage of gpu_device every 30 s; as soon as the used memory drops below 1000 MiB, run the cmd program immediately.
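The memory-parsing step above can be illustrated in isolation. The sample line below is a hypothetical row of the kind `nvidia-smi | grep %` prints (actual column values depend on your GPU and driver); the parsing logic is the same `|`-split used in `gpu_info`:

```python
# Hypothetical sample of one "|"-separated row from `nvidia-smi | grep %`.
sample = "| 30%   45C    P8    20W / 250W |    512MiB / 11019MiB |      0%      Default |"

def parse_used_memory(line: str) -> int:
    """Extract the used-memory value (in MiB) from one nvidia-smi row."""
    fields = line.split('|')          # fields[2] holds "    512MiB / 11019MiB "
    used = fields[2].split('/')[0]    # "    512MiB "
    return int(used.strip()[:-3])     # strip whitespace, drop "MiB" -> 512

print(parse_used_memory(sample))  # -> 512
```

Note that this string parsing is tied to the current nvidia-smi table layout; if the output format changes across driver versions, the field indices may need adjusting.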
