RuntimeError: CUDA out of memory. does not necessarily mean the GPU is actually out of memory
RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.70 GiB total capacity; 7.44 GiB already allocated; 87.88 MiB free; 7.71 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Solution:
I hit this error while running the test script. Since the script only loads an already trained model, backpropagation is not needed. So, before the forward pass (or at the point where the error occurs), wrap the code in:
with torch.no_grad():
(note that the subsequent code must be indented under this block)
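A minimal sketch of what this looks like in a test script (the checkpoint path, model, and input shape below are hypothetical placeholders for your own code):

import torch

# Minimal sketch: inference only, so no autograd graph is needed.
# "checkpoint.pth" and the input shape (1, 3, 224, 224) are hypothetical;
# substitute your own trained model and test inputs.
model = torch.load("checkpoint.pth", map_location="cuda")
model.eval()  # disable dropout / batch-norm updates for testing

x = torch.randn(1, 3, 224, 224, device="cuda")

with torch.no_grad():    # forward passes inside this block do not store
    output = model(x)    # intermediate activations, so GPU memory use drops sharply

Without torch.no_grad(), the forward pass builds an autograd graph and keeps intermediate activations alive for a backward pass that never happens; if outputs or losses are accumulated across batches, memory keeps growing until an allocation fails even though the card is far from full.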