Reset GPU memory after CUDA errors

Sometimes a CUDA program crashes during execution before its memory is freed. As a result, the device memory remains occupied.

Here are a few possible solutions:

1.

Try using:

nvidia-smi --gpu-reset
or simply:
nvidia-smi -r
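
On a multi-GPU machine you will usually want to reset one specific device; a minimal sketch, assuming the stuck memory is on GPU 0 and you have root access:

sudo nvidia-smi --gpu-reset -i 0

Note that the reset will be refused while any process still has the device open, in which case see solutions 2 and 3 below.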

2.

Although it should be unnecessary to do this in anything other than exceptional circumstances, the recommended way to do this on Linux hosts is to unload the nvidia driver by doing

sudo rmmod nvidia 

with suitable root privileges and then reloading it with

sudo modprobe nvidia

If the machine is running X11, you will need to stop it manually beforehand and restart it afterwards. The driver initialisation process should eliminate any prior state on the device.
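
Putting this together, a minimal sketch of the whole sequence, assuming a lightdm display manager (substitute gdm, sddm, etc. as appropriate for your distribution):

sudo service lightdm stop     # stop X11 so nothing holds the driver
sudo rmmod nvidia             # unload the driver, discarding any stale device state
sudo modprobe nvidia          # load a fresh driver instance
sudo service lightdm start    # bring the display back up

If rmmod reports that the module is in use, some process (or, on newer drivers, a dependent module such as nvidia_uvm or nvidia_drm) is still holding it; solution 3 shows how to find the offending processes.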


3.

This method works for me:

Check which processes are using your GPU memory with:

sudo fuser -v /dev/nvidia*

 

Your output will look something like this:

                     USER        PID  ACCESS COMMAND
/dev/nvidia0:        root       1256  F...m  Xorg
                     username   2057  F...m  compiz
                     username   2759  F...m  chrome
                     username   2777  F...m  chrome
                     username   20450 F...m  python
                     username   20699 F...m  python

Then kill the PIDs that you no longer need, either from htop or with:

sudo kill -9 PID
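
For example, to clean up the two leftover python processes from the sample output above (double-check the PIDs on your own machine before sending SIGKILL):

sudo kill -9 20450 20699

Running nvidia-smi afterwards should show the corresponding memory released.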

4.

Or simply reboot:

sudo reboot

posted @ 2019-11-28 11:48  Jerry_Jin