Limiting GPU memory usage in a container

As title, how can I limit the usage of GPU memory in a container?

Hi,

You should be able to do this in a similar way as the comment below:

Thanks.
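If the goal is to cap how much GPU memory the PyTorch process inside the container can allocate, one option is PyTorch's own allocator limit rather than a Docker-level limit. Below is a minimal sketch using `torch.cuda.set_per_process_memory_fraction` (a real PyTorch API, available since 1.8); note it only caps PyTorch's caching allocator, not memory allocated by other libraries in the container. The 25% fraction is just an example value:

```python
# Sketch: cap the PyTorch caching allocator to a fraction of GPU memory.
# Assumption: PyTorch >= 1.8; the cap applies per process, per device.
import importlib.util


def cap_gpu_memory(fraction=0.25):
    """Try to cap PyTorch GPU allocations; report what happened."""
    if importlib.util.find_spec("torch") is None:
        return "no-torch"   # PyTorch not installed; sketch only
    import torch
    if not torch.cuda.is_available():
        return "no-cuda"    # no usable CUDA device in this environment
    # Allow this process to allocate at most `fraction` of device 0's memory;
    # allocations beyond the cap raise an out-of-memory error.
    torch.cuda.set_per_process_memory_fraction(fraction, device=0)
    return "capped"


print(cap_gpu_memory())
```

On Jetson this limits only what PyTorch itself allocates; since CPU and GPU share the same physical memory, it does not bound the container's total footprint.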

Hi @AastaLLL ,

The problem is that Docker doesn’t account for the memory used by CUDA and PyTorch. If you run `docker stats` on a PyTorch container, it reports ~100MB of memory usage, but the container actually takes over 3GB to run, most of it used by the GPU.

Hi,

How do you check whether the memory is used by the CPU or the GPU?
Since Jetson is a shared-memory system, the memory might be used for loading the libraries.

Thanks.

Hi, I am using jetson_stats to check the memory usage.
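For reference, jetson-stats also exposes the same numbers programmatically. Below is a hedged sketch assuming the jetson-stats 4.x Python API, which provides a `jtop` class with an `ok()` method and a `memory` property; it falls back gracefully when the package is not installed:

```python
# Sketch: read memory stats via the jetson-stats Python API.
# Assumption: jetson-stats >= 4.x, where `jtop` exposes a `memory` property
# reporting physical RAM usage (the same figure the jtop UI shows).
import importlib.util


def read_jtop_memory():
    """Return jtop's memory stats, or None if jetson-stats is unavailable."""
    if importlib.util.find_spec("jtop") is None:
        return None  # jetson-stats not installed; sketch only
    from jtop import jtop
    with jtop() as jetson:
        if jetson.ok():
            return jetson.memory  # RAM/SWAP usage as jtop reports it
    return None


print(read_jtop_memory())
```

Because this reads physical memory on a shared-memory SoC, the figure includes GPU allocations, unlike the container's cgroup counter.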

Hi,

The output from jtop is the physical memory usage, which contains both CPU and GPU allocations.

Thanks.

Yes, but `docker stats` shows only ~100MB of memory used, while jtop and htop show ~3GB.

Hi,

Yes, because jtop measures physical memory usage, so more than just Docker’s usage is taken into account.
For example, enabling the Ubuntu system also consumes some memory.

Thanks.
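The mismatch above can be sketched concretely: jtop reads physical memory (e.g. from `/proc/meminfo`), which on a shared-memory SoC includes GPU allocations, while `docker stats` reads the container's cgroup memory counter, which does not see GPU mappings. This sketch reproduces the jtop-side number on any Linux system:

```python
# Sketch: compute "used" physical memory the way system tools do.
# On Jetson's shared-memory SoC this figure includes GPU allocations,
# which is why it can be ~3GB while `docker stats` reports ~100MB.
def read_meminfo_kb(field):
    """Read one field (in kB) from /proc/meminfo; None if unavailable."""
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])  # second column is kB
    except OSError:
        pass  # not a Linux system, or /proc not mounted
    return None


total = read_meminfo_kb("MemTotal")
avail = read_meminfo_kb("MemAvailable")
if total is not None and avail is not None:
    print(f"Physical memory in use: ~{(total - avail) // 1024} MB")
else:
    print("/proc/meminfo not available; sketch only")
```

`docker stats`, by contrast, reports the cgroup's `memory.current`-style counter, so CUDA buffers mapped by the driver never appear there.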

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.