Running two or more GPU-using Docker containers concurrently


I am trying to run two Docker containers, each running a neural network application, on a single Jetson Nano (4.9.201-tegra). Both containers use the GPU for computation. I have observed that, almost every time, one or both containers stop responding after executing the first statement that calls any GPU function.

It looks as if the process is silently killed. There is nothing in ‘dmesg’, and around 1 GB of RAM is still free.

Is it impossible for two processes to access the GPU concurrently on the Tegra architecture?
Any help on how to do this will be appreciated.

This topic is more related to Jetson Nano, thus moving to Jetson Nano forum for better support, thanks.


You can launch kernels from two processes, but GPU work from different processes is time-sliced.
Have you tried the same thing without containers to see if the same behavior occurs?


Hi, thanks for your response. I have tried the following in Python:

  1. Using the Python multiprocessing library, the main process creates a child process.

  2. The main process then moves on to load one neural network, while the child process loads another.

In this case, I see the same thing happening.
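One thing worth checking in this setup: on Linux, Python's multiprocessing defaults to fork, and a CUDA context generally does not survive a fork, so forcing the 'spawn' start method before any GPU work is a common workaround. A minimal sketch of the two-step setup above, where load_network is a hypothetical stand-in for the real model-loading code:

```python
import multiprocessing as mp

def load_network(name):
    # Hypothetical stand-in: in the real application this would
    # create a CUDA context and load a neural network on the GPU.
    return f"{name} loaded"

def child_worker(queue):
    # The child loads its own network in a fresh process,
    # i.e. with its own fresh CUDA context.
    queue.put(load_network("network-B"))

if __name__ == "__main__":
    # 'spawn' starts the child from a clean interpreter instead of
    # fork()ing, so it does not inherit the parent's CUDA state.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    child = ctx.Process(target=child_worker, args=(queue,))
    child.start()
    # Meanwhile the main process loads its own network.
    print(load_network("network-A"))
    print(queue.get())
    child.join()
```

If the hang only occurs with the default fork start method, the problem is the inherited CUDA state rather than concurrent GPU access itself.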


Which deep learning inference library do you use?

For TensorRT, please check the sample below.
It includes some CUDA context handling that might be helpful for your issue.
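The usual pattern in such samples is to push a dedicated CUDA context before running inference in each process or thread, and to pop it again afterwards. A sketch of that pattern (FakeCudaContext below is a stub standing in for a real context object such as pycuda's, which is an assumption; on an actual Jetson you would use the real driver API):

```python
from contextlib import contextmanager

class FakeCudaContext:
    """Stub standing in for a real CUDA context (e.g. from pycuda.driver).

    It only records push/pop calls so the pattern can be shown
    without GPU hardware.
    """
    def __init__(self):
        self.active = False

    def push(self):
        # A real context would become current for the calling thread here.
        self.active = True

    def pop(self):
        # A real context would be detached from the calling thread here.
        self.active = False

@contextmanager
def activated(ctx):
    # Make the context current, and always pop it again,
    # even if inference raises an exception.
    ctx.push()
    try:
        yield ctx
    finally:
        ctx.pop()

ctx = FakeCudaContext()
with activated(ctx):
    # ... create the TensorRT execution context and run inference here ...
    pass
```

Keeping each process's context strictly push/pop-balanced like this avoids one process clobbering another's current context, which is a frequent cause of silent hangs.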