How can two containers share the usage of a GPU safely?

I have an application running inside a Docker container that accesses the GPU for computation.

I have been tasked with researching how to run two instances of this container. They have to share the machine's single GPU, and there must be no possibility of data from one container being mixed up with data from the other.

How can I share the GPU between containers like this?

Important note: I have googled this, and the results I found involve Kubernetes or vGPU. My question is about methods that use neither of them.

You can do this with an ordinary container launch, specifying e.g. --gpus device=0 for both containers.

The work will originate from different processes and will therefore have the usual process isolation: each container's CUDA work runs in its own context with its own GPU address space, so one container cannot read the other's data.
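A minimal sketch of what that launch could look like. The image name `my-gpu-app` and the container names are placeholders for illustration; the `--gpus device=0` syntax is the standard Docker flag for selecting GPU index 0. The script only builds and prints the commands (a dry run); drop the `echo` lines to actually launch.

```shell
# Hypothetical image name -- replace with your own.
IMAGE=my-gpu-app

# Both containers target the same physical GPU (index 0).
# Distinct container names keep their logs and lifecycles separate.
CMD1="docker run -d --name worker1 --gpus device=0 $IMAGE"
CMD2="docker run -d --name worker2 --gpus device=0 $IMAGE"

# Dry run: print the commands instead of executing them.
echo "$CMD1"
echo "$CMD2"
```

After launching, `nvidia-smi` on the host should list two separate processes using the GPU, one per container, which is the process-level isolation described above.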


Thank you very much. We will try it this week.