I am trying to prototype a solution that would allow me to detach the GPU from a container so that I can then use CRIU to checkpoint its state. I am a little perplexed as to where to start looking at this. My apologies, since I am not particularly familiar with how PCIe, the OS, and the pass-through feature all interface, so if anyone can lend me some direction, that would be great.
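In case it helps, here is roughly what I am attempting today, as a minimal sketch. The container name, checkpoint directory, and CRIU flags are just placeholders from my experiments, not a known-good recipe; the dump step is where things break down for me, since CRIU cannot handle the open `/dev/nvidia*` device files that the GPU container holds, which is why I want to "detach" the GPU first.

```python
#!/usr/bin/env python3
"""Sketch of the checkpoint attempt (assumed workflow, not a working solution)."""
import subprocess

CONTAINER = "cuda-app"        # hypothetical container name
CHECKPOINT_DIR = "/tmp/ckpt"  # hypothetical CRIU image directory

# Look up the main PID of the running container.
pid = subprocess.check_output(
    ["docker", "inspect", "--format", "{{.State.Pid}}", CONTAINER],
    text=True,
).strip()

# Attempt a CRIU dump of the container's process tree.
# This is the step that fails for me: the task has open file descriptors
# on the NVIDIA character devices (/dev/nvidia*), which CRIU does not
# know how to checkpoint, hence the idea of detaching the GPU beforehand.
subprocess.run(
    ["criu", "dump", "-t", pid,
     "--images-dir", CHECKPOINT_DIR,
     "--shell-job", "--leave-running"],
    check=True,
)
```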