We are using a tool that deploys our application as Docker images.
However, the tool does not support the nvidia-docker runtime, nor is there any plan to add support for it. We would still like to make use of the software components offered by JetPack.
Given this, what is the best way to reproduce the installation procedure of the NVIDIA SDK Manager in a Dockerfile? Is there a script available that we can run? Currently, I am manually installing all the .deb files and L4T drivers, but the DeepStream SDK installation then complains about missing symlinks. It would be great if the script that the SDK Manager uses internally were made available!
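Roughly, my current Dockerfile follows the sketch below. The base image, the layout of the .deb files, and the commented-out symlink are placeholders rather than my exact setup, since the actual packages come out of the SDK Manager download folder:

```dockerfile
# Rough sketch of the manual installation I am doing now.
# Base image and package file names are placeholders.
FROM arm64v8/ubuntu:18.04

# .deb packages downloaded by SDK Manager, copied in from the build context
COPY debs/ /tmp/debs/

# Install the L4T driver packages and JetPack components with apt so that
# dependencies between the .deb files are resolved automatically.
RUN apt-get update && \
    apt-get install -y /tmp/debs/*.deb && \
    rm -rf /tmp/debs /var/lib/apt/lists/*

# DeepStream then complains about missing symlinks that SDK Manager would
# normally create on the device; libfoo.so is a placeholder for those libraries.
# RUN ln -sf /usr/lib/aarch64-linux-gnu/tegra/libfoo.so.1 \
#            /usr/lib/aarch64-linux-gnu/libfoo.so
```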
Thanks for your answer, but I’m afraid it is not applicable to my use case, since our tool, Balena, has no support for the NVIDIA Docker runtime. It takes a Docker image as input and then runs it on edge devices using the standard Docker runtime. For more information, see this forum post, where it is stated that they do not support the NVIDIA runtime:
It’s recommended to check whether the GPU works well in your container first.
If the GPU can be accessed without issue, you will still need an L4T image to enable the environment for Jetson.
Has anyone found a workaround or solution to this? I tried to change the Docker daemon.json file to make nvidia the default Docker runtime, but BalenaOS is based on Yocto, so I'm stuck there. I have already tried building Docker images from the base image nvcr.io/nvidia/l4t-base:r32.2, but I get a segmentation fault when querying the device info in Python.
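For reference, the daemon.json change I was attempting is the usual nvidia-docker configuration for Jetson, i.e. setting nvidia as the default runtime in /etc/docker/daemon.json as below; on BalenaOS I have not found a way to apply it, since the host's Docker configuration is managed by the OS:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```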