We are trying to run accelerated ML code inside a Docker container on a Jetson Orin Nano 8GB Developer Kit.
We followed the installation tutorial at https://docs.nvidia.com/datacenter/cloud-native/container-toolkit but were not able to run the sample workload described there.
The nvidia-smi command is available neither inside the ubuntu container that gets pulled nor on the JetPack host OS. I tried calling tegrastats instead, which works on the host but unfortunately not inside the container.
Can somebody advise us on which container we have to pull to get CUDA access working?
We are running JetPack 6, so this shouldn't be an issue.
The command I used is taken directly from your tutorial:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
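For diagnosis, a check we could run on the host might look like the following. This is a sketch based on the Container Toolkit tutorial linked above; it assumes the toolkit's nvidia-ctk CLI is installed and that Docker uses its default config path.

```shell
# Register the NVIDIA runtime with Docker (nvidia-ctk ships with the
# NVIDIA Container Toolkit) and restart the daemon to pick it up.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# The "nvidia" runtime should now appear in Docker's runtime list.
docker info | grep -i runtimes
```

If "nvidia" does not show up in that list, the failure would be on the host side rather than in the pulled image.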
But why does the JetPack version matter here? From my understanding, nvidia-smi should be provided inside the Docker guest and should not rely on executables from the host system?
As already said, our system runs JetPack 6 but is somehow missing nvidia-smi. I have to check with our Linux developers why it is not available. Either way, it is not clear to me why nvidia-smi has to exist on my host system in order to be accessible inside my Docker container. Could you please elaborate on why this is needed?
Or do we need to run a special version of the ubuntu container to get nvidia-smi?
As far as I can see, there is a container simply called “ubuntu” in your NVIDIA NGC catalog. Could it be that your official JetPack Docker resources link to this image instead of the official ubuntu image on Docker Hub?
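In case it helps to clarify the question: the alternative we are considering instead of plain ubuntu is an L4T/JetPack base image from NGC. The image tag below is an assumption on our side (it would have to match the installed JetPack/L4T release), and since nvidia-smi seems unavailable on Jetson, the sketch checks for the CUDA toolchain instead.

```shell
# Pull a JetPack/L4T container from NGC instead of plain ubuntu
# (r36.x corresponds to JetPack 6; the exact tag is an assumption).
sudo docker pull nvcr.io/nvidia/l4t-jetpack:r36.2.0

# Check that CUDA is visible inside the container; on Jetson we would
# test via nvcc/tegrastats rather than nvidia-smi.
sudo docker run --rm --runtime=nvidia nvcr.io/nvidia/l4t-jetpack:r36.2.0 \
    /usr/local/cuda/bin/nvcc --version
```

Would this be the intended way to get CUDA access inside a container on the Orin Nano, or is the plain ubuntu image from the tutorial supposed to work as well?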