So, NVIDIA's l4t-base Docker image mounts the CUDA, cuDNN, and TensorRT libraries from the Jetson host. This is achieved by the custom nvidia-container-runtime and the CSV files in
lzzii@jtsna:/etc/nvidia-container-runtime/host-files-for-container.d$ ls
cuda.csv cudnn.csv l4t.csv tensorrt.csv
My question is: how can I leverage this in a custom Dockerfile, for my own containers based on Ubuntu?
How do I tell the runtime to (auto)mount these for my custom containers as well?
Please launch the container with --runtime nvidia and it will mount the libraries.
You can also add a new CSV file in /etc/nvidia-container-runtime/host-files-for-container.d to enable access to custom libraries.
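For reference, the CSV files in that directory use a simple `type, path` format, where the type tells the runtime whether the entry is a device node, directory, library, or symlink. A minimal custom file might look like the sketch below — the paths here are illustrative assumptions, not real host files; substitute the libraries you actually want mounted:

```
# /etc/nvidia-container-runtime/host-files-for-container.d/custom.csv
# Each line is "<type>, <host path>"; type is one of: dev, dir, lib, sym.
dir, /opt/my-custom-libs
lib, /usr/lib/aarch64-linux-gnu/libmycustom.so
sym, /usr/lib/aarch64-linux-gnu/libmycustom.so.1
```

The runtime reads every *.csv file in that directory at container start, so a new file takes effect on the next `docker run --runtime nvidia` without restarting anything.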
lzzii@jtsna:~$ sudo docker run -it --rm --runtime nvidia ubuntu
root@9d9dc54ec068:/# cat /usr/local/cuda-10.2
cat: /usr/local/cuda-10.2: No such file or directory
lzzii@jtsna:~$ sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.5.0
root@1f4b758d891d:/# cat /usr/local/cuda-10.2
cat: /usr/local/cuda-10.2: Is a directory
Meaning: with l4t-* images, the directories/files specified in /etc/nvidia-container-runtime/host-files-for-container.d/ get auto-mounted, but for the plain ubuntu image they do not.
What logic decides these automounts, and how can I reproduce it for the "ubuntu:bionic" image, without having to mount everything manually?
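One possible explanation, offered here as an assumption rather than a verified answer: the NVIDIA runtime generally keys its behavior off environment variables baked into the image (notably NVIDIA_VISIBLE_DEVICES, which l4t-base sets but stock ubuntu does not). If that is the trigger on Jetson as well, a plain Ubuntu base could opt in with a Dockerfile like this sketch:

```
# Hypothetical sketch: make a stock ubuntu:bionic image request the
# runtime's automounts by setting the env vars that l4t-base ships with.
FROM ubuntu:bionic

# Ask nvidia-container-runtime to expose the GPU and perform the
# CSV-driven mounts for this container.
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=all
```

You would still need to launch the resulting image with `--runtime nvidia`; the env vars only tell the runtime that this container wants the mounts.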