Hi, I am trying to work with TensorRT inside the l4t-base container, since the documentation says: "CUDA, TensorRT and VisionWorks are ready to use within the l4t-base container as they are made available from the host by the NVIDIA container runtime." However, when I try to compile the TensorRT samples, I get the following error:
fatal error: NvInfer.h: No such file or directory
In addition, when I try to run the /usr/src/tensorrt/bin/trtexec executable, it fails with:
&&&& FAILED TensorRT.trtexec # ./trtexec
Also, nothing under the /usr directory can be modified: every attempt fails with "Read-only file system" or "xxxx file is unwritable", even when the container is started with sudo privileges. I managed to install cuDNN somehow, but the file system of the l4t-base container still appears broken. Is there any workaround?
This is the l4t environment, so you will need to install the package manually.
We are also looking into the permission issue and will share more information with you later.
I have edited the tensorrt.csv file that is used by `--runtime nvidia` as a workaround. However, when I try to install some .deb packages manually, I get the "Read-only file system" error, so I cannot install them that way. I was able to manually install the cuDNN .deb package (which comes with SDK Manager) inside the container, but still no luck with TensorRT and its dependencies.
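For reference, the CSV files under /etc/nvidia-container-runtime/host-files-for-container.d/ list the host files that the runtime mounts into the container, one entry per line in the form `type, path`. A few illustrative lines are shown below; the exact paths and library versions are assumptions and vary by JetPack release:

```
dir, /usr/src/tensorrt
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3
sym, /usr/lib/aarch64-linux-gnu/libnvinfer.so
```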
Specifically, to prevent TensorRT from being mounted from the host into the container, you can try removing this file from the Jetson host filesystem:
/etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv
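A minimal sketch of that workaround as a shell function (the function name is hypothetical): moving the file aside instead of deleting it makes the change easy to undo. On the Jetson host the move requires sudo.

```shell
#!/bin/sh
# disable_csv: hypothetical helper that moves a container-runtime CSV aside
# so `--runtime nvidia` stops mounting the files it lists into the container.
disable_csv() {
    csv="$1"
    if [ -f "$csv" ]; then
        # Rename rather than delete; restore later with:
        #   mv "$csv.disabled" "$csv"
        mv "$csv" "$csv.disabled"
        echo "disabled: $csv"
    else
        echo "not found: $csv"
    fi
}

# On the Jetson host (as root):
# disable_csv /etc/nvidia-container-runtime/host-files-for-container.d/tensorrt.csv
```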
Thanks for the feedback. I didn't get a chance to try your solutions until now. It works if one bypasses those CSV files by removing them from the host filesystem.
If someone wants to bypass everything mounted from the host that causes similar issues, this link might also help: https://elinux.org/Jetson_Zoo#Docker. It covers running the official NVIDIA l4t-base image with the Jetson GPU without the --runtime feature.
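The approach from that link can be sketched as a `docker run` invocation that passes the Tegra device nodes and driver libraries explicitly instead of relying on `--runtime nvidia`. The helper below only prints the command (a dry run); the device nodes, mount paths, and image tag are assumptions that vary by JetPack release, so check them against your own host before running.

```shell
#!/bin/sh
# build_l4t_run_cmd: hypothetical helper that prints a docker run command for
# l4t-base without `--runtime nvidia`, mounting GPU devices and Tegra
# libraries from the host by hand. Paths and image tag are illustrative.
build_l4t_run_cmd() {
    echo "docker run -it --rm" \
         "--device /dev/nvhost-ctrl --device /dev/nvhost-ctrl-gpu --device /dev/nvmap" \
         "-v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra" \
         "nvcr.io/nvidia/l4t-base:r32.4.3"
}

build_l4t_run_cmd   # prints the command; run it manually on the Jetson host
```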