./imagenet: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.7: file too short

Hi,
I am trying to achieve a simple task: create two Docker containers.
a) Container 1: captures images from the camera and handles the GPIOs that drive the mechanical parts of a robot I am building.
b) Container 2: does the pure AI work. I plan to run a REST server in this container, which container 1 can call with an image to get back the type of object identified (a rough example of such a call is sketched below).
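For illustration only, the call from container 1 might look something like this; the endpoint name, port, and response format here are purely my own assumptions for the design, not an existing API:

# hypothetical request from container 1 to the classifier REST server in container 2
curl -s -X POST http://container2:5000/classify -F "image=@/tmp/capture.jpg"
# assumed response: {"class": "jellyfish", "confidence": 0.93}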

For container 2, I plan to use one of the pre-built containers that run with the NVIDIA GPU container runtime. I am running a Yocto build using meta-tegra dunfell-l4t-r32.4.3.
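To be clear about what I mean by using the NVIDIA runtime, the container is launched roughly like this (only the --runtime nvidia part matters here):

# run the pre-built image with the NVIDIA container runtime so that
# CUDA/TensorRT libraries get mounted in from the host OS
docker run -it --rm --runtime nvidia dustynv/jetson-inference:r32.4.3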

I tried using dustynv/jetson-inference:r32.4.3, but it has missing libraries:

root@1fce794aad39:/jetson-inference/build/aarch64/bin# ./imagenet images/jellyfish.jpg images/test/jellyfish.jpg
./imagenet: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.7: file too short

root@1fce794aad39:/jetson-inference/build/aarch64/bin# ls -l /usr/lib/aarch64-linux-gnu/libnvinfer*
lrwxrwxrwx 1 root root 19 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer.so -> libnvinfer.so.7.1.3
lrwxrwxrwx 1 root root 19 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7 -> libnvinfer.so.7.1.3
-rw-r--r-- 1 root root 0 Jul 1 20:05 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3
lrwxrwxrwx 1 root root 26 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.3
lrwxrwxrwx 1 root root 26 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.3
-rw-r--r-- 1 root root 0 Jul 1 20:05 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3

I have added this to my Yocto conf:
IMAGE_INSTALL_append = " nvidia-docker cudnn tensorrt libvisionworks libvisionworks-sfm libvisionworks-tracking cuda-libraries"

Not sure what I am missing here…

Hi @mjemv, are TensorRT and the JetPack components (like the CUDA Toolkit) installed in your Yocto build? Normally these are mounted from the OS into the containers by the nvidia-docker runtime: https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson#mount-plugins

It does not appear that these are getting mounted correctly into the container.
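A quick sanity check (the paths here assume a standard L4T layout and may differ in your Yocto image) is to compare the library on the host with what the container sees:

# on the Jetson host: the real library should be several MB, not 0 bytes
ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3

# inside a container started with --runtime nvidia, the same file should
# show the same non-zero size if the mount worked
docker run --rm --runtime nvidia dustynv/jetson-inference:r32.4.3 ls -l /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3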

Normally there are CSV files found under /etc/nvidia-container-runtime/host-files-for-container.d/ that list the libraries to be mounted into the container.
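For example, on a stock JetPack install the TensorRT CSV contains entries along these lines (mount type, then host path); you can check whether your Yocto image has equivalent files:

# list the CSV files the runtime uses to decide what gets mounted
ls /etc/nvidia-container-runtime/host-files-for-container.d/

# a typical entry (from JetPack's tensorrt.csv) looks roughly like:
#   lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3
grep libnvinfer /etc/nvidia-container-runtime/host-files-for-container.d/*.csv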

For more info, please refer to this meta-tegra GitHub issue: https://github.com/OE4T/meta-tegra/issues/230#issuecomment-577627613

You may want to post on that GitHub repo if you have further issues with Yocto, as I have only tested the jetson-inference containers with the official JetPack-L4T image. However, until the CSVs are added to your Yocto install, you would likely encounter this issue with other GPU applications unrelated to jetson-inference, so you may also want to try running a sample from the CUDA Toolkit, like deviceQuery (a rough example follows below).
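If it helps, here is a rough sketch of running deviceQuery inside a container; it assumes the CUDA toolkit and its samples get mounted in by the CSVs (as on JetPack), so on your Yocto image the paths may differ or the samples may not be installed at all:

# start a container with the NVIDIA runtime
docker run -it --rm --runtime nvidia dustynv/jetson-inference:r32.4.3

# inside the container: copy the sample somewhere writable, build it, and run it
cp -r /usr/local/cuda/samples/1_Utilities/deviceQuery /tmp/
cd /tmp/deviceQuery
make
./deviceQuery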