Hi,
I am trying to achieve a simple task: create two Docker containers.
a) Container 1: captures images from the camera and handles the GPIOs that drive the mechanical part of a robot I am building.
b) Container 2: does the pure AI work. I plan to run a REST server in this container, which container 1 can call to pass an image and get back the type of object identified.
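To make the intended container-1 → container-2 flow concrete, here is a minimal sketch of that REST exchange. Everything here is hypothetical: the /classify endpoint, the JSON fields, and the hard-coded "jellyfish" label are placeholders, and the stub handler stands in for the real TensorRT/jetson-inference code that would run in container 2.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ClassifyHandler(BaseHTTPRequestHandler):
    """Container 2 side: accepts image bytes via POST, returns a JSON result."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)
        # Stub: a real server would run inference here (e.g. jetson-inference).
        result = {"label": "jellyfish", "confidence": 0.98,
                  "bytes_received": len(image_bytes)}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

# Start the "AI container" server on an ephemeral localhost port.
server = HTTPServer(("127.0.0.1", 0), ClassifyHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Container 1 side: POST the captured image bytes, read back the classification.
fake_image = b"\x89PNG\r\n...not-a-real-image"
req = urllib.request.Request(f"http://127.0.0.1:{port}/classify",
                             data=fake_image, method="POST")
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["label"], result["bytes_received"])
server.shutdown()
```

In the real setup, container 1 would target container 2 by its Docker network name instead of 127.0.0.1, and the handler would hand the bytes to the GPU-backed model.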
For container 2, I plan to use any prebuilt container that uses the NVIDIA GPU container runtime. I am running a Yocto build using meta-tegra dunfell-l4t-r32.4.3.
I tried using dustynv/jetson-inference:r32.4.3, but it has missing libraries:
root@1fce794aad39:/jetson-inference/build/aarch64/bin# ./imagenet images/jellyfish.jpg images/test/jellyfish.jpg
./imagenet: error while loading shared libraries: /usr/lib/aarch64-linux-gnu/libnvinfer.so.7: file too short
root@1fce794aad39:/jetson-inference/build/aarch64/bin# ls -l /usr/lib/aarch64-linux-gnu/libnvinfer*
lrwxrwxrwx 1 root root 19 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer.so -> libnvinfer.so.7.1.3
lrwxrwxrwx 1 root root 19 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7 -> libnvinfer.so.7.1.3
-rw-r--r-- 1 root root 0 Jul 1 20:05 /usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3
lrwxrwxrwx 1 root root 26 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.3
lrwxrwxrwx 1 root root 26 Oct 27 19:46 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.3
-rw-r--r-- 1 root root 0 Jul 1 20:05 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3
I have added this in my Yocto conf:
IMAGE_INSTALL_append = " nvidia-docker cudnn tensorrt libvisionworks libvisionworks-sfm libvisionworks-tracking cuda-libraries"
Not sure what I am missing here…