I do a lot of development on the NGC L4T images, especially L4T base. Is there a roadmap for the equivalent images with the JetPack 5.0 developer preview? For my applications, a PyTorch image would be ideal, but since there are already PyTorch (and TensorFlow) wheels for JetPack 5.0, I can make the base image work.
Thanks for asking. I will pass this question to the internal team and share their plan.
That L4T image should be live now, could you give it a try? Thanks
Does it require the host to be running JP 5.0 as well, or can the host be on any JP version?
I ran nvcr.io/nvidia/l4t-tensorflow:r34.1.0-tf2.8-py3 on a host running JP 4.6.1 and saw these errors:
2022-04-20 06:44:28.942276: W tensorflow/stream_executor/platform/default/dso_loader.cc:65] Could not load dynamic library 'libnvinfer.so.8'; dlerror: libnvmedia_tensor.so: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:
2022-04-20 06:44:28.944929: W tensorflow/stream_executor/platform/default/dso_loader.cc:65] Could not load dynamic library 'libnvinfer_plugin.so.8'; dlerror: libnvmedia_tensor.so: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/cuda/lib64:
2022-04-20 06:44:28.945003: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:35] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate.
but I do see those files:
# cd /usr/lib/aarch64-linux-gnu/
root@a12dfa30891d:/usr/lib/aarch64-linux-gnu# ls -l libnvinfer*
lrwxrwxrwx 1 root root         19 Mar 18 01:28 libnvinfer.so -> libnvinfer.so.8.4.0
lrwxrwxrwx 1 root root         19 Apr 20 06:44 libnvinfer.so.8 -> libnvinfer.so.8.4.0
-rw-r--r-- 1 root root  164944232 Nov 17 09:16 libnvinfer.so.8.2.1
-rw-r--r-- 1 root root  466597112 Mar 18 01:28 libnvinfer.so.8.4.0
-rw-r--r-- 1 root root   30228664 Nov 17 09:16 libnvinfer_builder_resource.so.8.2.1
-rw-r--r-- 1 root root  238847208 Mar 18 01:28 libnvinfer_builder_resource.so.8.4.0
lrwxrwxrwx 1 root root         26 Mar 18 01:28 libnvinfer_plugin.so -> libnvinfer_plugin.so.8.4.0
lrwxrwxrwx 1 root root         26 Apr 20 06:44 libnvinfer_plugin.so.8 -> libnvinfer_plugin.so.8.4.0
-rw-r--r-- 1 root root   15459744 Nov 17 09:16 libnvinfer_plugin.so.8.2.1
-rw-r--r-- 1 root root   27648144 Mar 18 01:28 libnvinfer_plugin.so.8.4.0
-rw-r--r-- 1 root root   30526646 Mar 18 01:28 libnvinfer_plugin_static.a
-rw-r--r-- 1 root root 1177055458 Mar 18 01:28 libnvinfer_static.a
root@a12dfa30891d:/usr/lib/aarch64-linux-gnu# ls -l /usr/lib/aarch64-linux-gnu/tegra/libnvmedia.so*
-rw-r--r-- 1 root root 376992 Feb 19 17:03 /usr/lib/aarch64-linux-gnu/tegra/libnvmedia.so
root@a12dfa30891d:/usr/lib/aarch64-linux-gnu# echo $LD_LIBRARY_PATH
/usr/local/cuda/lib64:/usr/local/cuda/lib64:
Hi @user100090, this container is for JetPack 5.0 DP (L4T R34.1.0) - since you are on JP 4.6.1 (L4T R32.7.1), please use the
l4t-tensorflow:r32.7.1-tf2.7-py3 container instead
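As a quick way to pick the right tag, the host's L4T release can be read from /etc/nv_tegra_release and matched against the container tag prefix. The snippet below is a minimal sketch that parses a sample release string (the GCID and other fields are placeholders, not copied from a real system); on a Jetson you would read the line from the file instead:

```shell
# Illustrative sketch: derive the matching l4t-* container tag prefix from
# the L4T release string. On a real device, replace the sample line with:
#   release_line=$(head -n 1 /etc/nv_tegra_release)
release_line='# R32 (release), REVISION: 7.1, GCID: 00000000, BOARD: t186ref, EABI: aarch64'

# Extract the major release (R32) and the revision (7.1) from the line.
major=$(echo "$release_line" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/\1/p')
rev=$(echo "$release_line" | sed -n 's/^# R\([0-9]*\) (release), REVISION: \([0-9.]*\),.*/\2/p')

# R32 / 7.1 corresponds to tags such as l4t-tensorflow:r32.7.1-tf2.7-py3.
echo "matching tag prefix: r${major}.${rev}"
```

A host reporting R32 REVISION 7.1 should therefore pull r32.7.1-* tags, while a JetPack 5.0 DP host (R34 REVISION 1.0) would pull r34.1.0-* tags.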
Thanks for the reply! I was hoping the container image would give us the benefit of running containers for a different JP version than the host's. In the normal container world, I can run containers for any Linux version on a host machine running macOS; there is really no requirement that the container and host OS match. Thanks for pointing this limitation out, but it really limits the use cases for containers on Jetson.
That is the idea going forward with containers built for JetPack 5.0 and onwards (these containers have CUDA, etc. built in, which is why they are larger). However, since you are on JetPack 4.6.1, this still uses the previous mounting-based method, where the runtime bind-mounts the host's CUDA/TensorRT libraries into the container, so the container and host L4T versions need to match.
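To illustrate the mounting-based method: on JetPack 4.x, the NVIDIA container runtime reads CSV manifests under /etc/nvidia-container-runtime/host-files-for-container.d/ and bind-mounts each listed host file into the container at start-up. The sketch below parses one sample CSV entry to show the format; the sample line is illustrative, not copied from a real manifest:

```shell
# Illustrative sketch of a JetPack 4.x mount manifest entry. Real entries
# live in /etc/nvidia-container-runtime/host-files-for-container.d/*.csv;
# this sample line only demonstrates the "type, host-path" format.
csv_line='lib, /usr/lib/aarch64-linux-gnu/tegra/libnvmedia.so'

# Field 1 is the entry type (lib, dir, sym, ...), field 2 the host path.
kind=$(echo "$csv_line" | cut -d, -f1 | tr -d ' ')
path=$(echo "$csv_line" | cut -d, -f2 | tr -d ' ')

echo "would bind-mount $kind $path from the host into the container"
```

This is why a mismatched host cannot supply the libraries an r34.1.0 container expects: the files mounted in come from the host's own L4T release.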
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.