Looking through the different containers, I’ve noticed that “nvcr.io/nvidia/deepstream:5.1-21.02-triton” is not actually built against the TensorRT version that ships with the 21.02 release (7.2.2); it’s using TensorRT 7.2.1.
The other containers, “nvcr.io/nvidia/deepstream:5.1-21.02-devel” and “nvcr.io/nvidia/deepstream:5.1-21.02-base”, are built against 21.02 (TensorRT 7.2.2).
Example:
-triton:
root@4d3b64ad616c:/# ls -al /usr/lib/x86_64-linux-gnu/libnvinfer.so*
lrwxrwxrwx 1 root root 19 Oct 6 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so -> libnvinfer.so.7.2.1
lrwxrwxrwx 1 root root 19 Oct 6 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 -> libnvinfer.so.7.2.1
-rw-r--r-- 1 root root 625274176 Oct 6 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so.7.2.1
-base and -devel:
root@76d24f0909e3:/# ls -al /usr/lib/x86_64-linux-gnu/libnvinfer.so*
lrwxrwxrwx 1 root root 19 Dec 11 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so -> libnvinfer.so.7.2.2
lrwxrwxrwx 1 root root 19 Dec 11 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so.7 -> libnvinfer.so.7.2.2
-rw-r--r-- 1 root root 625245504 Dec 11 2020 /usr/lib/x86_64-linux-gnu/libnvinfer.so.7.2.2
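For what it’s worth, the packaged TensorRT version can also be confirmed with dpkg inside each container (just another way to check, same result):

# inside the running container: list the installed TensorRT runtime packages
dpkg -l | grep -i libnvinfer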
This mismatch is causing my converted models (engine plans built with TensorRT 7.2.2) to fail on the -triton image:
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: The engine plan file is not compatible with this version of TensorRT, expecting library version 7.2.1 got 7.2.2, please rebuild.
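As a workaround until the image is aligned, rebuilding the engine inside the -triton container itself (so the plan is serialized with the 7.2.1 runtime it ships with) should avoid the mismatch. A minimal sketch, assuming an ONNX model and the usual trtexec location in NVIDIA containers (model.onnx / model.engine are placeholder names, adjust paths for your setup):

# build the engine with the -triton image's own TensorRT 7.2.1
docker run --gpus all --rm -v $(pwd):/models nvcr.io/nvidia/deepstream:5.1-21.02-triton \
  /usr/src/tensorrt/bin/trtexec --onnx=/models/model.onnx --saveEngine=/models/model.engine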