Problem with TensorRT 4.0.3 (sdkmanager)

I have a problem concerning the samples in the sdkmanager.

After I installed the sdkmanager successfully, I wanted to run some deep-learning samples. But when doing so, an error appeared:

"TensorRT Library mismatch, expected version got version 4.0.3"

(TensorRT version 4.0.3 was delivered and installed with the sdkmanager.) So I cannot run the samples that need TensorRT, and now I also get the same error when I try to import tensorrt in Python.
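For illustration only, the error amounts to a version check of roughly this shape: the Python bindings expect one library version and the loader hands them another. None of these names are TensorRT's real API; this is a hypothetical sketch of the kind of comparison behind the message.

```python
# Hypothetical sketch of the check behind the reported error message.
# "expected" would come from the Python bindings; "found" from whichever
# libnvinfer copy the dynamic loader resolved first.

def check_library_version(expected: str, found: str) -> str:
    """Return an error string mimicking the reported mismatch, or '' if OK."""
    if expected != found:
        return ("TensorRT Library mismatch, expected version %s got version %s"
                % (expected, found))
    return ""

# E.g. bindings built for 4.1.2, but an older 4.0.3 library is found first:
print(check_library_version("4.1.2", "4.0.3"))
# → TensorRT Library mismatch, expected version 4.1.2 got version 4.0.3
```

The point is that the message alone does not say *where* the stray library lives; that has to be traced through the loader path, as discussed below.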

I tried:

  1. Uninstalled tensorrt
    • then, when I tried to import tensorrt, it couldn’t be found
  2. Reinstalled tensorrt
    • same error: “TensorRT Library mismatch…”
  3. I also tried to uninstall TensorRT 4.0.3, but that wasn’t possible because no location could be found where it was installed.

Thus I cannot use TensorRT on my computer right now, and the samples from the sdkmanager don’t work either!

Now I don’t know what to do next; maybe you can help me.
Thank you a lot!

Dear hendrik.vogt,
sdkmanager installs TRT at /usr/local/cuda/dl. The host TRT samples are at /usr/local/cuda/dl/target/x86_64-linux-gnu/samples. The TensorRT samples in this location can be compiled on the host using the provided Makefile and work out of the box.

If you install another TRT version, please set the paths correctly so that your required TRT installation is picked up.

Dear Siva,

I set the path for TensorRT in my bash file as follows:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/python2.7/dist-packages/tensorrt/

However, I still receive the same mismatch error.

Take into consideration that TensorRT 4.0.3 does not exist in any location as an installed package; only the .whl file exists in


Thanks a lot

Also, when I run

dpkg -l | grep TensorRT

I get this:

ii  graphsurgeon-tf                                            4.1.2-1+cuda8.0                                       amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                             4.1.2-1+cuda8.0                                       amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                         4.1.2-1+cuda8.0                                       amd64        TensorRT samples and documentation
ii  libnvinfer4                                                4.1.2-1+cuda8.0                                       amd64        TensorRT runtime libraries
ii  python-libnvinfer                                          4.1.2-1+cuda8.0                                       amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                      4.1.2-1+cuda8.0                                       amd64        Python development package for TensorRT
ii  tensorrt                                                                              amd64        Meta package of TensorRT
ii  uff-converter-tf                                           4.1.2-1+cuda8.0                                       amd64        UFF converter for TensorRT package

which technically shows that I do not have version 4.0.3 installed on my machine … so I don’t understand why I receive this mismatch.
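As a side note, the package names and versions can be pulled out of a `dpkg -l` listing like the one above mechanically. The sketch below is not NVIDIA tooling, just a small parser; the two sample lines are trimmed copies of the listing above, and the bare `tensorrt` meta package really does appear without a version column.

```python
# Small sketch (not NVIDIA tooling): extract (package, version) pairs from
# `dpkg -l | grep TensorRT` output like the listing above.

def parse_dpkg_listing(text: str):
    """Return (package, version) for each 'ii' line; version may be ''."""
    rows = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "ii":
            name = fields[1]
            # The version column is missing for the bare 'tensorrt' meta
            # package above, so the next field may already be the arch.
            version = fields[2] if len(fields) >= 3 and fields[2] != "amd64" else ""
            rows.append((name, version))
    return rows

listing = """\
ii  libnvinfer4    4.1.2-1+cuda8.0  amd64  TensorRT runtime libraries
ii  tensorrt                        amd64  Meta package of TensorRT
"""
for pkg, ver in parse_dpkg_listing(listing):
    print(pkg, ver or "(no version listed)")
```

This confirms the observation above: the installed Debian packages are all 4.1.2, and 4.0.3 appears nowhere in the dpkg database.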

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64/
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/python2.7/dist-packages/tensorrt/
    export LD_LIBRARY_PATH=/usr/local/cuda/targets/aarch64-linux/lib:$LD_LIBRARY_PATH

Dear HV_ZF,
Could you confirm whether you are trying to run a TensorRT C++ sample when you encounter this error, or whether you are using the Python API library? It is unclear which deep-learning samples you are referring to in the post. Also, could you please check for the available TensorRT files/libraries using the locate command instead of dpkg? It is also a good idea to check the library dependencies of your executable using ldd.

I installed the sdkmanager to run some deep-learning samples (which are delivered with and integrated into the sdkmanager), and some of the samples did not work because of the TensorRT version problem, as I mentioned above.
What I did next was try to import TensorRT in a simple Python script, just to verify that TensorRT was still working as it did before installing the sdkmanager. But then I received the mentioned error (tensorrt vs 4.0.3).
So I am not able to run the samples from the sdkmanager, and I am also not able to use TensorRT from Python at all on my computer, because of the same error.

So I searched for the TensorRT .so libraries with:

locate trt
locate tensorrt

and this is what I got


So I did:

ldd /.local/lib/python2.7/site-packages/tensorflow/contrib/tensorrt/

I got:

	 =>  (0x00007ffd02795000)
	 => /home/z637177/.local/lib/python2.7/site-packages/tensorflow/contrib/tensorrt/../../ (0x00007f169602e000)
	 => /lib/x86_64-linux-gnu/ (0x00007f1695d25000)
	 => /lib/x86_64-linux-gnu/ (0x00007f1695b21000)
	 => /lib/x86_64-linux-gnu/ (0x00007f1695904000)
	 => /usr/lib/x86_64-linux-gnu/ (0x00007f1695582000)
	 => /lib/x86_64-linux-gnu/ (0x00007f169536c000)
	 => /lib/x86_64-linux-gnu/ (0x00007f1694fa2000)
	/lib64/ (0x00007f1697007000)


ldd /.local/lib/python2.7/site-packages/tensorflow/contrib/tensorrt/python/ops/

I got:

	 =>  (0x00007fffab381000)
	 => /home/z637177/.local/lib/python2.7/site-packages/tensorflow/contrib/tensorrt/python/ops/../../../../ (0x00007fe152535000)
	 => /lib/x86_64-linux-gnu/ (0x00007fe15222c000)
	 => /usr/lib/x86_64-linux-gnu/ (0x00007fe151eaa000)
	 => /lib/x86_64-linux-gnu/ (0x00007fe151c94000)
	 => /lib/x86_64-linux-gnu/ (0x00007fe1518ca000)
	/lib64/ (0x00007fe153215000)
	 => /lib/x86_64-linux-gnu/ (0x00007fe1516c6000)
	 => /lib/x86_64-linux-gnu/ (0x00007fe1514a9000)
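When the `ldd` output is complete (the library names appear stripped in the paste above), the useful question is which path the `libnvinfer` entry resolves to. A small sketch of that extraction, assuming un-truncated `ldd` output; the sample line is made up for illustration:

```python
# Sketch: given complete ldd output, report which libnvinfer copy the
# binary would actually load. The sample line below is illustrative.

def resolved_library(ldd_output: str, soname: str = "libnvinfer"):
    """Return the resolved path for `soname` from ldd output, or None."""
    for line in ldd_output.splitlines():
        if soname in line and "=>" in line:
            target = line.split("=>", 1)[1].strip()
            return target.split(" ", 1)[0]  # drop the "(0x...)" load address
    return None

sample = "\tlibnvinfer.so.4 => /usr/lib/x86_64-linux-gnu/libnvinfer.so.4 (0x00007f00)"
print(resolved_library(sample))
# → /usr/lib/x86_64-linux-gnu/libnvinfer.so.4
```

If the resolved path points at a leftover copy from an older install rather than the 4.1.2 packages listed by dpkg, that would account for the mismatch.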

Dear HV_ZF,
As I understand it, you are trying to access TensorRT from Python. Have you installed the TensorRT Python API using the .whl file provided in the /usr/local/cuda-10.0/dl/python folder? If not, can you please do that and add the correct libraries to LD_LIBRARY_PATH before importing TensorRT in Python?


I reinstalled TensorRT using the .tar version instead of the .deb, so I have more control over the paths and which files are referenced.

But thank you for your help!