Error Importing tensorrt

I want to use TensorRT to convert a TF model to UFF.

I have CUDA 9 and cuDNN 7 installed on my x86 host machine, and I installed TensorRT and PyCUDA from the Debian packages according to the official installation guide: http://developer2.download.nvidia.com/compute/machine-learning/tensorrt/secure/3.0/ga/TensorRT-Installation-Guide.pdf?EjPPsmXia1_6e64BqMgr4lvVg5x5wxt4rRsF0xY80vveGy6Vr7Ds6cXBjzwGzTO74mxnw-mXs6LoZpq_1T3cZnlmjXRZuIBRxTtqmS2CN54S6L8lE_LBVOZW2fhnoyaoh-IIgK-vQqoQk8HdehJuFBvTsEZsI_CayclUDD4UmAjLsg2H7yFRKYUDgowx3A

But when I run the Python API I get the following error:

File "/home/gustav/workspace/TensorRT/tf_to_trt.py", line 10, in <module>
    import tensorrt as trt

  File "/usr/lib/python2.7/dist-packages/tensorrt/__init__.py", line 53, in <module>
    from tensorrt import __versions__

ImportError: cannot import name __versions__

Any solutions to that? The versions file contains the following:

'''
Version information for the TensorRT Python API
'''
package_version = '3.0.1'
infer_lib_version = 301
so_version = '4.0.1'
cudnn_version = '7.0'

Hi gustavvz, are you still experiencing the issue? I ask since you more recently posted this topic about using the UFF converter:

Converting TF Model to TensorRT UFF Format - Jetson TX2 - NVIDIA Developer Forums

If you did something to resolve the ImportError in your environment, can you post it to this thread? Thanks!

Hey dusty_nv, yes, I found a very simple solution, as described in your TensorRT Release Notes (maybe this useful file should be highlighted a little more :) )

You have to import uff and tensorflow before you import tensorrt (even if you don't use tensorflow in the code).

The stated reason is:

"When using the TensorRT APIs from Python, import the tensorflow and uff
modules before importing the tensorrt module. This is required to avoid a
potential namespace conflict with the protobuf library as well as the cuDNN
version. In a future update, the modules will be fixed to allow the loading of these
Python modules to be in an arbitrary order."
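The documented import order can be checked with a small diagnostic script. Note that the probe_imports helper and its try/except wrapper are my own sketch, not part of any NVIDIA API; it simply attempts the imports in the required order and reports which ones fail instead of crashing on the first ImportError:

```python
import importlib

# Import order required by the TensorRT 3.x release notes:
# tensorflow and uff must be loaded before tensorrt to avoid
# a protobuf / cuDNN namespace conflict.
IMPORT_ORDER = ["tensorflow", "uff", "tensorrt"]

def probe_imports(modules):
    """Try to import each module in order; report which ones load."""
    results = {}
    for name in modules:
        try:
            importlib.import_module(name)
            results[name] = "ok"
        except ImportError as exc:
            results[name] = "failed: %s" % exc
    return results

if __name__ == "__main__":
    for name in IMPORT_ORDER:
        print("%-12s %s" % (name, probe_imports([name])[name]))
```

If tensorrt alone fails while tensorflow and uff load, the namespace-conflict workaround from the release notes applies; if uff or tensorflow fail too, the installation itself is incomplete.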

Hi,
I have imported uff and tensorflow as mentioned above, yet I am facing this error:

import tensorflow as tf
import uff
import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named tensorrt

Kindly help.
Thanks

Did you already install TensorRT?

Hi gustavvz,
Yes, I have installed TensorRT from this link:
https://developer.nvidia.com/nvidia-tensorrt-download

Yes, I met the same issue as gustavvz.

Hi,

The TensorRT Python API doesn't support the Jetson platform:
Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

Do you see this error on an x86 Linux environment?

Thanks.

I'm getting a 'ModuleNotFoundError'. I have Ubuntu 16.04 and my GPU is a Titan X (Pascal). I tried what pratosha suggested by first importing tensorflow and uff, but I can't even import uff. I installed nv-tensorrt-repo-ubuntu1604-ga-cuda9.0-trt3.0.4-20180208_1-1_amd64.deb and followed the instructions provided by NVIDIA (Debian installation): Installation Guide :: NVIDIA Deep Learning TensorRT Documentation, and I verified that it has been installed.

Other information that might help:

  • Python 3.6.4
  • Cuda 9.0
  • cuDNN 7.0.5

Thanks

Hi,

For the Python API and UFF parser, please install them with these commands:

$ sudo apt-get install python3-libnvinfer-doc
$ sudo apt-get install uff-converter-tf

You can find more information in our installation guide:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-debian

Thanks.

Hi AastaLLL,

Thank you for your reply. I ran those commands, and when I run dpkg -l | grep TensorRT I see the output shown in section 2 of the installation guide. But it's not working! My NVIDIA driver version is 384.111. Do you think that's causing the problem? Do I need to set a specific environment variable? My environment variables are:

# added by Anaconda3 installer
export PATH="/home/administrator/anaconda3/bin:$PATH"

# added by Nejla
export CUDA_ROOT=/usr/local/cuda-9.0
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda-9.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64

More information:

  • NVidia driver version: 384.111
  • Anaconda Python 3.6.4
  • Cuda 9.0
  • cuDNN 7.0.5
  • gcc 4.8
  • swig installed
  • ā€œimport pycudaā€ works

Thank you. I finally got it working. I had three Python installations: the Python 2.7 that came with Ubuntu, the Python 3.5 that came with Ubuntu, and an Anaconda Python 3.5 that I had installed myself. I got rid of the Anaconda 3.5 since I couldn't control where TensorRT was installed, made 3.5 the default, and reinstalled TensorRT, the Python packages, and PyCUDA. Now it's working.


Hi,

Good to know this : )

Thanks for updating us on the status.

Is there any alternative solution? I do not want to remove Anaconda.

Hi,

I guess the simplest way to achieve this is to use Anaconda Python 2.7:
https://conda.io/docs/user-guide/install/download.html

Thanks.

Tried it, doesn't work.

hi atul,

Unfortunately, you can't use Anaconda with TensorRT, but you can still have Anaconda on your system. To make TensorRT work, I think it's better to figure out which version of Python TensorRT has been installed under, then make that version your default and install the other libraries (tensorflow, ...) under it. You don't have control over where TensorRT is installed, but you do have control over everything else.

Hope that helps!
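One way to follow this advice is to ask each interpreter where it would load tensorrt from. This sketch uses the standard importlib machinery (Python 3); the locate helper name is my own. Run it under each Python you have installed and compare the paths:

```python
import importlib.util
import sys

def locate(module_name):
    """Return the file a module would be loaded from, or None if absent."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

if __name__ == "__main__":
    print("interpreter:", sys.executable)
    for mod in ("tensorrt", "tensorflow", "uff"):
        path = locate(mod)
        print("%-12s -> %s" % (mod, path or "not found on this interpreter"))
```

The interpreter whose run shows a real path for tensorrt is the one TensorRT was installed under; that is the Python to make your default.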

Hi,

I met the same problem when importing tensorrt in a Python interpreter.

I updated numpy using: pip install -U numpy
then imported tf and uff before tensorrt.
It works.

versions:
python 2.7.12
numpy updated to 1.14.5 (previous version 1.13.3)
linux Ubuntu 16.04.3
tensorrt 4
cuda 8.0

Hi,

Please remember that the TensorRT Python API is only available on x86 environments. It also requires PyCUDA to be installed first.

You can follow this page to install PyCUDA:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-pycuda

After that, please follow this page to install TensorRT:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing

Thanks.

Hi there AastaLLL,
I'm having sort of the same issue, but with CUDA 9.2 and TensorRT 4.
I have numpy 1.11 on Ubuntu 16.04 x86.
When trying to import tensorrt with:

import tensorrt as trt

I get:

ImportError: numpy.core.multiarray failed to import

Then I import numpy.core.multiarray and try again, and this time I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/tensorrt/__init__.py", line 53, in <module>
    from tensorrt import __versions__
ImportError: cannot import name __versions__

As I've seen in this thread, I have to import tensorflow and uff first. I had a tensorflow build for CUDA 9.0, so I gave it a try, but I can't import uff or tensorflow due to:

ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

Should I reinstall tensorflow with CUDA 9.2 support? I think for that I'll have to build from source, as shown here: http://www.python36.com/how-to-install-tensorflow-gpu-with-cuda-9-2-for-python-on-ubuntu

Or is there another way to import tensorrt without having to install tensorflow?
Also, I installed TensorRT with CUDA 9.2 support.
Thanks for your time!
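Independently of TensorFlow, the libcublas.so.9.0 error above can be checked directly with ctypes, which asks the dynamic linker whether it can resolve a shared library at all. The can_load helper is my own sketch, not NVIDIA tooling; if it reports the library as missing, the fix is on the LD_LIBRARY_PATH / CUDA-toolkit side, not in tensorflow itself:

```python
import ctypes

def can_load(libname):
    """Return True if the dynamic linker can resolve the shared library."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    # libcublas.so.9.0 is the library TensorFlow complained about above.
    for lib in ("libcublas.so.9.0",):
        status = "loadable" if can_load(lib) else "NOT found by the loader"
        print("%-20s %s" % (lib, status))
```

A TensorFlow wheel built against CUDA 9.0 hard-codes that soname, so with only CUDA 9.2 installed the load will fail; rebuilding TensorFlow against CUDA 9.2 (or keeping the 9.0 libraries on the library path) resolves it.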