Error Importing tensorrt

I want to use TensorRT to convert a TensorFlow model to UFF.

I have CUDA 9 and cuDNN 7 installed on my x86 host machine, and I installed TensorRT and PyCUDA according to the official installation guide using the Debian packages: http://developer2.download.nvidia.com/compute/machine-learning/tensorrt/secure/3.0/ga/TensorRT-Installation-Guide.pdf?EjPPsmXia1_6e64BqMgr4lvVg5x5wxt4rRsF0xY80vveGy6Vr7Ds6cXBjzwGzTO74mxnw-mXs6LoZpq_1T3cZnlmjXRZuIBRxTtqmS2CN54S6L8lE_LBVOZW2fhnoyaoh-IIgK-vQqoQk8HdehJuFBvTsEZsI_CayclUDD4UmAjLsg2H7yFRKYUDgowx3A

But when I run the Python API I get the following error:

File "/home/gustav/workspace/TensorRT/tf_to_trt.py", line 10, in <module>
    import tensorrt as trt

  File "/usr/lib/python2.7/dist-packages/tensorrt/__init__.py", line 53, in <module>
    from tensorrt import __versions__

ImportError: cannot import name __versions__

Any solutions to that? The versions file contains the following:

'''
Version information for the TensorRT Python API
'''
package_version = '3.0.1'
infer_lib_version = 301
so_version = '4.0.1'
cudnn_version = '7.0'

Hi gustavvz, are you still experiencing the issue? I ask since you more recently posted this topic about using the UFF converter:

https://devtalk.nvidia.com/default/topic/1028464/jetson-tx2/converting-tf-model-to-tensorrt-uff-format/

If you did something to your environment to resolve the ImportError, can you post it to this thread? Thanks!

Hey dusty_nv, yes, I found a very simple solution, as it is written in your TensorRT Release Notes (maybe this useful file should be highlighted a bit more :) ).

You have to import uff and tensorflow before you import tensorrt (even if you don't use tensorflow in the code).

The reason is said to be:

“When using the TensorRT APIs from Python, import the tensorflow and uff
modules before importing the tensorrt module. This is required to avoid a
potential namespace conflict with the protobuf library as well as the cuDNN
version. In a future update, the modules will be fixed to allow the loading of these
Python modules to be in an arbitrary order.”
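In code, the workaround from the release notes looks like this (a minimal sketch; it assumes TensorRT 3, TensorFlow, and the uff converter are all installed in the same Python environment):

```python
# Workaround from the TensorRT 3.0 release notes: tensorflow and uff
# must be imported before tensorrt, even if tensorflow is never used
# in the code, to avoid the protobuf/cuDNN namespace conflict quoted above.
import tensorflow as tf   # imported first, only for its side effects
import uff                # imported second
import tensorrt as trt    # now this import no longer fails on __versions__
```

The order is the only thing that matters here; nothing from tensorflow or uff needs to be called before the tensorrt import.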

Hi,
I have imported uff and tensorflow as mentioned above, yet I am facing this error:

import tensorflow as tf
import uff
import tensorrt
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named tensorrt

Kindly help.
Thanks

Did you already install TensorRT?

Hi gustavvz,
Yes, I have installed TensorRT from this link:
https://developer.nvidia.com/nvidia-tensorrt-download

Yes, I hit the same issue as gustavvz.

Hi,

The TensorRT Python API doesn't support the Jetson platform:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#tensorrtworkflow

Are you seeing this error in an x86 Linux environment?

Thanks.

I'm getting a 'ModuleNotFoundError'. I have Ubuntu 16.04 and my GPU is a Titan X (Pascal). I tried what pratosha suggested by first importing tensorflow and uff, but I can't even import uff. I installed nv-tensorrt-repo-ubuntu1604-ga-cuda9.0-trt3.0.4-20180208_1-1_amd64.deb, followed the instructions provided by NVIDIA (Debian installation): http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing and verified that it was installed.

Other information that might help:

  • Python 3.6.4
  • Cuda 9.0
  • cuDNN 7.0.5

Thanks

Hi,

For the Python API and the UFF parser, please install with these commands:

$ sudo apt-get install python3-libnvinfer-doc
$ sudo apt-get install uff-converter-tf

You can find more information in our installation guide:
http://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-debian

Thanks.

Hi AastaLLL,

Thank you for your reply. I ran those commands, and when I run dpkg -l | grep TensorRT I see the output shown in section 2 of the installation guide. But it's not working! My NVIDIA driver version is 384.111. Do you think that's causing the problem? Do I need to set a specific environment variable? My environment variables are:

# added by Anaconda3 installer
export PATH="/home/administrator/anaconda3/bin:$PATH"

# added by Nejla
export CUDA_ROOT=/usr/local/cuda-9.0
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda-9.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64

More information:

  • NVidia driver version: 384.111
  • Anaconda Python 3.6.4
  • Cuda 9.0
  • cuDNN 7.0.5
  • gcc 4.8
  • swig installed
  • “import pycuda” works

Thank you. I finally got it working. I had three Python installations: Python 2.7 that came with Ubuntu, Python 3.5 that came with Ubuntu, and an Anaconda Python 3.5 that I'd installed myself. I got rid of the Anaconda one, since I couldn't control where TensorRT gets installed, made 3.5 the default, and reinstalled TensorRT, the Python packages, and PyCUDA. Now it's working.

Hi,

Good to know this :)

Thanks for updating the status with us.

Is there any alternative solution? I do not want to remove Anaconda.

Hi,

I guess the simplest way to achieve this is to use Anaconda Python 2.7.
https://conda.io/docs/user-guide/install/download.html

Thanks.

Tried it, it doesn't work.

Hi atul,

Unfortunately, you can't use Anaconda with TensorRT, but you can still have Anaconda on your system. To make TensorRT work, I think it's better to figure out which version of Python TensorRT has been installed under, then make that version your default and install the other libraries (tensorflow, …) under it. You don't have control over where TensorRT is installed, but you have control over everything else.
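One way to check which interpreter you are actually running, and whether it searches the system dist-packages directory that the TensorRT .deb installs into, is a small diagnostic like this (a sketch; the dist-packages path is the one from the tracebacks earlier in this thread):

```python
import sys

# Show which interpreter is running and where it looks for modules.
# The TensorRT .deb installs into a system path such as
# /usr/lib/python2.7/dist-packages (see the tracebacks above); an
# Anaconda interpreter never searches that directory, which is why
# "import tensorrt" fails under Anaconda.
print("interpreter:", sys.executable)
print("version:     %d.%d.%d" % sys.version_info[:3])
searches_dist = any("dist-packages" in p for p in sys.path)
print("searches dist-packages:", searches_dist)
```

If the interpreter path points into your Anaconda directory, or "searches dist-packages" is False, that interpreter will never see a system-installed TensorRT.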

Hope that helps!

Hi,

I met the same problem when importing tensorrt in a Python interpreter.

I updated numpy using pip install -U numpy, then imported tensorflow and uff before tensorrt. It works.

Versions:
Python 2.7.12
numpy 1.14.5 (updated from 1.13.3)
Ubuntu 16.04.3
TensorRT 4
CUDA 8.0

Hi,

Please remember that the TensorRT Python API is only available on x86 environments. It also requires PyCUDA to be installed first.

You can follow this page to install PyCUDA.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing-pycuda

After that, please follow this page to install TensorRT.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html#installing
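A minimal PyCUDA sanity check looks like this (a sketch that assumes an NVIDIA GPU and a working driver are present; it will raise an error otherwise):

```python
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on the default device

# If this prints a device name, PyCUDA and the driver are working,
# which is a prerequisite for using the TensorRT Python API.
print(cuda.Device(0).name())
```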

Thanks.

Hi there AastaLLL,
I'm having sort of the same issue, but with CUDA 9.2 and TensorRT 4.
I have numpy version 1.11 on Ubuntu 16.04 x86.
When trying to import tensorrt with:

import tensorrt as trt

I get:

ImportError: numpy.core.multiarray failed to import

Then I import numpy.core.multiarray and try again, and this time I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/tensorrt/__init__.py", line 53, in <module>
    from tensorrt import __versions__
ImportError: cannot import name __versions__

As I've seen in this thread, I have to import tensorflow and uff first. I had TensorFlow built for CUDA 9.0, so I gave it a try, but I can't import uff or tensorflow due to:

ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
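You can check which cuBLAS versions the dynamic loader can actually find with a small ctypes probe (a sketch; the library names are the ones from the error above, and which entries print "found" depends entirely on your machine):

```python
import ctypes

def has_lib(name):
    """Return True if the given shared library can be dlopen()ed."""
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

# TensorFlow built for CUDA 9.0 dlopens libcublas.so.9.0; a machine with
# only CUDA 9.2 ships libcublas.so.9.2 instead, which is exactly what
# triggers the ImportError above.
for lib in ("libcublas.so.9.0", "libcublas.so.9.2"):
    print(lib, "->", "found" if has_lib(lib) else "missing")
```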

Should I reinstall TensorFlow with CUDA 9.2 support? I think for that I'll have to build it from source, as shown here: http://www.python36.com/how-to-install-tensorflow-gpu-with-cuda-9-2-for-python-on-ubuntu

Or is there another way to import tensorrt without having to install tensorflow?
Also, I installed TensorRT with CUDA 9.2 support.
Thanks for your time!