"TensorRT is not enabled!" with tf-trt

Linux distro and version: Ubuntu 18.04
GPU type: TITAN Xp
nvidia driver version: 410.78
CUDA version: 10.0
CUDNN version: 7.5.1
Python version: 3.6
Tensorflow version: 1.13.1
TensorRT version: 5.0.2.6

Hello,

I am using tf-trt to do INT8 calibration with my U-Net model.

from tensorflow.contrib import tensorrt as tftrt

following the calibration sample from here:

https://developer.download.nvidia.com/devblogs/tftrt_sample.tar.xz?spm=a2c4e.11153940.blogcont579985.9.2c9030d0Z0Lock&file=tftrt_sample.tar.xz
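For context, the calibration flow in that sample boils down to roughly the following (TF 1.13 contrib API; `run_calibration` is a placeholder name for feeding representative batches, not a function from the sample):

```python
# Hedged sketch of the TF-TRT INT8 calibration flow (TF 1.13 contrib API).
try:
    from tensorflow.contrib import tensorrt as tftrt
except ImportError:
    tftrt = None  # TensorFlow with TF-TRT is not installed here

def int8_calibrate(frozen_graph_def, output_names, run_calibration):
    """Build a calibration graph, run representative data through it,
    then finalize the INT8 inference graph."""
    # Step 1: create the calibration graph.
    calib_graph = tftrt.create_inference_graph(
        input_graph_def=frozen_graph_def,
        outputs=output_names,
        precision_mode="INT8")
    # Step 2: execute calibration batches through calib_graph.
    run_calibration(calib_graph)
    # Step 3: calib_graph_to_infer_graph is the call that raises
    # "TensorRT is not enabled!" when TF was built without TensorRT.
    return tftrt.calib_graph_to_infer_graph(calib_graph)
```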

ERROR Message:
/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/python/trt_convert.py", line 428, in calib_graph_to_infer_graph
    int(msg[0]))
tensorflow.python.framework.errors_impl.FailedPreconditionError: TensorRT is not enabled!

When I print

compiled_version = get_linked_tensorrt_version()
loaded_version = get_loaded_tensorrt_version()

from ~/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/contrib/tensorrt/python/trt_convert.py, I get:
compiled_version (0, 0, 0)
loaded_version (0, 0, 0)
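A compiled version of (0, 0, 0) means the TensorFlow binary itself was built without TensorRT, so no runtime setting will fix it. As a rough sketch of the precondition TF-TRT is checking (assuming both values are version triples; the exact compatibility rule may differ by TF version):

```python
def tensorrt_enabled(compiled_version, loaded_version):
    """Rough sketch of the TF-TRT precondition: TensorFlow must have been
    compiled against TensorRT, and a compatible libnvinfer must be found
    at runtime. (0, 0, 0) means 'not built with' / 'not found'."""
    if compiled_version == (0, 0, 0) or loaded_version == (0, 0, 0):
        return False
    # The loaded library's major version must match what TF was compiled with.
    return compiled_version[0] == loaded_version[0]

print(tensorrt_enabled((0, 0, 0), (0, 0, 0)))  # the situation above -> False
```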

Could you please tell me how to enable TensorRT?

Thanks,
SZ

P.S. When I run tftrt_sample.py from https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/, I get the same error.

Hello,

You are either using a non-GPU build of TensorFlow or one that was not compiled with TensorRT. Make sure you enable TensorRT when you build your TensorFlow.

Thanks.

Hello,

I am using tensorflow-gpu. Could you please elaborate on how to “enable TensorRT when I build TensorFlow” or how to “compile with TensorRT”? Isn’t importing tf-trt (or tensorrt along with it) enough?

PyCUDA is also installed, in case that is relevant.

Thanks,
SZ

Hello,

If you build your TensorFlow from source, enable TensorRT in the configure step. See: https://www.tensorflow.org/install/source#configure_the_build
Click on “View sample configuration session”; you can see there is an option to enable TensorRT.

If you’re using pip install for tensorflow-gpu, you should have TF-TRT out of the box.
See: https://docs.nvidia.com/deeplearning/dgx/tf-trt-user-guide/index.html#prereqs
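For example, the source build goes roughly like this (a sketch only; the exact prompts and flags depend on your TensorFlow version and TensorRT install location):

```shell
# From the TensorFlow source tree (paths and targets are examples):
./configure
# ...answer "y" when asked:
#   "Do you wish to build TensorFlow with TensorRT support? [y/N]"
# and point it at your TensorRT install location if prompted.
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```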

Thanks.
NEVSJ

Hello,

I installed through conda:

conda install -c anaconda tensorflow-gpu

and I think TF-TRT comes with it, since I can do:

from tensorflow.contrib import tensorrt as tftrt

Is there some configuration I should set in the conda environment?

Thanks,
SZ

Hello,

I tried to repro the problem by installing tensorflow-gpu with conda and running the sample code from
https://developer.download.nvidia.com/devblogs/tftrt_sample.tar.xz?spm=a2c4e.11153940.blogcont579985.9.2c9030d0Z0Lock&file=tftrt_sample.tar.xz.
It works fine for me.

Are you sure the tftrt that you import is referenced to the one you got from conda install tensorflow-gpu?
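One generic way to verify this, using only the standard library (`module_origin` is just a helper name for this example):

```python
import importlib.util

def module_origin(name):
    """Return the file a module would be imported from, or None if missing."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None)

# For your setup, module_origin("tensorflow") should point inside the conda
# env, e.g. ~/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/...
print(module_origin("importlib"))
```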

Also, to avoid dependency issues, we recommend using our NGC containers.

Yes, the tftrt that I import is the one from the conda-installed tensorflow-gpu.

I didn’t resolve this TF-TRT problem, but TensorRT itself works for me. Thanks!

I have the same problem. How can I solve it?