Error running TensorRT


I have a problem running the TensorRT Docker image. Is it supported on Tesla K80 GPUs, and should I use only nvidia-docker?

Also, I have installed TensorRT. The pure tensorrt module shows the correct version, but when I try to create an inference graph using the object detection example in the Jupyter notebook, I see the message below. I know it’s meant for the Jetson TX2, but is there a way to get around this?

INFO:tensorflow:Running against TensorRT version 0.0.0
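A quick aside on that log line: TF-TRT reports version 0.0.0 when TensorFlow cannot load the TensorRT runtime library at import time, so checking whether libnvinfer is actually loadable is a useful first diagnostic. A minimal sketch, assuming a TensorRT 5 install (the version that pairs with CUDA 10.0; the soname is an assumption about your setup):

```python
import ctypes

def nvinfer_available(soname="libnvinfer.so.5"):
    """Return True if TensorRT's inference library can be dlopen'ed.

    The default soname assumes TensorRT 5 (the CUDA 10.0 pairing);
    adjust it to match your installed TensorRT version.
    """
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    if nvinfer_available():
        print("libnvinfer found; TF-TRT should report a real version")
    else:
        print("libnvinfer not found; TF-TRT will report version 0.0.0")
```

If this prints "not found", the fix is usually to install TensorRT on the host (or run inside a container that ships it) and make sure the library is on the loader path.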

Platform configuration:
Tesla K80 GPU
Ubuntu 16.04
CUDA v10.0
Tensorflow 1.12.0



You mentioned you are running “”, but that’s a TensorFlow container, not a TensorRT container. Please consider using the TensorRT container instead.

The NGC TensorRT container should work with the Tesla K80.

Please use nvidia-docker, or docker with the NVIDIA runtime.
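For reference, either invocation exposes the GPU to the container; the image tag below is illustrative, not taken from this thread:

```shell
# Option 1: the nvidia-docker wrapper
nvidia-docker run --rm -it nvcr.io/nvidia/tensorrt:19.05-py3

# Option 2: plain docker with the NVIDIA runtime selected explicitly
docker run --runtime=nvidia --rm -it nvcr.io/nvidia/tensorrt:19.05-py3
```

Both forms require the NVIDIA driver and nvidia-container-runtime to be installed on the host.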


I had a similar warning when importing TensorRT from tensorflow.contrib:
INFO:tensorflow:Running against TensorRT version 0.0.0

I took the suggestion to run inside the TensorRT container and later installed the Python dependencies. However, when I try to use create_inference_graph from TensorRT, I get the following error:

AttributeError: module ‘tensorrt’ has no attribute ‘create_inference_graph’

I am surprised to see this error when running inside the TensorRT container. Please let me know what needs to be done to use the create_inference_graph function.
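That AttributeError is expected: create_inference_graph is part of TensorFlow's TF-TRT integration (tensorflow.contrib.tensorrt in TF 1.x), not of the standalone tensorrt Python bindings, so importing the tensorrt module alone will never expose it. A small sketch that probes which module actually provides the function (the helper name is hypothetical, for illustration only):

```python
import importlib

def find_create_inference_graph(module_names):
    """Return the first module name that exposes create_inference_graph, else None."""
    for name in module_names:
        try:
            mod = importlib.import_module(name)
        except ImportError:
            continue  # module not installed in this environment
        if hasattr(mod, "create_inference_graph"):
            return name
    return None

# In a TF 1.12 environment this is expected to report
# "tensorflow.contrib.tensorrt" rather than the standalone "tensorrt" bindings.
print(find_create_inference_graph(["tensorflow.contrib.tensorrt", "tensorrt"]))
```

In short: `import tensorflow.contrib.tensorrt as trt` and call `trt.create_inference_graph(...)` from there; the bare `tensorrt` module is only the low-level TensorRT API.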

Driver Version: 418.67