I have a problem running the TensorFlow docker image nvcr.io/nvidia/tensorflow:18.07-py3. Is this container supported on Tesla K80 GPUs, and should I use only nvidia-docker to run it?
Also, I have installed TensorRT 5.0.2.6. The standalone tensorrt Python module reports the correct version, but when I try to create an inference graph using the object detection example in the Jupyter notebook (https://github.com/NVIDIA-AI-IOT/tf_trt_models), I see the message below. I know the repo is aimed at the Jetson TX2, but is there a way to get around this?
INFO:tensorflow:Running against TensorRT version 0.0.0
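For reference, the conversion step from that notebook looks roughly like this (a sketch following the tf_trt_models README; the model name and TF-TRT parameters are the notebook defaults, not anything I have tuned):

from tf_trt_models.detection import download_detection_model, build_detection_graph
import tensorflow.contrib.tensorrt as trt  # TF-TRT lives under contrib in TF 1.x

# download and freeze a detection model, per the README
config_path, checkpoint_path = download_detection_model('ssd_inception_v2_coco')
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path
)

# the conversion step where the "version 0.0.0" message shows up
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',  # notebook default for the TX2; a K80 may need 'FP32'
    minimum_segment_size=50
)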
Platform configuration:
Tesla K80 GPU
Ubuntu 16.04
CUDA v10.0
TensorFlow 1.12.0
I saw the same kind of warning when importing TensorRT from tensorflow.contrib:
INFO:tensorflow:Running against TensorRT version 0.0.0
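The import in question is just (TF 1.x keeps TF-TRT under contrib):

import tensorflow.contrib.tensorrt as trt
# INFO:tensorflow:Running against TensorRT version 0.0.0

As far as I understand, a linked version of 0.0.0 means the TensorFlow build itself was not compiled against TensorRT, even though the standalone tensorrt package is installed correctly.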
I took the suggestion to run inside the TensorRT container (nvcr.io/nvidia/tensorrt:19.01-py3) and then installed the Python dependencies. However, when I try to use create_inference_graph from TensorRT, I get the following error:
AttributeError: module 'tensorrt' has no attribute 'create_inference_graph'
I am surprised to see this error when running inside the TensorRT container. Please let me know what needs to be done to use the create_inference_graph function.
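For completeness, this is essentially what I am running inside the container (a minimal sketch; frozen_graph and output_names stand in for my actual model):

import tensorrt as trt   # standalone TensorRT Python API

print(trt.__version__)   # 5.0.2.6, as expected

# fails with the AttributeError above
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1
)

My understanding is that create_inference_graph belongs to TF-TRT (tensorflow.contrib.tensorrt in TF 1.x) rather than to the standalone tensorrt package, which would explain the AttributeError, but then I am unsure which container or TensorFlow build gives me a TensorRT-enabled TensorFlow for this setup.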