Deviating results with TensorRT

Hello everyone!

I am getting deviating results when trying to do inference with TensorRT. I converted a TensorFlow (Keras) model following the example from the NVIDIA DevBlog:

https://github.com/parallel-forall/code-samples/tree/master/posts/TensorRT-3.0

My goal is to deploy an application that uses TensorRT on the Jetson TX2. Unfortunately, there is no Python API available for TensorRT on the Jetson, so I modified this example to fit my needs:

https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification

There are already small differences between the prediction output of the Keras model and the TensorRT (Python) results after conversion. When running the image classification application (C++), the results differ yet again from the first two. I honestly have no idea what causes these differences, but some of them are substantial.
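To make the deviations easier to discuss than "small" vs. "substantial", it may help to compare the raw prediction vectors numerically. A minimal sketch, assuming the outputs of test_keras.py and test_trt.py can be dumped as NumPy arrays (the file names below are hypothetical, not from the repository):

```python
import numpy as np

def compare_outputs(a, b, rtol=1e-3, atol=1e-5):
    """Report elementwise agreement between two prediction vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    diff = np.abs(a - b)
    print("max abs diff:  %g" % diff.max())
    print("mean abs diff: %g" % diff.mean())
    # For classification, the argmax matters most: same top-1 class?
    print("top-1 match:", a.argmax() == b.argmax())
    return np.allclose(a, b, rtol=rtol, atol=atol)

# Hypothetical usage with arrays dumped by the two test scripts:
# keras_out = np.load("keras_predictions.npy")
# trt_out = np.load("trt_predictions.npy")
# print("within tolerance:", compare_outputs(keras_out, trt_out))
```

If the max absolute difference is on the order of 1e-5 or below, it is likely just FP32 accumulation-order noise from the different execution engines; differences large enough to change the top-1 class point at a real conversion or preprocessing problem.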

A simple example can be found here:

https://github.com/fischermario/deeplearning-playground/tree/master/tf_tensorrt_inferencing

For testing purposes, all of this is run on an x64 host!

Log output of “train.py”:
https://gist.github.com/fischermario/303cd97252b05e6811ad0944e7f44321

Log output of “convert.py”:
https://gist.github.com/fischermario/bd84b9be02ec2a50ecf8472f3cae0b74

Log output of “test_keras.py”:
https://gist.github.com/fischermario/ef3bee494c454c14d9a7e28a5e515e05

Log output of “test_trt.py”:
https://gist.github.com/fischermario/f5139a0da37186c845104770545a3df7

Log output of “test_classifier.py”:
https://gist.github.com/fischermario/376b35be6af5f07ceda237b71bf90f0d

Thank you!

Greetings,
Mario

We created a new “Deep Learning Training and Inference” section in DevTalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth