Deviating results with TensorRT

Hello everyone!

I am getting deviating results when trying to run inference with TensorRT. I converted a TensorFlow (Keras) model following the example from the NVIDIA DevBlog:

My goal is to deploy an application that uses TensorRT on the Jetson TX2. Unfortunately, there is no Python API available for TensorRT on the Jetson, so I have modified this example to fit my needs:

There are already small differences between the prediction outputs of the Keras model and the TensorRT (Python) engine after conversion. When running the image classification application (C++), the results differ from the first two as well. I honestly have no idea what causes these differences, but some of them are substantial.
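Some floating-point drift between Keras and a TensorRT engine is expected, since TensorRT fuses layers and reorders operations; large deviations more often point to a preprocessing or data-layout mismatch. As a minimal sketch for quantifying the deviation (assuming both outputs are available as NumPy arrays; the function name and tolerances here are illustrative, not from the original scripts):

```python
import numpy as np

def compare_outputs(keras_out, trt_out, rtol=1e-3, atol=1e-5):
    """Report how far two prediction arrays deviate from each other."""
    keras_out = np.asarray(keras_out, dtype=np.float32)
    trt_out = np.asarray(trt_out, dtype=np.float32)
    abs_diff = np.abs(keras_out - trt_out)
    rel_diff = abs_diff / (np.abs(keras_out) + 1e-12)
    print(f"max abs diff: {abs_diff.max():.6e}")
    print(f"max rel diff: {rel_diff.max():.6e}")
    # The predicted class should usually agree even when probabilities drift.
    same_argmax = np.array_equal(keras_out.argmax(axis=-1),
                                 trt_out.argmax(axis=-1))
    print(f"argmax match: {same_argmax}")
    return bool(np.allclose(keras_out, trt_out, rtol=rtol, atol=atol))

# Synthetic softmax-like outputs with tiny floating-point drift:
a = np.array([[0.1, 0.7, 0.2]], dtype=np.float32)
b = a + 1e-6
print(compare_outputs(a, b))  # drift within tolerance -> True
```

Running this on the Keras and TensorRT outputs for the same input would show whether the deviation is within normal floating-point tolerance or large enough to indicate a real conversion problem.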

A simple example can be found here:

For testing purposes, this is all run on an x64 host!

Log output of “train.py”:

Log output of “convert.py”:

Log output of “test_keras.py”:

Log output of “test_trt.py”:

Log output of “test_classifier.py”:

Thank you!

Greetings,
Mario

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning and accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth