tf_to_trt_image_classification with custom model

Hello everyone!

I wanted to feed a custom model (built with TensorFlow) to the converter and the classifier example here:

I have tried transfer learning on a pretrained VGG19 network (without the top layers). After converting the model to TensorRT, I noticed in some cases substantial differences between the TensorFlow and TensorRT results. I have posted my example here:

https://devtalk.nvidia.com/default/topic/1032763/gpu-accelerated-libraries/deviating-results-with-tensorrt/

Has anyone tried something similar on the Jetson TX2?

Greetings,
Mario

Hi,

We have run VGG19 with TensorRT before and didn’t find any accuracy problems.
Before we investigate, could you check whether the inputs to TensorFlow and TensorRT are identical?
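For example, a quick NumPy sketch (dummy data; both pipelines here are hypothetical, for illustration only) shows how two common preprocessing conventions produce very different input tensors for the same image:

```python
import numpy as np

# Hypothetical example: the same 8-bit image preprocessed two different ways.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224, 3)).astype(np.float32)

# VGG-style preprocessing: RGB -> BGR, subtract the ImageNet channel means.
VGG_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)  # B, G, R
vgg_input = img[..., ::-1] - VGG_MEANS

# A naive pipeline that only scales pixels to [0, 1].
naive_input = img / 255.0

# If the two runtimes are fed through different pipelines, the networks
# see different data and the predicted classes can diverge.
identical = np.allclose(vgg_input, naive_input)
print("inputs identical:", identical)
```

Dumping the first few values of the actual input buffers on both sides and comparing them this way is usually the fastest check.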

Thanks.

Hello!

Thanks for your reply! I have built a complete example to compare the results of TensorFlow (Keras), TensorRT (Python) and the image classification example (TensorRT/C++) here:

https://github.com/fischermario/deeplearning-playground/tree/master/tf_tensorrt_inferencing

The inference output on the example pictures (the same pictures in each case) can be seen here:

Log output of “test_keras.py”:
https://gist.github.com/fischermario/ef3bee494c454c14d9a7e28a5e515e05

Log output of “test_trt.py”:
https://gist.github.com/fischermario/f5139a0da37186c845104770545a3df7

Log output of “test_classifier.py”:
https://gist.github.com/fischermario/376b35be6af5f07ceda237b71bf90f0d

You will notice that there are not only differences in the confidence percentages (which could be tolerated), but also in the predicted class itself (which is obviously a problem).

Please tell me what I did wrong.

Greetings,
Mario

Hi,

The input data of Keras and TensorRT are different.
It’s recommended to use the same Python module for preprocessing (e.g. ImageDataGenerator) and then compare the results.
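One way to keep the pipelines in sync is a single shared preprocessing function that is called for both runtimes, sketched below (the VGG-style mean subtraction is an assumption about how the model was trained; function names are hypothetical):

```python
import numpy as np

def preprocess(img_uint8):
    """Shared preprocessing used for BOTH the Keras model and the TensorRT
    engine, so the two runtimes see identical input tensors.
    Assumption: the model was trained with VGG-style preprocessing
    (RGB -> BGR, ImageNet channel means subtracted)."""
    x = img_uint8.astype(np.float32)
    x = x[..., ::-1]  # RGB -> BGR
    x -= np.array([103.939, 116.779, 123.68], dtype=np.float32)
    return x

# Dummy image standing in for a decoded test picture.
rng = np.random.default_rng(42)
img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

keras_in = preprocess(img)  # would be passed to model.predict(...)
trt_in = preprocess(img)    # would be copied into the TensorRT input buffer

# Both runtimes receive bit-identical data.
print("max abs diff:", np.abs(keras_in - trt_in).max())
```

With a shared function like this, any remaining output difference can be attributed to the conversion or the engine itself rather than to the input data.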

Thanks.