tf_to_trt_image_classification with custom model

Hello everyone!

I wanted to feed a custom model (built with TensorFlow) to the converter and the classifier example here:

I did transfer learning on a pretrained VGG19 network (without the top layers). After converting the model to TensorRT I noticed substantial differences in some of the results when comparing TensorFlow and TensorRT. I have posted my example here:

https://devtalk.nvidia.com/default/topic/1032763/gpu-accelerated-libraries/deviating-results-with-tensorrt/
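
For context, the transfer-learning setup described above typically looks like the following. This is only a hedged sketch, not the actual model from the post: the head layer sizes and the class count of 5 are placeholders, and it assumes Keras with a TensorFlow backend.

```python
# Hypothetical sketch of transfer learning on VGG19 without the top layers.
# Layer sizes and the 5-class output are placeholders, not the poster's model.
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Load the pretrained convolutional base without the fully connected top.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pretrained weights

# Attach a small custom classification head.
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(5, activation="softmax")(x)  # placeholder: 5 custom classes

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```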

Has anyone tried something similar on the Jetson TX2?

Greetings,
Mario

Hi,

We have run VGG19 with TensorRT before and did not find any accuracy problems.
Before we investigate, could you check whether the inputs fed to TensorFlow and TensorRT are identical?
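
One way to run that check, assuming both pipelines can dump their preprocessed input batches as NumPy arrays (the helper name and the synthetic data below are ours, just for illustration):

```python
# Minimal sketch of an input sanity check. If the two preprocessed batches
# differ, the frameworks are not being fed the same data, and any comparison
# of their outputs is meaningless.
import numpy as np

def inputs_match(a, b, tol=1e-5):
    """Return True when two preprocessed input batches agree elementwise."""
    if a.shape != b.shape:
        return False
    return float(np.max(np.abs(a.astype(np.float32) - b.astype(np.float32)))) <= tol

# Synthetic data standing in for the tensors dumped by each pipeline:
tf_input = np.random.rand(1, 224, 224, 3).astype(np.float32)
trt_input = tf_input.copy()
print(inputs_match(tf_input, trt_input))  # identical copies -> True
```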

Thanks.

Hello!

Thanks for your reply! I have built a complete example to compare the results of TensorFlow (Keras), TensorRT (Python) and the image classification example (TensorRT/C++) here:

The output of running inference on the example pictures (the same ones in each case) can be seen here:

Log output of “test_keras.py”:

Log output of “test_trt.py”:

Log output of “test_classifier.py”:

You will notice that there are differences not only in the confidence percentages (which could be neglected) but also in the predicted class itself (which is obviously a problem).

Please tell me what I did wrong.

Greetings,
Mario

Hi,

The input data of Keras and TensorRT are different.
We recommend using the same Python module for preprocessing (e.g. ImageDataGenerator) and then comparing the results.
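
In practice this means factoring the preprocessing into one shared function and feeding the very same array to both runtimes. The sketch below mirrors the "caffe"-mode behavior of `keras.applications.vgg19.preprocess_input` (BGR channel order, per-channel ImageNet mean subtraction); the function name is ours and is only illustrative.

```python
# Hedged sketch: one shared preprocessing function so Keras and TensorRT
# see byte-identical inputs. Mean values and BGR ordering follow VGG's
# usual "caffe"-mode convention; this is an illustration, not the
# poster's actual pipeline.
import numpy as np

VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)  # BGR means

def preprocess(image_rgb):
    """Shared preprocessing: RGB uint8 HxWx3 -> BGR float32, mean-subtracted."""
    x = image_rgb.astype(np.float32)
    x = x[..., ::-1]   # RGB -> BGR, as VGG expects
    x -= VGG_MEAN      # subtract per-channel ImageNet means
    return x

img = (np.ones((224, 224, 3)) * 128).astype(np.uint8)  # stand-in test image
batch = preprocess(img)[np.newaxis]  # feed this SAME array to both runtimes
print(batch.shape)  # (1, 224, 224, 3)
```

If both the Keras script and the TensorRT script call this one function, any remaining output difference must come from the conversion or the runtime itself rather than from the input data.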

Thanks.