TensorFlow image classification model with triplet loss: accuracy drops when translated to TensorRT on Jetson Nano

Hi, I have created a DL model using transfer learning and triplet loss; the base model is EfficientNetB0 in TensorFlow. I trained the model to generate embeddings from car images, with the goal of recognizing the same car across images taken from different cameras by computing the Euclidean distance between those embeddings.
To translate the model to TensorRT I'm using TF-TRT.
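For reference, a minimal TF-TRT conversion sketch (the SavedModel paths and the helper name are illustrative, not the exact code from my scripts; this assumes the Keras model has been exported as a SavedModel first):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def convert_to_trt(saved_model_dir, output_dir):
    """Convert a SavedModel into a TF-TRT optimized SavedModel at FP32."""
    params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP32)
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=saved_model_dir,
        conversion_params=params,
    )
    converter.convert()         # builds the TRT-optimized graph
    converter.save(output_dir)  # writes the converted SavedModel
```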

This is the structure of my model in TensorFlow:

model = Sequential([
    layers.Lambda(preprocess_input, name='preprocessing', input_shape=(img_width, img_height, 3)),
    base_model,  # pretrained EfficientNetB0 backbone (transfer learning)
    layers.Dense(128, activation=None),
    layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),  # L2-normalize embeddings
])

And this is how I prepare the input tensor for an image to be evaluated by the TensorRT model:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image as ImageKeras
from tensorflow.keras.applications.efficientnet import preprocess_input

def load_image(img_path, show=False):
    img = ImageKeras.load_img(img_path, target_size=(180, 180))
    img_tensor = ImageKeras.img_to_array(img)        # (height, width, channels)
    img_tensor = np.expand_dims(img_tensor, axis=0)  # add batch dimension
    img_tensor = preprocess_input(img_tensor)
    return tf.constant(img_tensor)

I have a couple of tests with images of cars taken from different cameras; when computing the distances between the embedding vectors, the closest ones should belong to the same car.
In TensorFlow I achieve 88% accuracy for one test case, which drops to 79% on the Jetson.
In another test case, TensorFlow achieves 86% accuracy, which decreases to 80% on the Jetson.
I'm working with FP32 precision.
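The matching step I described can be sketched with plain NumPy (array and function names are illustrative; since the embeddings are L2-normalized, Euclidean distance and cosine similarity rank pairs identically):

```python
import numpy as np

def pairwise_distances(query_emb, gallery_emb):
    """Euclidean distances between two sets of embeddings.

    query_emb:   (n, d) embeddings from camera A
    gallery_emb: (m, d) embeddings from camera B
    returns:     (n, m) distance matrix
    """
    diff = query_emb[:, None, :] - gallery_emb[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def top1_accuracy(query_emb, gallery_emb, query_ids, gallery_ids):
    """Fraction of queries whose nearest gallery embedding is the same car."""
    dists = pairwise_distances(query_emb, gallery_emb)
    nearest = np.argmin(dists, axis=1)  # index of closest gallery embedding
    return float(np.mean(np.asarray(gallery_ids)[nearest] == np.asarray(query_ids)))
```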

I'm performing the conversion and inference inside a Docker container with base image:

I look forward to your answers; I have been stuck on this for a couple of days.
Best regards,


In Tensorflow I have achieved an accuracy for one test case of 88%, which drops to 79% on the Jetson.

May I know whether the 88% accuracy is measured on Jetson or on a desktop GPU?
If it is not measured on Jetson, please give it a try.


The 88% accuracy was measured on desktop; on the Jetson it drops to 79%. Accuracy is computed from the distances between the embeddings produced by the last layer of the model. The embeddings from the TensorRT model are not as good as the ones from the TensorFlow model.
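One way to quantify "not as good" is to compare the raw embeddings the two backends produce for the same images, before any accuracy computation (a sketch; `tf_emb` and `trt_emb` are assumed to be stacked embeddings computed from identical inputs):

```python
import numpy as np

def embedding_drift(tf_emb, trt_emb):
    """Per-image Euclidean distance between TF and TF-TRT embeddings.

    Both inputs: (n, d) arrays computed from the same images.
    For L2-normalized vectors the distance lies in [0, 2]; values well
    above ~1e-3 at FP32 suggest more than numerical noise.
    """
    per_image = np.linalg.norm(tf_emb - trt_emb, axis=1)
    return float(per_image.max()), float(per_image.mean())
```

If the drift is already large here, the regression is in the converted graph itself rather than in the matching step; the preprocessing Lambda layer would be one thing worth checking after conversion.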

Here is the code that converts the model to TensorRT with TF-TRT, together with the scripts that evaluate it and the test cases: NVIDIA.zip (22.0 MB)
I really appreciate your help; I've been stuck on this for a couple of days.


Since you can also run pure TensorFlow on Jetson with the package shared below:

Could you check whether there is any accuracy drop when running plain TensorFlow on the Nano?
This can help us determine whether the regression comes from the TensorRT integration or from the Jetson hardware.