Hi, I have created a DL model using transfer learning and triplet loss; the base model is EfficientNetB0 in TensorFlow. I trained the model to generate embeddings from car images, with the goal of recognizing the same car across images taken from different cameras by computing the Euclidean distance between those embeddings.
To translate the model to TensorRT I’m using TF-TRT.
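For reference, my conversion step looks roughly like this (a minimal sketch; `saved_model_dir` and `trt_model_dir` are placeholder paths, and the exact `TrtConversionParams` form depends on the TF 2.x version):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the SavedModel with FP32 precision (paths are placeholders)
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP32)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='saved_model_dir',
    conversion_params=params,
)
converter.convert()
converter.save('trt_model_dir')
```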
This is the structure of my model in TensorFlow:
model = Sequential([
    layers.Lambda(preprocess_input, name='preprocessing', input_shape=(img_width, img_height, 3)),
    # ... EfficientNetB0 base and embedding layers omitted here ...
    layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))  # L2-normalize embeddings
])
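For clarity, `tf.math.l2_normalize(x, axis=1)` divides each row (each embedding) by its Euclidean norm, so all embeddings end up on the unit sphere. A NumPy-only sketch of the same operation (just for sanity checking, not part of the model):

```python
import numpy as np

def l2_normalize_rows(x, eps=1e-12):
    # Divide each row by its Euclidean norm, mirroring
    # tf.math.l2_normalize(x, axis=1)
    norms = np.sqrt(np.sum(np.square(x), axis=1, keepdims=True))
    return x / np.maximum(norms, eps)

emb = np.array([[3.0, 4.0], [0.0, 2.0]])
unit = l2_normalize_rows(emb)
print(np.linalg.norm(unit, axis=1))  # -> [1. 1.]
```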
And this is how I prepare the image tensor to be evaluated by the TensorRT model:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image as ImageKeras
from tensorflow.keras.applications.efficientnet import preprocess_input

def load_image(img_path, show=False):
    img = ImageKeras.load_img(img_path, target_size=(180, 180))
    img_tensor = ImageKeras.img_to_array(img)        # (height, width, channels)
    img_tensor = np.expand_dims(img_tensor, axis=0)  # add batch dimension
    img_tensor = preprocess_input(img_tensor)
    return tf.constant(img_tensor)
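The shape handling in `load_image` is just: decode the image to `(height, width, channels)`, then add a batch dimension in front. A NumPy-only sketch with a dummy array standing in for the decoded image:

```python
import numpy as np

# Dummy array standing in for ImageKeras.img_to_array(img)
img_tensor = np.zeros((180, 180, 3), dtype=np.float32)  # (height, width, channels)
batch = np.expand_dims(img_tensor, axis=0)              # -> (1, 180, 180, 3)
print(batch.shape)  # -> (1, 180, 180, 3)
```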
I have a couple of test sets with images of cars taken from different cameras. When computing the distance between the embedding vectors of those images, the closest vectors should belong to the same car.
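Concretely, the matching is a nearest-neighbor lookup on Euclidean distance: for a query embedding, the closest gallery embedding should come from the same car. A small NumPy sketch with made-up embeddings:

```python
import numpy as np

def closest_match(query, gallery):
    # Euclidean distance from the query embedding to each gallery embedding
    dists = np.linalg.norm(gallery - query, axis=1)
    return int(np.argmin(dists)), dists

# Made-up embeddings: index 1 is nearly identical to the query
query = np.array([1.0, 0.0, 0.0])
gallery = np.array([
    [0.0, 1.0, 0.0],
    [0.99, 0.14, 0.0],
    [0.0, 0.0, 1.0],
])
idx, dists = closest_match(query, gallery)
print(idx)  # -> 1
```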
In TensorFlow I achieve 88% accuracy on one test case, which drops to 79% on the Jetson. In another test case, the accuracy decreases from 86% in TensorFlow to 80% on the Jetson.
I’m working with precision FP32.
I’m performing the conversion and inference inside a docker container with base image:
I look forward to your answers; I have been stuck on this for a couple of days.