Serialize the TensorRT engines in my Python code to do inference using TensorFlow

Hi NVIDIA Developers:
@AastaLLL @NVES

I have developed a Python script that performs inference on images using my own trained model. I did transfer learning from SSD-MobileNet-v2.
I started from a saved_model .pb that I optimized with TF-TRT in dynamic mode. That means the optimization isn't complete and the TensorRT engines are only built when inference is launched.
You can find below the Python code I'm using for inference:
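For context, dynamic mode here means the conversion was done without pre-building the engines; a minimal sketch of such a conversion step (the paths are placeholders, not my actual ones) looks like this:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the original SavedModel with TF-TRT.
# Since build() is not called, the graph only contains TRTEngineOp nodes
# and the actual TensorRT engines are built lazily at the first inference.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="path_to_original_saved_model")  # placeholder
converter.convert()
converter.save("path_to_saved_model")                      # placeholder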

import tensorflow as tf
import numpy as np
from PIL import Image
import warnings
import time
import os
import pathlib
warnings.filterwarnings('ignore')

tf.compat.v1.enable_eager_execution()
config = tf.compat.v1.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2     # Limit this process to 20% of the GPU memory
session = tf.compat.v1.Session(config=config)


# Patch the location of gfile
tf.gfile = tf.io.gfile

def run_inference_for_single_image(model, image_path):
    # Load the image and convert it to a NumPy array
    image = np.array(Image.open(image_path))

    # The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
    input_tensor = tf.convert_to_tensor(image)

    # The model expects a batch of images, so add an axis with `tf.newaxis`.
    input_tensor = input_tensor[tf.newaxis,...]

    # Run inference
    model_fn = model.signatures['serving_default']
    output_dict = model_fn(input_tensor)  # line that makes the inference

    # All outputs are batch tensors.
    # Convert to numpy arrays, and take index [0] to remove the batch dimension.
    # We're only interested in the first num_detections.
    num_detections = int(output_dict.pop('num_detections'))
    output_dict = {key:value[0, :num_detections].numpy()
                   for key,value in output_dict.items()}
    output_dict['num_detections'] = num_detections

    # detection_classes should be ints.
    output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)

    return output_dict

print("========================================================")
print('Loading model...', end='')

# Load saved model and build the detection function
detection_model = tf.compat.v2.saved_model.load(r"path_to_saved_model")

print("========================================================")
print("Fin de chargement du modele - Recuperation des images")

PATH_TO_TEST_IMAGES_DIR = pathlib.Path(r"path_to_images_repertory")
TEST_IMAGE_PATHS = sorted(list(PATH_TO_TEST_IMAGES_DIR.glob("image.png")))
print(TEST_IMAGE_PATHS)


print("Starting Inference - warm up ")
for image_path in TEST_IMAGE_PATHS:
	out = run_inference_for_single_image(detection_model, image_path) 


print("Starting Inferenc")
for i in range (1000):
	for image_path in TEST_IMAGE_PATHS:
    		out = run_inference_for_single_image(detection_model, image_path)
	print(i)

print("\n########################################### Resultats ###########################################\n")
print("Boxes number : ", out["num_detections"])
print("Confidence score: ", out["raw_detection_scores"])
print("Boxes class : ", out["detection_classes"])
print("Boxes coordinates : ", out["detection_boxes"])
print("\n###############################################################################################\n")

I have a serious RAM problem when launching this code (the Jetson freezes). After some research, I realized that it is partly the creation of the engines that consumes a lot of RAM. That is why I would like to serialize them, i.e. write each engine to a file so it can be stored and reused later for inference. At inference time, I would simply deserialize the engine.
So I would like to add some code to serialize the TensorRT engines, but I don't know how or where to do it. I know the engines are created at inference time, but I don't know how to retrieve them.
Furthermore, all my input images have the same dimensions (320x240), so the serialized engines should be reusable without any problem.
I have seen this documentation: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation
But I don't know how to access the engine object.
Do you have an idea?

Thanks
Paul Griffoul

Hi,
This looks like a Jetson issue. We recommend you raise it on the respective platform via the link below

Thanks!

Hi,

Since you are using the TF-TRT framework, please refer to the TF-TRT documentation instead:
For example, you can serialize the model with the sample below.
For example, you can serialize the model with the sample below:
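A minimal sketch with TrtGraphConverterV2 (assuming a TF2 SavedModel; the paths and the 320x240 input shape are placeholders you should adapt):

import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Run the TF-TRT conversion on the original SavedModel
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="path_to_original_saved_model")   # placeholder
converter.convert()

# Pre-build the TensorRT engines by feeding a representative input.
# The shape and dtype must match the images used at inference time.
def input_fn():
    yield (np.zeros((1, 240, 320, 3), dtype=np.uint8),)     # placeholder shape

converter.build(input_fn=input_fn)

# Save the converted model; the pre-built engines are serialized with it
# and will be deserialized at load time instead of being rebuilt.
converter.save("path_to_trt_saved_model")                    # placeholder

You can then load the saved directory with tf.saved_model.load() exactly as in your script; since the engines were already built and serialized, the first inference will not rebuild them.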

In general, if the model supports dynamic input, you can use the serialized file even when the input image has a different size.

Thanks.