How to load and run TRT optimized TensorFlow 2 model on Nano

I am trying to deploy the SSD MobileNet v2 model from the TensorFlow 2 model zoo on a Jetson Nano. First, I optimized the model with TF-TRT using the code below:

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = './ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/'
output_saved_model_dir = './models/tensorRT/'
num_runs = 1

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
conversion_params = conversion_params._replace(max_workspace_size_bytes=(1 << 25))
conversion_params = conversion_params._replace(precision_mode="FP16")
# conversion_params = conversion_params._replace(maximum_cached_engines=100)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=input_saved_model_dir,
    conversion_params=conversion_params)

def my_input_fn():
    # SSD MobileNet v2 (320x320) expects a uint8 batch of images;
    # the input function must yield inputs as a tuple.
    for _ in range(num_runs):
        inp1 = np.random.randint(0, 255, size=(1, 320, 320, 3)).astype(np.uint8)
        yield (inp1,)

converter.convert()
converter.build(input_fn=my_input_fn)  # optionally pre-build the TRT engines
converter.save(output_saved_model_dir)

I successfully get a saved_model, but I do not know how to load it and use it for detection. Can I use jetson.inference.detectNet to load my model and run detection?
Using the same method as for loading a TensorFlow 2 saved model, the model loads successfully, but loading the labels throws this error:

AttributeError: module 'tensorflow' has no attribute 'gfile'

Googling, I found that this error appears when you are using TensorFlow 1; however, I installed TensorFlow 2 on my board. Can someone help me run the TensorFlow 2 model on my Nano?


The conversion you ran is TF-TRT, so the output is still a TensorFlow SavedModel that needs the TensorFlow runtime.
If you want to run it with jetson-inference, a pure TensorRT engine is required.
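One common route to a pure TensorRT engine (a sketch under assumptions, not something this thread confirms; file names are illustrative, and the NMS ops in TF2 detection models can complicate the ONNX export) is to export the original SavedModel to ONNX with tf2onnx and then build the engine with trtexec on the Nano itself, so the engine matches its GPU:

```shell
pip install tf2onnx

# Export the original (pre-TF-TRT) SavedModel to ONNX
python -m tf2onnx.convert \
    --saved-model ./ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model \
    --output ssd_mobilenet_v2.onnx \
    --opset 13

# Build a serialized FP16 TensorRT engine (trtexec ships with JetPack)
/usr/src/tensorrt/bin/trtexec \
    --onnx=ssd_mobilenet_v2.onnx \
    --saveEngine=ssd_mobilenet_v2.engine \
    --fp16
```

The resulting .engine file is the kind of artifact TensorRT-based runtimes such as jetson-inference consume.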

For the attribute error, does your script specify that the TensorFlow 1 API should be used?
For example:

import tensorflow.compat.v1 as tf

If so, please remove it and use the TensorFlow 2 API instead.
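For the TF2-only path, a minimal sketch (the stand-in model, paths, and label-file format below are assumptions; with the real model you would point tf.saved_model.load at your TF-TRT output directory): a TF-TRT SavedModel loads like any TF2 SavedModel, and the TF1 tf.gfile calls that raise the AttributeError have a direct TF2 replacement under tf.io.gfile:

```python
import numpy as np
import tensorflow as tf

# Stand-in SavedModel so the sketch is self-contained; the real SSD model's
# serving signature returns detection_boxes / detection_scores / etc.
class Detector(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 320, 320, 3], tf.uint8)])
    def serve(self, x):
        return {"detection_scores": tf.reduce_mean(tf.cast(x, tf.float32))}

det = Detector()
tf.saved_model.save(det, "/tmp/demo_model",
                    signatures={"serving_default": det.serve})

# Loading and running a (TF-TRT) SavedModel is plain TF2 API:
model = tf.saved_model.load("/tmp/demo_model")
infer = model.signatures["serving_default"]
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)
out = infer(x=tf.constant(image))
print(sorted(out.keys()))  # ['detection_scores']

# The TF1 tf.gfile module moved to tf.io.gfile in TF2 -- which is why
# label loading fails: replace tf.gfile.GFile with tf.io.gfile.GFile.
with tf.io.gfile.GFile("/tmp/labels.txt", "w") as f:
    f.write("person\ncar\n")
with tf.io.gfile.GFile("/tmp/labels.txt") as f:
    labels = f.read().splitlines()
print(labels)  # ['person', 'car']
```

If the label loading lives in third-party code (e.g. the object detection utilities), updating that code to tf.io.gfile resolves the same error.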
