Run TF2 object detection model after conversion to TensorRT on Jetson NANO

Hi, this is a follow-up to my questions in this topic.

I have a TF2 object detection model, fine-tuned to recognize a custom object, and I need to deploy it on Jetson NANO.
After copying the model to the NANO, I tried to convert it to TensorRT by using:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

def main():
    input_saved_model_dir = 'my_custom_model_path/saved_model'
    output_saved_model_dir = 'trt_models/converted_model'
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
    # Build the TensorRT-optimized graph and write the converted SavedModel to disk
    converter.convert()
    converter.save(output_saved_model_dir)

if __name__ == '__main__':
    main()
This script produced another saved model in the selected output path. I assume this can be run after installing the TF Object Detection API on the NANO? I'm trying to do that, but the installation has been running for the past 3 days; it seems to be stuck resolving dependencies.

Or, is there any other way to run it natively after the conversion, for example by using detectnet?



Please note that a TensorRT model is NOT portable.
You will need to copy the TensorFlow model and do the conversion directly on the Jetson Nano.

An alternative is to convert the model into ONNX via tf2onnx.
Then you can deploy the ONNX file with our TensorRT executable:

/usr/src/tensorrt/bin/trtexec --onnx=[file]
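For reference, a typical tf2onnx invocation looks like the following; the paths are placeholders matching the conversion script earlier in the thread, and the `--opset` value may need adjusting for your model:

```shell
# Convert the TF2 SavedModel to ONNX (run where tf2onnx is installed)
python3 -m tf2onnx.convert \
    --saved-model my_custom_model_path/saved_model \
    --output model.onnx \
    --opset 13
```

The resulting model.onnx file is what you would pass to trtexec via the --onnx flag.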


Hi, thanks for the reply.

I tried using the tf2onnx repository, but when converting my model I got the error:

ValueError: StridedSlice: attribute new_axis_mask not supported

As I mentioned, I copied the model to the Jetson Nano and ran the script I posted there, so the converted saved model was produced on the Nano.
Can I somehow use it now?


The usage is similar to TensorFlow in a desktop environment.
Since some layers are not supported by ONNX, please run inference through the TensorFlow interface.

Below is an example for your reference:


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.