How to deploy SSD MobileNet V2 for inference on Jetson Nano

I have two custom SSD MobileNet V2 models, one trained with TensorFlow 1.14.0 and one with TensorFlow 2.5.0. The TensorFlow 1 model is already a frozen_graph.pb, and the TensorFlow 2 model is already exported. When converting both to ONNX, opset 11 works for both, but opset 9 raises an error; I assume it is a compatibility problem with MobileNet V2. In any case, I don't know how to convert the TensorFlow 1 model to a TensorRT engine and use it for live object detection.
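For reference, the opset 11 conversion that works here can be reproduced with tf2onnx's Python API. This is a minimal sketch, not a definitive recipe: the input and output node names below are the TF1 Object Detection API defaults and may not match your graph, so check them (for example with Netron) before running.

```python
# Minimal sketch: convert a TF1 frozen graph to ONNX with opset 11.
# Node names are the TF1 Object Detection API defaults (an assumption);
# adjust them to match your exported graph.
import tensorflow as tf
import tf2onnx

graph_def = tf.compat.v1.GraphDef()
with open("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=[
        "detection_boxes:0",
        "detection_scores:0",
        "detection_classes:0",
        "num_detections:0",
    ],
    opset=11,  # opset 11 succeeds here; opset 9 fails, as observed above
    output_path="ssd_mobilenet_v2.onnx",
)
```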

Hi,

We have an example for converting a TensorFlow-based SSD model.
Could you follow the steps in that example first?
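For reference, the ONNX-to-TensorRT step on the Nano usually goes through either `trtexec` (shipped under `/usr/src/tensorrt/bin` on JetPack) or the TensorRT Python API. Below is a minimal sketch of the Python route; it assumes TensorRT 8.x as shipped with recent JetPack releases, and the file names are placeholders.

```python
# Minimal sketch: build a TensorRT engine from the ONNX model on the Nano.
# Assumes TensorRT 8.x (JetPack); file names are placeholders.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; keep this small on the Nano
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 is much faster on the Nano

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized)

build_engine("ssd_mobilenet_v2.onnx", "ssd_mobilenet_v2.engine")
```

The equivalent one-liner is `/usr/src/tensorrt/bin/trtexec --onnx=ssd_mobilenet_v2.onnx --saveEngine=ssd_mobilenet_v2.engine --fp16`. For live detection, deserialize the engine with `trt.Runtime` and feed camera frames through it with pycuda; the Python samples under `/usr/src/tensorrt/samples` show the buffer handling.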

Thanks.
