Deploy SSD Mobilenet V2 on Nano

I’ve trained a custom SSD MobileNet model using the TensorFlow 2 Object Detection API.
I successfully converted it to an .onnx model and a .trt engine. Now how can I deploy this model on the Nano? How do I deploy it with DeepStream?
JP 4.6

Hi,

Please note that the .trt engine file is not portable across devices or TensorRT versions.
You will need to generate the engine file directly on the Nano, with the TensorRT version installed there.
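If the ONNX file is already on the Nano, a minimal sketch of rebuilding the engine locally with the TensorRT Python API (as shipped with JetPack 4.6) could look like the following; the file names and the FP16/workspace settings are assumptions, not taken from this thread:

```python
import tensorrt as trt

# Build the engine on the Nano itself so it matches the locally installed TensorRT.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

# Parse the ONNX model exported from the TF2 Object Detection API (placeholder path).
with open("ssd_mobilenet_v2.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28      # keep the build workspace small on the Nano
config.set_flag(trt.BuilderFlag.FP16)    # FP16 usually gives the best speed on Nano

# Serialize the engine and save it for TensorRT / DeepStream to load later.
serialized_engine = builder.build_serialized_network(network, config)
with open("ssd_mobilenet_v2.engine", "wb") as f:
    f.write(serialized_engine)
```

The same build can also be done with the trtexec tool; either way it has to run on the Nano so the resulting engine matches that device and TensorRT version.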

Once you have a compatible engine file, please check the following samples to deploy it with TensorRT or DeepStream.
TensorRT: Jetson/L4T/TRT Customized Example - eLinux.org
DeepStream: How to use ssd_mobilenet_v2 - #3 by AastaLLL

Thanks.

Thanks for your reply. I want to know something then. I trained my model on my laptop, with its config file set up as in the TensorFlow 2 Object Detection API guide, then copied it to my Nano. Inference works well with the .pb format, but when I tried to convert it to ONNX and then from ONNX to a .trt engine, I couldn’t, because my TensorRT version is lower than the required TensorRT 8.2.3.0. So I want to know the correct way to convert this .pb model trained with the TensorFlow Object Detection API (I will be training others), and which JetPack version do you recommend?
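For context, the .pb → ONNX step I’m attempting is roughly the sketch below; the paths and opset number are placeholders rather than my exact command:

```python
import subprocess

# Convert the exported SavedModel (.pb) from the TF2 Object Detection API to ONNX
# with tf2onnx; the directory and file names below are placeholders.
subprocess.run(
    [
        "python", "-m", "tf2onnx.convert",
        "--saved-model", "exported_model/saved_model",
        "--output", "ssd_mobilenet_v2.onnx",
        "--opset", "11",
    ],
    check=True,
)
```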

Hi,

Does your TensorFlow package need TensorRT 8.2.3?

How did you install it on the Jetson Nano?
If you use our prebuilt package, it should work well with the TensorRT installed by the same JetPack.
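As a quick sanity check, you can confirm which versions are actually picked up on the Nano, for example:

```python
# Print the TensorFlow and TensorRT versions visible to Python on the Nano.
import tensorflow as tf
import tensorrt as trt

print("TensorFlow:", tf.__version__)
print("TensorRT:", trt.__version__)
```

With the prebuilt JetPack packages, the TensorFlow wheel is built against the TensorRT that the same JetPack installs, so these versions should line up.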

Thanks.
