How to deploy SSD MobileNet V2 on Jetson Nano

I’ve been searching and experimenting for a few months with no results so far. I trained a custom SSD MobileNet V2 model with TensorFlow 1.15.0 on my PC using the Object Detection API and exported it to frozen_graph.pb. I want to run live inference with the Pi camera, but I need to convert the model to a format other than plain TensorFlow to reach at least 20 FPS. What should I do? Any guide or help?

Hi,

To run a DNN model on a live camera feed, we recommend trying our DeepStream SDK.

Please follow the instructions shared in topic 215158 to convert the model to UFF format.
Then deploy it with DeepStream using the sample below:

/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_SSD
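For reference, a rough sketch of the conversion and deployment steps, based on the objectDetector_SSD sample. File names, the CUDA version, and the converter path are assumptions for a Jetson Nano with JetPack 4.x — check topic 215158 and the sample README for the exact preprocessing config:

```shell
# Convert the frozen TensorFlow graph to UFF.
# The -p config.py preprocessing script (graphsurgeon node mapping) comes
# from the sample/topic instructions; paths here are assumptions.
python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py \
    frozen_inference_graph.pb -O NMS -p config.py

# Build the custom SSD output parser shipped with the sample.
cd /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_SSD
export CUDA_VER=10.2   # assumption: JetPack 4.x on Jetson Nano
make -C nvdsinfer_custom_impl_ssd

# Run the sample app with its SSD config (point it at your .uff file).
deepstream-app -c deepstream_app_config_ssd.txt
```

Once this works with the standard model, swapping in your custom .uff and updating the label file in the config should be enough for live camera inference.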

We have verified this sample with the standard ssd_mobilenet_v2 model before.

Thanks.
