How to deploy the ssdlite_mobilenet_v2 model on the Jetson Nano with DeepStream 6.0?

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version: TensorRT 8.2.1 (default)
• Issue Type (questions, new requirements, bugs): questions

Thank you for reading.

Following the guidance in the link below, I have successfully deployed ssd_mobilenet_v2 with DeepStream 6.0:
How to use ssd_mobilenet_v2 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

But I ran into problems when going one step further and trying to deploy ssdlite_mobilenet_v2. I found the following topic to guide my deployment and hit the same error described there:
Can SSDLite Mobilenet V2 work with Jetson Nano+Deepstream? - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

The answer in that topic was rather vague: it attributed the error to a mismatch in the number of categories and suggested modifying the number of categories in the config.py file.
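For context, in the sampleUffSSD-style config.py used for the UFF conversion, that category count is the numClasses parameter of the NMS plugin node. A sketch of the relevant part, assuming the standard COCO setting of 91 (90 classes + background):

import graphsurgeon as gs

# NMS plugin node from a sampleUffSSD-style config.py; numClasses is the
# value the linked answer suggests changing.
NMS = gs.create_plugin_node(
    name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,          # 90 COCO classes + 1 background
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)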

But my ssdlite_mobilenet_v2 model was downloaded from the official website and has the same number of categories as ssd_mobilenet_v2, so I don’t know how to proceed with the next step.

Thank you very much for your patience; I look forward to your reply.

Hi,

Have you tried converting the model into a TensorRT engine?
Since the model is based on TensorFlow v1, please convert it through .pb → .uff → .trt, the same way as ssd_mobilenet_v2.
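A rough sketch of that flow in Python, assuming a sampleUffSSD-style config.py (with its preprocess() plugin mapping), the standard Input/NMS node names, and a 300x300 input; adjust the file names and shapes for your model:

import graphsurgeon as gs
import uff
import tensorrt as trt
import config  # the sampleUffSSD-style config.py, assumed to define preprocess()

# Step 1: .pb -> .uff, applying the plugin substitutions from config.py
dynamic_graph = gs.DynamicGraph("frozen_inference_graph.pb")
config.preprocess(dynamic_graph)
uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["NMS"],
                    output_filename="ssdlite_mobilenet_v2.uff")

# Step 2: .uff -> serialized TensorRT engine (TensorRT 8.x Python API)
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")   # registers the NMS_TRT plugin
builder = trt.Builder(logger)
builder.max_batch_size = 1
network = builder.create_network()        # UFF requires implicit batch mode
build_config = builder.create_builder_config()
build_config.max_workspace_size = 1 << 28
parser = trt.UffParser()
parser.register_input("Input", (3, 300, 300))
parser.register_output("NMS")
parser.parse("ssdlite_mobilenet_v2.uff", network)
plan = builder.build_serialized_network(network, build_config)
with open("ssdlite_mobilenet_v2.engine", "wb") as f:
    f.write(plan)

The resulting engine file can then be referenced via model-engine-file in the nvinfer config.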

If the conversion fails, you can also try deploying the model with the Triton Inference Server using the TensorFlow backend.
Some examples can be found in the folder below:

/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app-triton
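If you go the Triton route, each model in the repository needs a config.pbtxt. A sketch for a TF Object Detection API frozen graph; the tensor names below are the standard image_tensor / detection_* names of that API, so please verify them against your model:

name: "ssdlite_mobilenet_v2"
platform: "tensorflow_graphdef"
max_batch_size: 1
input [
  {
    name: "image_tensor"
    data_type: TYPE_UINT8
    format: FORMAT_NHWC
    dims: [ 300, 300, 3 ]
  }
]
output [
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 100, 4 ]
  },
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "detection_classes"
    data_type: TYPE_FP32
    dims: [ 100 ]
  },
  {
    name: "num_detections"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]

The frozen graph itself would sit next to it as <model_repo>/ssdlite_mobilenet_v2/1/model.graphdef.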

Thanks.


Thanks for your reply.
I will try Triton to solve this problem.
But I noticed that the folder you pointed to only contains config files.
Do you have a simple example of deploying the Triton plugin in DeepStream using C++,
just like the files in the folder below?

deepstream-6.0\sources\apps\sample_apps

Hi,

The config can be deployed with the deepstream-app binary, which is a C++ sample application.
You can find the source code in the folder below:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-app
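Once built, it is launched with one of the Triton configs from the folder mentioned earlier, e.g. (the config file name is a placeholder; use whichever one you adapted):

$ deepstream-app -c <your_deepstream-app-triton_config>.txt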

Thanks.
