SSD Mobilenet trained on custom data on DeepStream

How can we train our own SSD Mobilenet v2 model to be deployed on DeepStream? Currently, TLT only supports training SSD with a ResNet backbone. I also tried converting the TensorFlow model to UFF, but unfortunately it throws an error, as described in https://devtalk.nvidia.com/default/topic/1049399/tensorflow-model-to-tensorrf-uff-model-convert-error/

Hi,

Do you want to train an SSD Mobilenet v2, or deploy one on the DeepStream SDK?

For training, you should be able to retrain it with TensorFlow directly if TLT doesn't support it yet.
For deployment, please check this comment for detailed instructions:

https://devtalk.nvidia.com/default/topic/1066088/deepstream-sdk/how-to-use-ssd_mobilenet_v2/post/5399649/#5399649

Thanks.

Hi AastaLLL. Yes, we did train with TensorFlow and also evaluated the model. The problem is the conversion to UFF. TensorRT doesn't support some TensorFlow layers, and we came across discussions and answers on the forum suggesting C++ plugins for the unsupported layers. This is the error we are currently facing (https://devtalk.nvidia.com/default/topic/1049399/tensorflow-model-to-tensorrf-uff-model-convert-error/). Is there another way to use the frozen_graph.pb file in the DeepStream app directly? Or any other weights format for the conversion? We are willing to send you the weights to reproduce the issue if needed.
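
For context, the conversion we are attempting follows the config.py approach from the TensorRT sampleUffSSD; a minimal sketch is below. The node names and plugin parameters (Input, GridAnchor, NMS, concat_1, the anchor settings, and so on) are assumptions taken from that sample and may not match our exported graph exactly.

# config.py preprocessing file for convert-to-uff, invoked roughly as:
#   convert-to-uff frozen_inference_graph.pb -O NMS -p config.py
# It maps the TensorFlow subgraphs that TensorRT cannot parse onto TRT plugin nodes.
import graphsurgeon as gs
import tensorflow as tf

# Static input in place of the Preprocessor subgraph (NCHW, 300x300 assumed).
Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32,
                       shape=[1, 3, 300, 300])

# GridAnchor and NMS plugins replace the anchor generator and the Postprocessor.
PriorBox = gs.create_plugin_node("GridAnchor", op="GridAnchor_TRT",
                                 minSize=0.2, maxSize=0.95,
                                 aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
                                 variance=[0.1, 0.1, 0.2, 0.2],
                                 featureMapShapes=[19, 10, 5, 3, 2, 1],
                                 numLayers=6)
NMS = gs.create_plugin_node("NMS", op="NMS_TRT",
                            shareLocation=1, varianceEncodedInTarget=0,
                            backgroundLabelId=0, confidenceThreshold=1e-8,
                            nmsThreshold=0.6, topK=100, keepTopK=100,
                            numClasses=91, inputOrder=[1, 0, 2],
                            confSigmoid=1, isNormalized=1)

# FlattenConcat plugin nodes for the box location/confidence concatenations.
concat_priorbox = gs.create_node("concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT",
                                       dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT",
                                        dtype=tf.float32, axis=1, ignoreBatch=0)

namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse each unsupported namespace into its plugin node, then drop the
    # original graph outputs so NMS becomes the single output of the UFF graph.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)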

Hi,

Are you using the SSD Mobilenet v2?

For SSD Mobilenet v2, you only need the FlattenConcat plugin, which is already available in the DeepStream/TensorRT sample:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/nvdsinfer_custom_impl_ssd/nvdsiplugin_ssd.cpp
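
Build that directory with make to produce libnvdsinfer_custom_impl_ssd.so, then point the nvinfer config in the same sample at your UFF file and the library. A rough sketch of the relevant entries in config_infer_primary_ssd.txt (the values are only illustrative; adjust them to your model):

[property]
uff-file=sample_ssd_mobilenet_v2.uff
model-engine-file=sample_ssd_mobilenet_v2.uff_b1_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-input-dims=3;300;300;0
uff-input-blob-name=Input
output-blob-names=MarkOutput_0
num-detected-classes=91
network-mode=0
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so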

Thanks.