DeepStream 5.0 Semantic Segmentation using a DeepLabV3+ TensorRT model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
AGX Xavier
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• Issue Type( questions, new requirements, bugs)

I would like some advice on how to approach integrating my DeepLabV3+/MobileNetV3 FP16 TRT model into DeepStream. I saw a demo application using Mask R-CNN in the TLT demo repository, so I assume it is possible to integrate new models into DeepStream.
Is it enough to write some code to decode/parse the output of my semantic segmentation model, or are there other steps involved that I'm unaware of?
If there is documentation or a tutorial on running custom models (with a different architecture than the demo applications), a link would be appreciated!

Edit: This is the model I would like to integrate:


It seems that you are using a TensorFlow-based model; a related sample can be found here:


And here is some information on integrating a customized model:


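To give a concrete starting point: for a segmentation model, the gst-nvinfer configuration typically declares the network type as segmentation (`network-type=2`) and points at the serialized engine. A minimal sketch follows — the engine file name and output blob name are placeholders, and you should check them against your actual model:

```
[property]
gpu-id=0
# Serialized TensorRT engine (file name is a placeholder)
model-engine-file=deeplabv3_mobilenetv3_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=21
# 2 = segmentation network
network-type=2
# Output layer name is model-specific; inspect your engine to find it
output-blob-names=final_output
segmentation-threshold=0.0
```

The deepstream-segmentation-test sample shipped with the SDK uses a config of this shape and is a good reference for the downstream nvsegvisual element as well.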

Hi, I am interested in running a similar DeepLab model. Any luck integrating it into the DeepStream pipeline? Would you be willing to share your DeepLabV3+/MobileNetV3 FP16 TRT model and DeepStream config file so I can learn by example? Thanks a lot for your help.

Hey @ynjiun, unfortunately I cannot share my model/config files, but I described some of the steps I took to make my DeepLabV3+/MobileNetV3 model work in this post: Save serialized TF-TRT engine to reuse in Deepstream

@blubthefish, no problem. Thanks for sharing those steps. They're helpful.