Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1.3
• Issue Type (questions, new requirements, bugs): question
I would like to get some advice on how to approach integrating my DeeplabV3+/MobilenetV3 FP16 TRT model into DeepStream. I saw a demo application using MaskRCNN in the TLT demo repository, so I assume it should be possible to integrate new models into DeepStream?
Is it enough to write some code to decode/parse the output of my semantic segmentation model, or are there other steps involved that I'm unaware of?
If there is documentation or a tutorial on running custom models (with a different architecture than the demo applications), a link would be appreciated!
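For what it's worth, for semantic segmentation the usual route is to tell nvinfer that the network is a segmentation network (`network-type=2`) rather than writing a bounding-box parser. A minimal config sketch, modeled loosely on the `deepstream-segmentation-test` sample that ships with DeepStream 5.0 (the engine filename, input dims, and class count below are placeholders you would replace for your own model):

```ini
[property]
gpu-id=0
# Normalization: adjust to match your model's training preprocessing
net-scale-factor=0.00784313725490196
offsets=127.5;127.5;127.5
# Placeholder path to your serialized TensorRT engine
model-engine-file=deeplabv3plus_mobilenetv3_fp16.engine
# Placeholder input dimensions (C;H;W) - set to your model's input
infer-dims=3;512;512
batch-size=1
## 2 = FP16 precision
network-mode=2
## 2 = segmentation network
network-type=2
# Placeholder: number of segmentation classes your model outputs
num-detected-classes=21
interval=0
segmentation-threshold=0.0
```

With `network-type=2`, nvinfer attaches segmentation metadata downstream, which the `nvsegvisual` element can render as a class-colored mask (this is how the segmentation sample app visualizes its output). If your model's output layout differs from what nvinfer expects (e.g. logits vs. argmaxed class IDs), that is where a custom parsing step would come in.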
Hi, I am interested in running a similar deeplab model. Any luck integrating it into the DeepStream pipeline? Would you be willing to share your DeeplabV3+/MobilenetV3 FP16 TRT model and DeepStream config file so I can learn by example? Thanks a lot for your help.