Steps to use my custom segmentation model in DeepStream

• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 4.0
• JetPack Version (valid for Jetson only) - 4.3
• TensorRT Version - 6.0.1.10
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) - Question

Hello, I have a .uff model based on U-Net, trained for lane detection. I currently run inference with OpenCV and I would like to start using DeepStream. What do I need to do to run this segmentation model in DeepStream? Are there any courses or tutorials on using a custom segmentation model with DeepStream?
I had previously tried to train a lane detection model using the MaskRCNN implementation in TLT, but I couldn't get good performance, so I trained a U-Net externally instead.
Any help would be appreciated. Thanks in advance!

I was able to run my model in DeepStream using the MaskRCNN config sample. My problem now is that the inference output is not shown in the visualization. Can anyone help me with this? The model I'm using has infer-dims=3;384;384.
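For context, the nvinfer config I am adapting looks roughly like the sketch below; the uff-file path, blob names, and class count are placeholders rather than my exact values, and I am assuming network-type=2 is the right mode for a semantic segmentation model:

    [property]
    gpu-id=0
    # normalize 0-255 pixel values to 0-1
    net-scale-factor=0.00392156862745098
    # 0 = RGB input
    model-color-format=0
    # placeholder path to my .uff model
    uff-file=unet_lane.uff
    infer-dims=3;384;384
    # placeholder input/output tensor names from the UFF graph
    uff-input-blob-name=input_1
    output-blob-names=sigmoid/Sigmoid
    batch-size=1
    # 2 = FP16 precision
    network-mode=2
    # 2 = semantic segmentation
    network-type=2
    # placeholder: lane + background
    num-detected-classes=2
    segmentation-threshold=0.5
    gie-unique-id=1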

I think you can start with deepstream-segmentation-test if you are using a UNET model.
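The sample lives under sources/apps/sample_apps/deepstream-segmentation-test in the SDK. From memory (please check the README in that directory for the exact usage), after building it you run it with a config file and one or more JPEG/MJPEG inputs, roughly:

    ./deepstream-segmentation-app dstest_segmentation_config_semantic.txt sample_image.jpg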

Hello @bcao, thank you for your reply!
I did use deepstream-segmentation-test, and it worked! But from what I could see, this sample only takes JPEG images as input and outputs the predicted masks. I would like to use my model in a setup that receives a video stream and overlays the detections on top of that video, like the MaskRCNN config sample in samples/configs/tlt_pretrained_models/deepstream_app_source1_mrcnn.txt, which I would then run with deepstream-app -c. Is it possible to do so?
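To make it concrete, what I have in mind is a deepstream-app config whose [primary-gie] section points to my segmentation nvinfer config, along these lines (the config-file and app config names below are placeholders, and the keys are copied from the mrcnn sample):

    [primary-gie]
    enable=1
    gpu-id=0
    # placeholder: the segmentation nvinfer config for my U-Net
    config-file=config_infer_segmentation_unet.txt
    batch-size=1
    gie-unique-id=1
    nvbuf-memory-type=0

which I would then launch with something like:

    deepstream-app -c deepstream_app_segmentation_unet.txt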