Steps to use my custom segmentation model in DeepStream

• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 4.0
• JetPack Version (valid for Jetson only) - 4.3
• TensorRT Version - 6.0.1.10
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) - Question

Hello, I have a UNet model for lane detection, exported to .uff. I currently use OpenCV for inference and I would like to start using DeepStream. What do I need to do to run this segmentation model in DeepStream? Are there any courses or tutorials on using a custom segmentation model with DeepStream?
I previously tried to train a lane detection model using the MaskRCNN implementation in TLT, but I couldn’t get good performance, so I trained a UNet externally instead.
Any help would be appreciated. Thanks in advance!

I was able to run my model in DeepStream using the MaskRCNN config sample. My problem now is that the inference results are not shown in the visualization. Can anyone help me with this? The model I’m using has infer-dims=3;384;384.

I think you can start with deepstream-segmentation-test if you are using a UNet model.
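
For reference, that sample drives nvinfer with a plain config file; for a UFF UNet it looks roughly like the sketch below. The paths, blob names, class count and normalization here are placeholders for your own model, and key names can vary between DeepStream versions, so compare against the config file shipped with the sample.

    [property]
    # Placeholder normalization; use whatever your training pipeline expects
    net-scale-factor=0.003921569
    # Placeholder paths; point these at your own .uff file and generated engine
    uff-file=unet_lane.uff
    model-engine-file=unet_lane.uff_b1_fp32.engine
    # C;H;W of the network input
    infer-dims=3;384;384
    # Placeholder tensor names; use the input/output names of your UFF graph
    uff-input-blob-name=input_1
    output-blob-names=sigmoid/Sigmoid
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=0
    # 2 selects nvinfer's segmentation post-processing
    network-type=2
    num-detected-classes=2
    segmentation-threshold=0.0
    gie-unique-id=1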

Hello @bcao, thank you for your reply!
I did use deepstream-segmentation-test, and it worked! But from what I could see, this sample only takes JPEG images as input and outputs the masks. I would like to use my model in a sample that receives a video stream and draws the detections on top of that video, like the MaskRCNN config sample in samples/configs/tlt_pretrained_models/deepstream_app_source1_mrcnn.txt, which I would then run with deepstream-app -c. Is it possible to do so?

Hello everyone. I’ve been trying nonstop to make this work. I changed deepstream-segmentation-test to accept h264 streams as input and it worked on my dGPU desktop, but I can’t get it to run on the Jetson Nano… I think it may be because of this line of code:

    
    #ifdef PLATFORM_TEGRA
        g_object_set (G_OBJECT (decoder), "mjpeg", 1, NULL);
    #endif
    

I’ve been trying to find which property name to use in place of "mjpeg" to decode h264, but I haven’t managed to make it work so far.
When I run the sample on the Jetson Nano, the stream seems to get stuck on the first frame, as in the image below.

Can someone point me in the right direction?

Thanks in advance.

Can you refer to the deepstream-test1 pipeline? It receives an h264 stream on both dGPU and Jetson.
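
For reference, the decode front end of deepstream-test1 looks roughly like the sketch below; it replaces the JPEG source/decoder of the segmentation sample with filesrc -> h264parse -> nvv4l2decoder feeding nvstreammux. Element and property names follow the DeepStream 4.x C samples as far as I can tell, so double-check them against the sample shipped with your version.

    /* Sketch only: builds an H.264 decode front end the way deepstream-test1
     * does and attaches it to an already-created nvstreammux. Error handling
     * and the rest of the pipeline (nvinfer, visualization, sink) are omitted. */
    #include <gst/gst.h>

    static gboolean
    build_h264_frontend (GstElement *pipeline, GstElement *streammux,
                         const gchar *h264_file)
    {
      GstElement *source, *parser, *decoder;
      GstPad *srcpad, *sinkpad;

      source  = gst_element_factory_make ("filesrc",       "file-source");
      parser  = gst_element_factory_make ("h264parse",     "h264-parser");
      decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
      if (!source || !parser || !decoder)
        return FALSE;

      g_object_set (G_OBJECT (source), "location", h264_file, NULL);

      gst_bin_add_many (GST_BIN (pipeline), source, parser, decoder, NULL);
      gst_element_link_many (source, parser, decoder, NULL);

      /* Link the decoder's src pad to a request sink pad on the muxer. */
      sinkpad = gst_element_get_request_pad (streammux, "sink_0");
      srcpad  = gst_element_get_static_pad (decoder, "src");
      gst_pad_link (srcpad, sinkpad);
      gst_object_unref (srcpad);
      gst_object_unref (sinkpad);

      return TRUE;
    }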

Thank you very much @bcao! I followed your advice and modified `deepstream-test1` to run the segmentation model with an h264 input stream, and it worked!

If anyone is interested in making this work, I uploaded a repository on my GitHub at this link: GitHub - fredlsousa/deepstream-test1-segmentation: Modified deepstream-test1 sample app to accept segmentation models and output the masks.
There’s also a CMake file to compile this sample, for those who like CMake better than Makefiles.
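
Roughly, the core change drops into deepstream-test1’s main() after nvstreammux is created: nvinfer is pointed at a segmentation config and the OSD stage is replaced by nvsegvisual so the masks get rendered, something like the sketch below. The "seg_config.txt" path and the 384x384 dimensions are placeholders for my model; check element and property names against your DeepStream version.

    /* Sketch only: back half of the pipeline after nvstreammux, with the OSD
     * stage of deepstream-test1 swapped for nvsegvisual so the segmentation
     * masks are rendered. "seg_config.txt" and 384x384 are placeholders. */
    GstElement *pgie, *segvisual, *sink;
    #ifdef PLATFORM_TEGRA
    GstElement *transform;
    #endif

    pgie      = gst_element_factory_make ("nvinfer",       "primary-nvinference-engine");
    segvisual = gst_element_factory_make ("nvsegvisual",   "seg-visual");
    sink      = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

    g_object_set (G_OBJECT (pgie), "config-file-path", "seg_config.txt", NULL);
    g_object_set (G_OBJECT (segvisual), "batch-size", 1,
        "width", 384, "height", 384, NULL);

    #ifdef PLATFORM_TEGRA
    /* Jetson needs an EGL transform in front of nveglglessink. */
    transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
    gst_bin_add_many (GST_BIN (pipeline), pgie, segvisual, transform, sink, NULL);
    gst_element_link_many (streammux, pgie, segvisual, transform, sink, NULL);
    #else
    gst_bin_add_many (GST_BIN (pipeline), pgie, segvisual, sink, NULL);
    gst_element_link_many (streammux, pgie, segvisual, sink, NULL);
    #endif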

Cheers!

Great work.

Hi fredericolms,

Currently, I also develop my model in PyTorch and convert it to a TensorRT engine for DeepStream. May I ask whether I should flatten the probability map in the final output? If the tensor needs to be flattened, which shape is suitable for it?

My forward function is below; it flattens all dimensions except the batch dimension:

    def forward(self, image_nchw):
        # Preprocess/normalize the NCHW input batch
        image_batch = self.segmentation_module.preprocess(image_nchw)
        # Run the segmentation model to get the per-class probability map
        prob_map = self.segmentation_module.segmentation_model(image_batch)
        # Flatten C, H, W into one dimension, keeping the batch dimension
        prob_map = torch.flatten(prob_map, start_dim=1, end_dim=3)
        return prob_map

Thank you so much for your suggestions.
