• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 4.0
• JetPack Version (valid for Jetson only) - 4.3
• TensorRT Version - 6.0.1.10
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) - Question
Hello, I have a UNet model trained for lane detection and exported to .uff. I currently use OpenCV for inference, and I would like to start using DeepStream. What do I need to do to run this segmentation model in DeepStream? Are there any courses or tutorials on running a custom segmentation model in DeepStream?
I previously tried to train a lane detection model using the MaskRCNN implementation in TLT, but I couldn't get good performance, so I trained a UNet externally instead.
Any help would be appreciated. Thanks in advance!
I was able to run my model in DeepStream using the MaskRCNN config sample. My problem now is that the inference results are not shown in the visualization. Can anyone help me with this? The model I'm using has infer-dims=3;384;384.
Hello @bcao, thank you for your reply!
I did use deepstream-segmentation-test, and it worked! But from what I could see, this sample only takes JPEG images as input and outputs the masks. I would like to use my model in a sample that receives a video stream and overlays the detections on top of that video, like the MaskRCNN config sample in samples/configs/tlt_pretrained_models/deepstream_app_source1_mrcnn.txt, and then run deepstream-app -c with that config file. Is it possible to do so?
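For reference, the [property] section I used with deepstream-segmentation-test looks roughly like this (the paths, blob names, and class count below are placeholders, not my actual values; the layout follows the shipped dstest_segmentation_config_semantic.txt):

```
[property]
gpu-id=0
net-scale-factor=0.007843
model-color-format=0
# placeholders: point these at the actual .uff and the engine it generates
uff-file=unet_lane.uff
model-engine-file=unet_lane.uff_b1_gpu0_fp32.engine
infer-dims=3;384;384
uff-input-order=0
uff-input-blob-name=data
output-blob-names=final_conv/BiasAdd
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
num-detected-classes=2
gie-unique-id=1
# 2 = semantic segmentation
network-type=2
segmentation-threshold=0.0
```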
Hello everyone. I've been trying nonstop to make this work. I changed deepstream-segmentation-test to accept h264 streams as input, and it worked on my dGPU desktop. But I can't get it to run on the Jetson Nano… I think it may be because of this line of code:
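(the decoder setup from deepstream_segmentation_app.c, reproduced from the stock sample approximately; it forces the nvv4l2decoder into MJPEG mode:)

```c
g_object_set (G_OBJECT (decoder), "mjpeg", 1, NULL);
```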
I've been trying to find which property name to use in place of "mjpeg" to decode h264, but so far I haven't managed to make it work.
When I run the sample on the Jetson Nano, the stream seems to get stuck on the first frame, like the image below.
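For comparison, deepstream-test1 doesn't set any decoder property for H.264 at all; it simply puts an h264parse element in front of nvv4l2decoder. A sketch of the relevant lines from that sample:

```c
GstElement *source, *h264parser, *decoder;

/* read the raw H.264 elementary stream from file */
source = gst_element_factory_make ("filesrc", "file-source");
/* parse NAL units so the decoder receives proper H.264 input */
h264parser = gst_element_factory_make ("h264parse", "h264-parser");
/* hardware decoder; no "mjpeg"-style property needed for H.264 */
decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
```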
Thank you very much @bcao! I followed your advice and modified `deepstream-test1` to run the segmentation model with an h264 input stream, and it worked!
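For anyone who finds this later, the gist of my change to deepstream-test1 was pointing nvinfer at the segmentation config and swapping the OSD stage for nvsegvisual (a simplified sketch; the config file name and the 384x384 size are from my model, yours will differ):

```c
/* point pgie at the segmentation config instead of the detector config
 * ("my_segmentation_config.txt" is a placeholder name) */
g_object_set (G_OBJECT (pgie),
    "config-file-path", "my_segmentation_config.txt", NULL);

/* replace the nvdsosd stage with the segmentation visualizer */
GstElement *seg_visual =
    gst_element_factory_make ("nvsegvisual", "nvsegvisual");
g_object_set (G_OBJECT (seg_visual),
    "batch-size", 1, "width", 384, "height", 384, NULL);

/* pipeline order becomes:
 * filesrc -> h264parse -> nvv4l2decoder -> nvstreammux ->
 * nvinfer (pgie) -> nvsegvisual -> (nvegltransform on Jetson) -> sink */
```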
Currently, I also develop my model in PyTorch and convert it to a TensorRT engine for DeepStream. Could I ask: should I flatten the probability map in the final output? If the tensor needs to be flattened, which shape is suitable?
I have the forward function below, which flattens everything except the batch dimension:
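Roughly like this (simplified; `backbone` stands in for my actual UNet):

```python
import torch
import torch.nn as nn

class SegWrapper(nn.Module):
    """Wraps the segmentation backbone so the exported TensorRT engine
    emits a flattened probability map."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.backbone(x)           # (N, C, H, W) probability map
        return out.flatten(start_dim=1)  # keep batch dim -> (N, C*H*W)
```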