Live streaming with inference using DeepStream SDK via RTMP

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Nano
• DeepStream Version - 6.0
• JetPack Version (valid for Jetson only) - 4.6.2

I’m using a USB camera to stream live object detection inference via rtmpsink from a DeepStream pipeline. Below is the pipeline used:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! "video/x-raw,format=(string)UYVY" ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1" ! nvvideoconvert ! "video/x-raw,format=(string)NV12,width=1280,height=720,framerate=30/1" ! nvvideoconvert ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest1_pgie_config.txt batch-size=1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! audioresample ! "audio/x-raw,rate=48000" ! queue ! voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! rtmpsink location=rtmp://a.rtmp.youtube.com/live2/xxxxxxxx sync=false -v

It streams only the raw video, NOT the inference output with bounding boxes and class labels.

When I try to display locally, it works fine with the command below:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! "video/x-raw,format=(string)UYVY" ! nvvidconv ! "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1" ! nvvidconv ! "video/x-raw,format=(string)NV12,width=1280,height=720,framerate=30/1" ! nvvidconv ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest1_pgie_config.txt ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! audioresample ! "audio/x-raw,rate=48000" ! queue ! voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! nveglglesink -e

How can I stream the object detection output via rtmpsink?

Any help would be appreciated.

Thanks.

Your pipeline looks fine. It should work.

I don’t think nveglglessink can be connected to the flvmux src pad. This pipeline is wrong.
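
You can see why from the pad templates with gst-inspect-1.0, for example: flvmux’s src pad produces video/x-flv, while nveglglessink only accepts raw video on its sink pad.

gst-inspect-1.0 flvmux
gst-inspect-1.0 nveglglessink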


gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! "video/x-raw,format=(string)UYVY" ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1" ! nvvideoconvert ! "video/x-raw,format=(string)NV12,width=1280,height=720,framerate=30/1" ! nvvideoconvert ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest1_pgie_config.txt batch-size=1 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! audioresample ! "audio/x-raw,rate=48000" ! queue ! voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! rtmpsink location=rtmp://a.rtmp.youtube.com/live2/xxxxxxxx sync=false -v

With the above command it is not streaming the inference output; it is streaming only the raw camera input.

Any suggestions?

Please dump the pipeline graph to check: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
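
For reference, one common way to dump the graph from gst-launch is to set GST_DEBUG_DUMP_DOT_DIR before running the pipeline and then convert the generated .dot file with Graphviz (the exact file name prefix varies; the wildcard below is just an example):

export GST_DEBUG_DUMP_DOT_DIR=/tmp
gst-launch-1.0 ...   (run the same pipeline as above)
dot -Tpng /tmp/*PLAYING*.dot -o pipeline.png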

I have attached the output image of the pipeline graph here.
Let me know if anything is missing.

Please use nvdsosd to draw the bounding boxes and text: Gst-nvdsosd — DeepStream 6.3 Release documentation

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! "video/x-raw,format=(string)UYVY" ! nvvideoconvert ! "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1" ! nvvideoconvert ! "video/x-raw,format=(string)NV12,width=1280,height=720,framerate=30/1" ! nvvideoconvert ! mux.sink_0 nvstreammux live-source=1 name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=dstest1_pgie_config.txt batch-size=1 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=muxer alsasrc device=hw:1 ! audioresample ! "audio/x-raw,rate=48000" ! queue ! voaacenc bitrate=32000 ! aacparse ! queue ! muxer. muxer. ! rtmpsink location=rtmp://a.rtmp.youtube.com/live2/xxxxxxxx sync=false -v

Thanks for your suggestion.
It works now. However, there is a latency of 30 seconds to 1 minute in the live stream.

Any recommendations to reduce the latency in this pipeline?

Please create a new topic for the latency issue.
