Deepstream 4.0 with GStreamer for RTP Video

I have cameras which produce an RTP stream (UDP, H.264 encoded) and want to use DeepStream to run a YOLOv3 model on these camera videos. I have a GStreamer command such as

gst-launch-1.0 -vvv udpsrc port=XXXX caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)101" ! rtph264depay ! video/x-h264, stream-format=byte-stream ! avdec_h264 ! xvimagesink sync=false

I noticed that there are examples for RTSP that work by changing the config file. My question: is it possible to use DeepStream for RTP video? If so, how should I change the config file?

Hi,
Please upgrade to DS4.0.1

We have a sample in

deepstream_sdk_v4.0.1_jetson\sources\objectDetector_Yolo

A reference post on configuring an RTSP source:
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5366807/#5366807

Hi, thank you for your answer, but I think the reference post is about an RTSP stream, not RTP, so my configuration file would be different. I am not sure how to open an RTP stream by making changes to the configuration file. I think I cannot write rtsp://… because it is an RTP stream. By the way, I ran the demo in the DS4.0.1 Yolo folder with a USB camera and it worked fine. The next step for me is to open it with the RTP stream, if that is possible. I forgot to mention that I am receiving the stream via Ethernet.

I think you’re pretty close, actually. If your example works, keep using udpsrc instead of rtspsrc. The rest can probably stay the same as what DeepStream does (except for things like RTCP callbacks, etc.).

Thanks for the fast reply, but I could not work out where to use udpsrc. I used the examples in DeepStream 4.0.1 and ran the demos for CameraV4L2, which is the webcam (USB camera). Now, instead of this, I need to run YOLO on the RTP stream, which I can open with GStreamer using the command in #1. Which config files should I change? I am a little bit confused.

Try a pipeline like this:

gst-launch-1.0 udpsrc ... ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux width=1280 height=720 batch-size=1 batched-push-timeout=4000000 name=mux \
  ! nvinfer  config-file-path=.../objectDetector_Yolo/config_infer_primary_yoloV3.txt \
  ! nvtracker tracker-width=600 tracker-height=300 ll-lib-file=.../lib/deepstream/libnvds_mot_klt.so \
  ! fakesink

That’s the start of your pipeline, anyhow. Try it (after replacing the …'s) and see if it at least runs.

After that, you can add more nvinfer elements (known as SGIEs) to do classification, etc.

BTW, I’m not sure if the h264parse is necessary.
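To make the SGIE idea above concrete, here is a hedged sketch of how a secondary classifier could be chained after the tracker. The secondary config file name (config_infer_secondary_example.txt) is a placeholder, not one of the shipped samples; nvinfer with process-mode=2 runs on the objects found by the primary detector:

gst-launch-1.0 udpsrc ... ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux width=1280 height=720 batch-size=1 batched-push-timeout=4000000 name=mux \
  ! nvinfer config-file-path=.../objectDetector_Yolo/config_infer_primary_yoloV3.txt unique-id=1 \
  ! nvtracker tracker-width=600 tracker-height=300 ll-lib-file=.../lib/deepstream/libnvds_mot_klt.so \
  ! nvinfer config-file-path=config_infer_secondary_example.txt unique-id=2 process-mode=2 \
  ! fakesink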

Hi,
The implementation of deepstream-app is based on uridecodebin, which works with udpsrc. You can configure

uri=udp://xx.xx.xx.xx

Reference:
http://gstreamer-devel.966125.n4.nabble.com/uridecodebin-Query-td3319279.html

For example for uri="udp://239.0.0.1:2301", uridecodebin will select udpsrc.
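In the deepstream-app config file, that URI goes into a [sourceN] group. A minimal sketch, following the layout of the shipped sample configs (the multicast address and port are placeholders; type=2 is the URI source type handled by uridecodebin):

[source0]
enable=1
# type 2 = URI source (uridecodebin picks udpsrc for udp:// URIs)
type=2
uri=udp://239.0.0.1:2301
num-sources=1
gpu-id=0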

Thanks for your replies. After the comments I used the pipeline in #6 and then hit an error caused by missing caps,
so I used the following pipeline (this may be helpful for people who have the same problem):

gst-launch-1.0 udpsrc port=6670 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)101" ! rtph264depay ! queue2 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=640 height=480 ! nvinfer config-file-path=config_infer_primary_yoloV2.txt batch-size=1 unique-id=1 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=false

I used queue2 to get rid of the packet drops; without the queue there were sudden losses in the video. But I observed that the delay increased and I cannot figure out why. It was about 240 ms without nvv4l2decoder; when I used nvv4l2decoder and nveglglessink it jumped to about 2 s. Do you have any idea how to solve this delay problem? There are also still sudden drops in the video, though fewer, so I would be glad for any further recommendations.

There’s another element you could experiment with - rtpjitterbuffer. It has a low likelihood of solving your problem, but you could at least try different values of its ‘mode’ property. It also has some other properties you might want to tune. Put it before rtph264depay.

mode                : Control the buffering algorithm in use
                        flags: readable, writable
                        Enum "RTPJitterBufferMode" Default: 1, "slave"
                           (0): none             - Only use RTP timestamps
                           (1): slave            - Slave receiver to sender clock
                           (2): buffer           - Do low/high watermark buffering
                           (4): synced           - Synchronized sender and receiver clocks

You could also try setting the ‘sync’ and ‘max-lateness’ properties of nveglglessink to false and 0, respectively.
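Putting both suggestions together, the receive end of the pipeline from #8 might look like this. The latency value of 100 ms is only a starting point to tune, and the "..." stands for the rest of the pipeline (nvstreammux, nvinfer, etc.) unchanged:

gst-launch-1.0 udpsrc port=6670 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)101" ! rtpjitterbuffer mode=0 latency=100 ! rtph264depay ! queue2 ! h264parse ! nvv4l2decoder ! ... ! nveglglessink sync=false max-lateness=0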