UDPSrc causes sinks to drop frames

Hardware Platform: Jetson Xavier AGX & Xavier NX
DeepStream Version: 6.0.1
JetPack Version: 4.6.3
TensorRT Version: 8.2.1.9-1+cuda10.2
Issue Type" question

This is a follow up question to: NVEGLGLESSINK is slow on xavier nx

The network that I’m running (a custom version of yolov5) doesn’t run in real time (I don’t need it to), in a pipeline based on the one in the deepstream-test3 app. When I play a video file (filesrc), the NX is consistently able to process ~7-9 fps and the AGX ~18-19 fps, which is sufficient for what I need (both measured with interval = 0). When I switch from a filesrc to a udpsrc, processing becomes very inconsistent: the NX varies from 7-9 fps down to 2-3 fps and back, and the AGX from 18-19 fps down to 4-5 fps, even when interval is set to something that should let inference keep up.
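For reference, nvinfer’s interval property is the number of batches skipped between inferences, so inference runs on every (interval+1)-th frame. A small sketch of the arithmetic for picking an interval that should let inference keep up with a given stream rate (pure Python, no DeepStream dependency; the fps numbers in the assertions below are just the figures measured above):

```python
import math

def min_interval(stream_fps: float, infer_fps: float) -> int:
    """Smallest nvinfer 'interval' such that inference, running on every
    (interval+1)-th frame, can keep pace with the incoming stream.
    We need stream_fps / (interval + 1) <= infer_fps."""
    if infer_fps >= stream_fps:
        return 0  # inference already keeps up on every frame
    return math.ceil(stream_fps / infer_fps) - 1

# Example: 25 fps MPEG-TS stream, NX measured at ~8 fps of inference
print(min_interval(25, 8))   # -> 3, i.e. infer every 4th frame (6.25 fps)
```

In theory an interval chosen this way should leave the pipeline enough headroom, which is why the inconsistent throughput on udpsrc is surprising.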

My ultimate goal is to get the pipeline running so that inference is performed as fast as the available processing will allow (not necessarily on every frame — whenever one frame is done, start inference on the most recent frame received). I would like the rate at which frames are processed to be relatively consistent, and ideally the display to be smooth, displaying every frame regardless of whether inference was performed on it, without dropping buffers.
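One standard GStreamer pattern for the “always infer on the newest frame” behaviour is a leaky queue just upstream of the inference element. This is only a hedged sketch — the surrounding element names and caps are assumptions and would need to match the actual deepstream-test3-style pipeline:

```shell
# Hypothetical fragment (not a complete pipeline): a queue that holds at
# most one buffer and leaks old ones downstream means nvinfer always
# picks up the most recently decoded frame instead of a backlog.
... ! nvstreammux ... \
    ! queue leaky=downstream max-size-buffers=1 \
    ! nvinfer config-file-path=... ! ...
```

Note this trades smoothness for freshness on the inference branch; the display branch would still need its own path (e.g. a tee before the queue) if every frame should be shown.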

I’ve discovered if I set interval to a really high number (like 100,000) so as to effectively disable inference, I get the behavior that for a filesrc, the display sink works as it should, but for a udpsrc, the display sink drops frames and the display is choppy, so I think there’s something in my pipeline in general that isn’t set up correctly for a udp source. Both the video source and udp source are 1280 x 720, 25 fps mpeg ts.

Edit: I just tested the deepstream-test3 app with its default configuration and I get the exact same behavior: a filesrc plays properly and the display sink doesn’t drop any frames (I think it’s using the default resnet10, which appears to be able to keep up with real time); however, on a udpsrc I get the same warning that I get in my pipeline, this time from element nvvideo-renderer: “A lot of buffers are being dropped”, and the video display is very choppy as above.

Edit: if I use the command line gst with playbin, i.e. gst-launch-1.0 playbin uri=udp://... then the udp source works as expected, although obviously no inference takes place. No frames dropped on the display sink.

I’ve uploaded the pipelines for all three cases — my pipeline from filesrc, from udpsrc, and from playbin. I think I have two issues to solve: 1) get udpsrc working in the pipeline correctly, so that even when inference is “turned off” by setting the interval to a really high number, frames on the display sink aren’t dropped, and 2) get the pipeline to cleanly handle inference that is slower than the frame rate.



Are you sure the Ethernet connection can deliver the source stream quickly enough? As is well known, the UDP protocol does not guarantee delivery quality; the protocol stack has to do extra work waiting for and reordering packets. You should test the udpsrc receiving performance first.

RFC 768 - User Datagram Protocol (ietf.org)
RFC 8085 - UDP Usage Guidelines (ietf.org)
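One way to measure the receive/decode path in isolation, with no inference or rendering involved, is a gst-launch pipeline ending in fpsdisplaysink over a fakesink. This is a hedged sketch — it assumes an H.264 stream inside MPEG-TS on port 5000, so the port and parser would need adjusting to the actual stream:

```shell
# Measure udpsrc receive + hardware decode rate on its own
# (assumptions: H.264 in MPEG-TS, port 5000; with -v, fpsdisplaysink
# prints the measured fps to the console)
gst-launch-1.0 -v udpsrc port=5000 \
  ! tsdemux ! h264parse ! nvv4l2decoder \
  ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false
```

If this reports a steady 25 fps, the network and decoder are not the bottleneck and the problem lies further downstream.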

I don’t think it’s the UDP stream itself, since if I do gst-launch-1.0 playbin uri=... the pipeline that GStreamer automagically assembles plays the udp source just fine. I did notice it’s using nvoverlaysink instead of the nveglgles sink; I’m not sure what the difference between the two sinks is.

I think I have more or less fixed this by adding a videorate element just after the decoder and turning the framerate down. If I had to guess at what was happening: setting the interval didn’t extend the time an individual frame was permitted to take. The sink was still expecting frames at the full framerate, and on frames where inference ran, that budget was still being exceeded, so data wasn’t delivered to downstream elements on time. In other words, a frame that was meant to be skipped still arrived late because the frame actually being processed took too long; simply telling nvinfer to skip frames didn’t let it send out frames at the correct framerate. I assume a filesrc was able to absorb this just fine, but a udpsrc, being a “live” source, couldn’t. Inserting the videorate element and setting the framerate there effectively increased the time each frame was allowed to spend in processing, keeping the stream happier. Just my guess.
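For anyone hitting the same problem, a hedged sketch of the fix described above — the 10/1 framerate is an assumption chosen to sit below the measured inference throughput, and the caps string would need to match the actual memory type in the pipeline:

```shell
# Fragment only: cap the rate right after the decoder so every element
# downstream negotiates a framerate that inference can actually sustain.
# drop-only=true makes videorate discard frames rather than duplicate them.
... ! nvv4l2decoder \
    ! videorate drop-only=true \
    ! 'video/x-raw(memory:NVMM),framerate=10/1' \
    ! nvstreammux ...
```

Because videorate only adjusts buffer timestamps and drops/duplicates buffers (it never touches the pixel data), it should be cheap even on NVMM buffers.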
