Processing video in "real-time"

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version not relevant
• NVIDIA GPU Driver Version (valid for GPU only) not relevant
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi there,

when using video input as a sink, it is by default processed "as fast as possible", using all GPU resources available at the moment.

Is there a way to instead configure the video sink to run in a sort of real-time manner, without any external software? That is, a 20-minute video would be processed one frame at a time, as if it were camera input.

This is necessary to test custom logic in specific scenarios.

Thanks a lot


Set the property sync=true on your sink element. This makes the sink element wait for each buffer until its presentation timestamp (PTS) matches the pipeline's internal clock.
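As a minimal sketch of the effect (the sample clip path is the one shipped with standard DeepStream installs; with sync=false the same pipeline instead runs as fast as decoding allows):

```shell
# fakesink with sync=true throttles the pipeline to the buffers' PTS,
# so a 30 fps file is consumed at ~30 fps instead of at decode speed.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
  fakesink sync=true
```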

@dddd713 yes, please set sync=true in the sink element.

Thank you both @miguel.taylor and @fanzh for the quick reply. I just noticed that I mixed up concepts in my original question: I'm talking about the input video but calling it a sink. Instead I should have written that I need the video SOURCE to be processed in "real time", one frame at a time, and not as fast as possible.

I set sync=1 on each sink my app uses, but this does not affect the processing speed; it is roughly the same as with sync=0. There doesn't seem to be a sync option for sources, as far as I can see. Would you be able to advise on this one?
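One GStreamer-level workaround worth trying (my assumption, not something confirmed in this thread): the identity element has its own sync property, so placing identity sync=true right after the decoder makes it wait on the pipeline clock before pushing each buffer downstream, throttling a file source to its nominal frame rate regardless of the sinks' settings:

```shell
# Sketch: pace buffers mid-stream with identity sync=true,
# so downstream elements receive frames at the file's native rate.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
  identity sync=true ! \
  fakesink sync=false
```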

What is the whole media pipeline? Which sample are you testing? Can you share some test data? Can you reproduce this issue with a DeepStream sample?

I reproduced it using the test5 sample. The sample video is recorded with ffmpeg at 15 fps, but DS reports processing at close to 40 fps.
My source and sink are configured like so, and I disabled the other sinks for simplicity.



You can add interpipes to sync the stream right after the source. For example:

370 fps pipeline (no throttling):

gst-launch-1.0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
perf ! \
fakesink

30 fps pipeline (throttled through interpipe):

gst-launch-1.0 \
uridecodebin3 uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! \
interpipesink sync=true name=src \
interpipesrc is-live=true allow-renegotiation=true stream-sync=2 listen-to=src ! \
perf ! \
fakesink

Thanks for sharing your solution, I truly appreciate it.
However, for the time being I was hoping there is a way to do this by tweaking the config file only, to make sure that exactly the same logic is tested.

@fanzh Did you have a chance to reproduce my issue using test5 sample?

Yes, I can reproduce it. It is because udpsink's sync property is hardcoded to 0. deepstream-test5 is open source; you can modify create_udpsink_bin in /opt/nvidia/deepstream/deepstream/sources/apps/apps-common/src/deepstream_sink_bin.c, changing

g_object_set (G_OBJECT (bin->sink), "host", "", "port",
    config->udp_port, "async", FALSE, "sync", 0, NULL);

to

g_object_set (G_OBJECT (bin->sink), "host", "", "port",
    config->udp_port, "async", FALSE, "sync", 1, NULL);

then rebuild test5.
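The rebuild step, sketched for a standard DeepStream install layout (CUDA_VER below is an assumption; set it to the CUDA version actually installed on your machine):

```shell
# Rebuild deepstream-test5 after editing deepstream_sink_bin.c;
# the sample Makefiles require CUDA_VER to be exported.
export CUDA_VER=12.1   # assumption: adjust to your installed CUDA version
cd /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test5
make clean && make
```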

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!

Hi there, just wanted to add that this did not help me solve my case, and to raise DeepStream developers' awareness that this is not the first setting that appears configurable but is in reality hardcoded ;)

For anyone else wanting to achieve something similar, consider streaming your video over RTSP so that DeepStream consumes it as a live source.
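One minimal way to do that (an assumption on my part: test-launch is the example server from the gst-rtsp-server project, built separately, and is not part of DeepStream):

```shell
# Serve the sample file as a live RTSP stream on rtsp://127.0.0.1:8554/test,
# then point the DeepStream source at that URI so it behaves like a camera.
./test-launch "( filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 \
    ! qtdemux ! h264parse ! rtph264pay name=pay0 pt=96 )"
```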

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.