Problem handling stream EOS with nvof plugin

• Hardware Platform (Jetson / GPU)
GPU (A5000)
• DeepStream Version
DeepStream 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
TensorRT 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only)
Version 525.85
• Issue Type (questions, new requirements, bugs)
Question / Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing)
When running multiple video files as input, the nvof plugin crashes as soon as any input video reaches EOS. The same behavior is observed when the input sources are set to loop indefinitely.
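
For reference, a stripped-down sketch along these lines reproduces the setup. This is a plain C/GStreamer sketch, not our actual application (which follows the parallel inference layout); the file URIs, resolution, and batch size are placeholders:

#include <gst/gst.h>

/* Minimal two-source pipeline with nvof, built with gst_parse_launch.
 * URIs, resolution, and batch size are placeholders. */
int
main (int argc, char *argv[])
{
  GError *err = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  pipeline = gst_parse_launch (
      "nvstreammux name=mux batch-size=2 width=1280 height=720 "
      "batched-push-timeout=40000 ! nvof ! nvofvisual ! fakesink "
      "uridecodebin uri=file:///path/to/short.mp4 ! mux.sink_0 "
      "uridecodebin uri=file:///path/to/long.mp4 ! mux.sink_1",
      &err);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", err ? err->message : "unknown");
    g_clear_error (&err);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Run until EOS or error; on DeepStream 6.2 the failure shows up as soon
   * as the shorter file reaches EOS. */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}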

We encountered this issue when we attempted to run multiple videos of differing lengths with nvof enabled. The plugin fails as soon as any input video reaches EOS, with the following error output:

** INFO: <bus_callback:226>: Pipeline running

gst_ds_optical_flow_set_caps: Creating OpticalFlow Context for Source = 0
libnvds_opticalflow_dgpu: Setting GPU_ID = 0
gst_ds_optical_flow_set_caps: Creating OpticalFlow Context for Source = 1
libnvds_opticalflow_dgpu: Setting GPU_ID = 0
gst_ds_optical_flow_set_caps: Creating OpticalFlow Context for Source = 2
libnvds_opticalflow_dgpu: Setting GPU_ID = 0
gst_ds_optical_flow_set_caps: Creating OpticalFlow Context for Source = 3
libnvds_opticalflow_dgpu: Setting GPU_ID = 0
Processing frame number = 0     batch_id = 0     src_id = 1
...
Processing frame number = 250   batch_id = 0     src_id = 1
**PERF:  4.90 (5.03)    4.90 (5.03)     4.90 (5.05)     4.90 (5.05)
**PERF:  5.80 (5.03)    5.80 (5.03)     5.80 (5.05)     5.80 (5.05)
**PERF:  4.59 (5.03)    4.59 (5.03)     4.59 (5.05)     4.59 (5.05)
**PERF:  6.08 (5.05)    6.08 (5.05)     6.08 (5.07)     6.08 (5.07)
**PERF:  5.00 (5.05)    5.00 (5.05)     5.00 (5.07)     5.00 (5.07)
**PERF:  4.87 (5.05)    4.87 (5.05)     4.87 (5.07)     4.87 (5.07)
**PERF:  4.43 (5.05)    4.43 (5.05)     4.43 (5.06)     4.43 (5.06)
**PERF:  6.25 (5.05)    6.25 (5.05)     6.25 (5.06)     6.25 (5.06)
**PERF:  4.66 (5.04)    4.66 (5.04)     4.66 (5.06)     4.66 (5.06)
nvstreammux: Successfully handled EOS for source_id=3
nvstreammux: Successfully handled EOS for source_id=3
nvstreammux: Successfully handled EOS for source_id=3
Processing frame number = 300   batch_id = 0     src_id = 2

**PERF:  FPS 0 (Avg)    FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)
**PERF:  5.29 (5.06)    5.29 (5.06)     5.29 (5.08)     5.29 (5.08)
Processing frame number = 300   batch_id = 0     src_id = 1
Error: A batch of multiple frames received from the same source.
Set sync-inputs property of streammux to TRUE.
0:02:10.064676997 34233 0x55c9aa914120 ERROR            nvdsmetamux gstnvdsmetamux.cpp:952:gst_nvdsmetamux_aggregate:<infer_bin_muxer> push error
ERROR from opticalflow_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:deepstream-skt-pipeline/GstBin:parallel_infer_bin/GstBin:opticalflow_bin/GstQueue:opticalflow_queue:
streaming stopped, reason error (-5)
Quitting

Source 3 is the shortest input video, containing ~300 frames in total. As soon as it reaches EOS, nvof fails with ERROR from opticalflow_queue: Internal data stream error., and the whole pipeline stops as a result.
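
(The error text itself suggests enabling sync-inputs on the streammux; for completeness, that amounts to something like the following sketch, where "streammux" is a placeholder for the pipeline's nvstreammux element:)

#include <gst/gst.h>

/* Sketch only: set the sync-inputs property that the error message above
 * refers to. "streammux" is whatever nvstreammux instance the pipeline
 * uses; this is typically set while constructing the pipeline. */
static void
enable_sync_inputs (GstElement *streammux)
{
  g_object_set (G_OBJECT (streammux), "sync-inputs", TRUE, NULL);
}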

Our question is whether there is a way to handle these cases gracefully, so that the pipeline keeps running until all videos have reached EOS, or indefinitely when the input videos are set to loop. A rough sketch of the looping pattern we have in mind is below.
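
(By looping we mean the usual GStreamer pattern of dropping EOS at each source bin's src pad and seeking that source back to 0. This is a sketch with placeholder names, not necessarily how DeepStream implements its file-loop option:)

#include <gst/gst.h>

/* Rewind one source bin back to the start; called from the main loop. */
static gboolean
do_rewind (gpointer user_data)
{
  GstElement *source_bin = GST_ELEMENT (user_data);

  gst_element_seek (source_bin, 1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
      GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
  return G_SOURCE_REMOVE;
}

/* Pad probe on the pad that feeds nvstreammux: swallow EOS so downstream
 * elements (nvstreammux, nvof) never see the stream end, then schedule a
 * rewind of that source. */
static GstPadProbeReturn
drop_eos_and_loop (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  (void) pad;

  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) == GST_EVENT_EOS) {
    /* Defer the flushing seek to the main loop; seeking from the streaming
     * thread that delivered the EOS can deadlock. */
    g_idle_add (do_rewind, user_data);
    return GST_PAD_PROBE_DROP;
  }
  return GST_PAD_PROBE_OK;
}

/* Attached per source, e.g.:
 *   GstPad *srcpad = gst_element_get_static_pad (source_bin, "src");
 *   gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
 *       drop_eos_and_loop, source_bin, NULL);
 *   gst_object_unref (srcpad);
 */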

A follow-up question is whether this issue could also appear with network input streams. For example, if a stream is delayed due to network jitter, should we expect nvof to behave the same way?

Thank you

Can you update to DeepStream 6.3?
I’ve run the deepstream-nvof-app sample with two videos of different durations, and it works well. The NVIDIA-AI-IOT/deepstream_parallel_inference_app project on GitHub (a project demonstrating how to use nvmetamux to run multiple models in parallel) also works well with videos of different durations.

Thank you! We updated to DeepStream 6.3 and the EOS issue is no longer there.
