• Hardware Platform: Tesla T4
• DeepStream Version: 5.0
• TensorRT Version: N/A
• NVIDIA GPU Driver Version: 440.95.01
• Issue Type: Question
Hi, I hope there is someone experienced with GStreamer and DeepStream here, as I have a really strange issue with my pipeline.
BACKGROUND
The idea is the following - I would like to create a pipeline that takes an RTSP h264/h265 stream as input, decodes it, crops several ROIs out of the stream, filters the crops on the basis of some knowledge, and saves JPEGs only for those crops which pass the filtering stage.
The pipeline looks like this (the RTSP source was cut off for schema clarity, and there is one additional tee element used when the pipeline is in a different mode, but the graph explains the essential idea).
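In textual form, one crop branch looks roughly like this (a minimal sketch of the shape only; the URI and element choices are placeholders, not the real pipeline):

/* Shape of one crop branch (placeholder names/URI, not the real pipeline):
 * RTSP source -> decode -> ROI crop -> pre_filter_queue_crop_x
 *   -> [filtering_probe on its src pad] -> JPEG encode -> file sink */
GError *error = NULL;
GstElement *sketch = gst_parse_launch(
    "uridecodebin uri=rtsp://example/stream ! nvvideoconvert "
    "! video/x-raw,format=I420 "
    "! queue name=pre_filter_queue_crop_0 ! jpegenc "
    "! multifilesink location=crop_%05d.jpg",
    &error);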
A probe is attached to the src pad of pre_filter_queue_crop_x, and the queues are initialised with the following props:
g_object_set(
    G_OBJECT(crop_context->pre_filter_queue),
    "min-threshold-time", (guint64)(runtime_args->seconds_before_filter_query*GST_SECOND),
    "max-size-time", (guint64)0,  /* guint64 property: must be cast in varargs */
    "max-size-buffers", 0,        /* 0 = unlimited */
    "max-size-bytes", 0,
    NULL
);
I have tried removing the min-threshold-time requirement and my issue still occurs.
The filtering probe is defined as follows:
GstPadProbeReturn filtering_probe(
    GstPad *pad,
    GstPadProbeInfo *info,
    gpointer u_data
){
    /* Timestamp captured on entry, used for the timing log below. */
    GstClockTime start = gst_clock_get_time(gst_system_clock_obtain());
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER(info);
    CropMetadata *crop_metadata = (CropMetadata*)u_data;
    gboolean success = crop_metadata->till_id == 71;
    if(!success){
        g_print(
            "Dropping buffer cam: %u, till: %u, roi: %s, PTS: %lu.\n",
            crop_metadata->camera_id,
            crop_metadata->till_id,
            crop_metadata->roi_type,
            GST_BUFFER_PTS(buffer)
        );
        return GST_PAD_PROBE_DROP;
    }
    g_print(
        "Match on C side, START %ld, END %ld, DIFF: %ld\n",
        start, gst_clock_get_time(gst_system_clock_obtain()),
        gst_clock_get_time(gst_system_clock_obtain()) - start
    );
    return GST_PAD_PROBE_OK;
}
ISSUE DESCRIPTION
When I run the pipeline I can see lots of logs like:
Dropping buffer cam: 27, till: 73, roi: scanning_area, PTS: 261047775.
Dropping buffer cam: 27, till: 72, roi: scanning_area, PTS: 261047775.
Dropping buffer cam: 27, till: 73, roi: scanning_area, PTS: 340978116.
Dropping buffer cam: 27, till: 72, roi: scanning_area, PTS: 340978116.
Dropping buffer cam: 29, till: 77, roi: scanning_area, PTS: 341025061.
Dropping buffer cam: 29, till: 75, roi: scanning_area, PTS: 341025061.
Dropping buffer cam: 29, till: 76, roi: scanning_area, PTS: 341025061.
Dropping buffer cam: 27, till: 73, roi: scanning_area, PTS: 420842949.
Dropping buffer cam: 27, till: 72, roi: scanning_area, PTS: 420842949.
Dropping buffer cam: 29, till: 77, roi: scanning_area, PTS: 420781728.
Dropping buffer cam: 29, till: 75, roi: scanning_area, PTS: 420781728.
Dropping buffer cam: 29, till: 76, roi: scanning_area, PTS: 420781728.
Dropping buffer cam: 27, till: 73, roi: scanning_area, PTS: 500623889.
basically from all tills apart from 71, which should be preserved. But I don’t see logs like:
Match on C side, START 1604911038974641642, END 1604911038974642149, DIFF: 377
while the pipeline is up and running, and no JPEG images are saved. What I can see is the growing size of the pre_filter_queue_crop_x associated with till 71. Another probe is attached to its src pad:
GstPadProbeReturn mark_frame_enqueued(
    GstPad *pad,
    GstPadProbeInfo *info,
    gpointer u_data
){
    GstElement *queue = (GstElement *)gst_pad_get_parent(pad);
    guint current_buffers, max_buffers, min_buffers, current_bytes, max_bytes;
    guint64 current_times, max_times;
    g_object_get(G_OBJECT(queue),
        "current-level-buffers", &current_buffers,
        "current-level-bytes", &current_bytes,
        "current-level-time", &current_times,
        "max-size-time", &max_times,
        "max-size-buffers", &max_buffers,
        "max-size-bytes", &max_bytes,
        "min-threshold-buffers", &min_buffers,
        NULL
    );
    g_print("[%p] mark_frame_enqueued. Current buffers: %u/%u/%u. Current bytes: %u/%u Current time: %lu/%lu.\n",
        queue,
        min_buffers, current_buffers, max_buffers,
        current_bytes, max_bytes,
        current_times, max_times
    );
    gst_object_unref(queue);  /* gst_pad_get_parent() returns a new ref */
    return GST_PAD_PROBE_OK;
}
The filtered logs look like this:
[...]
[0x164e630] mark_frame_enqueued. Current buffers: 0/62/0. Current bytes: 7929552/0 Current time: 5174248535/0.
[0x164e630] mark_frame_enqueued. Current buffers: 0/63/0. Current bytes: 8057448/0 Current time: 5253252980/0.
[0x164e630] mark_frame_enqueued. Current buffers: 0/64/0. Current bytes: 8185344/0 Current time: 5333257390/0.
[0x164e630] mark_frame_enqueued. Current buffers: 0/65/0. Current bytes: 8313240/0 Current time: 5413261764/0.
[0x164e630] mark_frame_enqueued. Current buffers: 0/66/0. Current bytes: 8313240/0 Current time: 5413261764/0.
[...]
and the value of current_buffers keeps growing.
The moment I see data saved to disk and logs of this type:
Match on C side, START 1604911038974641642, END 1604911038974642149, DIFF: 377
is when I press CTRL+C and send EOS to the pipeline. Then crops start to be saved and I see a bunch of logs in the console.
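For reference, EOS on CTRL+C is wired up roughly like this (a minimal sketch; the global pipeline variable is an assumption made for brevity):

#include <gst/gst.h>
#include <glib-unix.h>

static GstElement *pipeline;  /* assumed global for brevity */

/* On SIGINT (CTRL+C), send EOS so the pipeline drains: only then do
 * the queued crops reach the JPEG branch and get written to disk. */
static gboolean on_sigint(gpointer user_data){
    gst_element_send_event(pipeline, gst_event_new_eos());
    return G_SOURCE_REMOVE;  /* fire once */
}

/* in main(): g_unix_signal_add(SIGINT, on_sigint, NULL); */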
CONSIDERATIONS
I’ve executed a bunch of tests to investigate the issue:
- When I always return GST_PAD_PROBE_DROP or GST_PAD_PROBE_OK from filtering_probe(), everything works as it should; in the latter case crops are saved on-line and the queue size stays almost constant.
- When I randomly return GST_PAD_PROBE_DROP / GST_PAD_PROBE_OK for all queries, it also works, but with a noticeable delay (when I did it in a controllable manner, I confirmed that branches of the pipeline are initially stuck until all of them receive a GST_PAD_PROBE_OK response from filtering_probe()). The randomized probe is sketched after this list.
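The randomized variant from the second test was essentially this (a sketch; only the return logic differs from filtering_probe above):

/* Sketch of the randomized test: drop or pass each buffer with 50%
 * probability instead of checking till_id. */
GstPadProbeReturn random_filtering_probe(
    GstPad *pad,
    GstPadProbeInfo *info,
    gpointer u_data
){
    return g_random_boolean() ? GST_PAD_PROBE_DROP : GST_PAD_PROBE_OK;
}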
MY ASSUMPTIONS
- As I read that the queue GStreamer element spawns an execution thread, I assume that for some reason the threads that receive GST_PAD_PROBE_OK are not scheduled for execution (confirmed by dumping thread IDs during execution, see the sketch below). I don’t know why; maybe my understanding of GStreamer is not good enough.
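The thread-ID dump mentioned above was done roughly like this (a sketch; the exact log format is an assumption):

/* Sketch: log which streaming thread runs the probe, to check whether
 * the per-branch queue threads are actually being scheduled. */
static GstPadProbeReturn thread_dump_probe(
    GstPad *pad,
    GstPadProbeInfo *info,
    gpointer u_data
){
    g_print("Probe on %s:%s runs in thread %p\n",
        GST_DEBUG_PAD_NAME(pad), (void*)g_thread_self());
    return GST_PAD_PROBE_OK;
}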
WHAT I AM ASKING FOR
I would really appreciate any suggestions from people who have experience with GStreamer / DeepStream. Maybe I am just making some stupid mistake I cannot spot. Thanks in advance to anyone who can help :)