Deepstream-parallel-infer-app: when one of the streams is disconnected, the stored video and image are inconsistent

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
jetson agx orin
• DeepStream Version
6.2
• JetPack Version (valid for Jetson only)
5.1
• TensorRT Version
8.5
• NVIDIA GPU Driver Version (valid for GPU only)
11.4
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
This problem occurs during event detection. My picture and video saving functions are triggered by events: when an event occurs, I need to save both a picture and a video. The event detection runs in a probe attached to a src pad. Below are my pipeline configuration and the code for saving images and videos.

application:
enable-perf-measurement: 1
perf-measurement-interval-sec: 5
##gie-kitti-output-dir=streamscl

tiled-display:
enable: 0
rows: 2
columns: 2
width: 1920
height: 1080
gpu-id: 0
nvbuf-memory-type: 0

source:
#csv-file-path: sources_4.csv
csv-file-path: sources_4_different_source_rtsp.csv

sink0:
enable: 0
#Type - 1=FakeSink 2=EglSink 3=File 7=nv3dsink (Jetson only)
type: 2
source-id: 0
gpu-id: 0
nvbuf-memory-type: 0

osd:
enable: 0
gpu-id: 0
border-width: 1
text-size: 15
#value changed
text-color: 1;1;1;1
text-bg-color: 0.3;0.3;0.3;1
font: Serif
show-clock: 0
clock-x-offset: 800
clock-y-offset: 820
clock-text-size: 12
clock-color: 1;0;0;0
nvbuf-memory-type: 0

streammux:
gpu-id: 0
##Boolean property to inform muxer that sources are live
live-source: 1
buffer-pool-size: 5
batch-size: 16
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed

#batched-push-timeout: 400000
batched-push-timeout: 120000

## Set muxer output width and height
width: 1920
height: 1080
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding: 0
nvbuf-memory-type: 0

primary-gie0:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 2
interval: 5
gie-unique-id: 1
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_renjifei_yoloV5.txt

branch0:

## pgie's id

pgie-id: 1
src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15

tracker0:
enable: 1
cfg-file-path: tracker0.yml

primary-gie1:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 2
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_road_covered_resnet18.txt

branch1:
pgie-id: 2

## select sources by sourceid

src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15

tracker1:
enable: 0
cfg-file-path: tracker0.yml

primary-gie2:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 3
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_visibility_resnet18.txt

branch2:

## pgie's id

pgie-id: 3

## select sources by sourceid

src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15

tracker2:
enable: 0
cfg-file-path: tracker0.yml

primary-gie3:
enable: 1
#(0): nvinfer; (1): nvinferserver
plugin-type: 0
gpu-id: 0
#input-tensor-meta: 1
batch-size: 1
interval: 10
gie-unique-id: 4
nvbuf-memory-type: 0
config-file: /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream_parallel_inference_app-master/tritonclient/sample/configs/models_configs/config_infer_primary_fire_yoloV5.txt

branch3:

## pgie's id

pgie-id: 4

## select sources by sourceid

src-ids: 0;1;2;3;4;5;6;7;8;9;10;11;12;13;14;15

tracker3:
enable: 0
cfg-file-path: tracker0.yml

meta-mux:
enable: 1
config-file: …/metamux/config_metamux0.txt

tests:
file-loop: 0


guint gpu_id = 0;
NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context (gpu_id);
if (!obj_ctx_handle) {
    g_print ("Unable to create context\n");
    // return -1;
}

GstMapInfo inmap = GST_MAP_INFO_INIT;
if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
    GST_ERROR ("input buffer mapinfo failed");
    // return GST_PAD_PROBE_DROP;
}
/* Get the batched surface backing this buffer */
NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
gst_buffer_unmap (buf, &inmap);

NvDsObjEncUsrArgs frameData = { 0 };
/* Preset */
frameData.isFrame = TRUE;
/* To be set by user */
frameData.saveImg = TRUE;
frameData.attachUsrMeta = FALSE;
/* Use the destination buffer size to avoid truncation */
g_snprintf (frameData.fileNameImg, sizeof (frameData.fileNameImg), "%s",
    image_path.c_str ());
/* Quality */
frameData.quality = 80;

/* Main function call */
nvds_obj_enc_process (obj_ctx_handle, &frameData, ip_surf, NULL, frame_meta);
nvds_obj_enc_finish (obj_ctx_handle);
nvds_obj_enc_destroy_context (obj_ctx_handle);
// save videos
NvDsSrcBin *src_bin = &appCtx->pipeline.multi_src_bin.sub_bins[source_id];
/* Bail out if the record context is missing or the source is reconfiguring */
if (!src_bin->recordCtx || src_bin->reconfiguring)
    return;
NvDsSRContext *ctx = (NvDsSRContext *) src_bin->recordCtx;
if (!ctx->recordOn) {
    NvDsSRStart (ctx, &sessId, startTime, duration, NULL);
}

Has anyone encountered similar problems?
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Could you just attach the config file instead of the content? And what do you mean by event detection? Could you attach the inconsistent video or image?

source4_1080p_dec_classifier_parallel_infer.txt (6.7 KB)
sources_4_different_source_rtsp.txt (1.4 KB)
Thanks for the reply!
The source4_1080p_dec_classifier_parallel_infer.txt file originally had a .yml suffix; because .yml files cannot be uploaded, I renamed it to .txt. Likewise, sources_4_different_source_rtsp.txt is a .csv file renamed to .txt.
The event detection mentioned here is area intrusion: when a pedestrian or vehicle enters the area, an alarm is issued, and a picture and a video are stored at the same time.

[image]
[image]
The top is the saved image; the bottom is a frame from the saved video.

OK. What do you mean by inconsistent? Does that mean it saved the wrong sources of the video, like it should save the sample from source1, but it saved the sample from source2?

Could you attach all the code you modified?

The saved video is correct, but the image saved from the buffer is wrong: it comes from another stream.
I commented out the 10-second interval video recording at lines 1070 to 1078 of deepstream_source_bin.c. The code that starts saving video in my application is shown below.

int source_id = frame_meta->source_id;
NvDsSrcBin *src_bin = &appCtx->pipeline.multi_src_bin.sub_bins[source_id];
/* Bail out if the record context is missing or the source is reconfiguring */
if (!src_bin->recordCtx || src_bin->reconfiguring)
    return;
NvDsSRContext *ctx = (NvDsSRContext *) src_bin->recordCtx;

NvDsSRSessionId sessId = 0;
guint startTime = 7;
guint duration = 8;
if (src_bin->config->smart_rec_duration >= 0)
    duration = src_bin->config->smart_rec_duration;
if (src_bin->config->smart_rec_start_time >= 0)
    startTime = src_bin->config->smart_rec_start_time;
if (!ctx->recordOn) {
    NvDsSRStart (ctx, &sessId, startTime, duration, NULL);
}
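Because the probe runs on every frame, an event that lasts many frames re-enters this code repeatedly; the ctx->recordOn check guards against overlapping sessions, but a per-source cooldown can also keep NvDsSRStart from being re-triggered the instant a session ends. A minimal, self-contained sketch (RecordGate and its fields are hypothetical helper names, not DeepStream API):

```c
#include <stdbool.h>

/* Hypothetical per-source gate: suppress re-triggering a recording
 * for `cooldown_ns` nanoseconds after the last start. */
typedef struct {
    long long last_start_ns; /* 0 means "never started" */
    long long cooldown_ns;
} RecordGate;

/* Returns true (and arms the gate) if a new recording may start now. */
static bool gate_should_start (RecordGate *g, long long now_ns)
{
    if (g->last_start_ns != 0 && now_ns - g->last_start_ns < g->cooldown_ns)
        return false;
    g->last_start_ns = now_ns;
    return true;
}
```

With a 10 s cooldown, an event at t=1 s starts a recording, a repeat trigger at t=5 s is suppressed, and a trigger at t=12 s starts a new one. In the real app one RecordGate per source_id would sit next to the NvDsSRContext, and the `now_ns` value could come from the buffer PTS or a monotonic clock.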

Image saving function

void getImg (std::string image_path, NvDsFrameMeta *frame_meta, GstBuffer *buf) {
    guint gpu_id = 0;
    NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context (gpu_id);
    if (!obj_ctx_handle) {
        g_print ("Unable to create context\n");
        return;
    }
    GstMapInfo inmap = GST_MAP_INFO_INIT;
    if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
        GST_ERROR ("input buffer mapinfo failed");
        nvds_obj_enc_destroy_context (obj_ctx_handle);
        return;
    }
    NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
    gst_buffer_unmap (buf, &inmap);

    NvDsObjEncUsrArgs frameData = { 0 };
    /* Preset */
    frameData.isFrame = TRUE;
    /* To be set by user */
    frameData.saveImg = TRUE;
    frameData.attachUsrMeta = FALSE;
    /* Use the destination buffer size to avoid truncation */
    g_snprintf (frameData.fileNameImg, sizeof (frameData.fileNameImg), "%s",
        image_path.c_str ());
    /* Quality */
    frameData.quality = 80;
    /* Main function call */
    nvds_obj_enc_process (obj_ctx_handle, &frameData, ip_surf, NULL, frame_meta);
    nvds_obj_enc_finish (obj_ctx_handle);
    /* Destroy context for object encoding */
    nvds_obj_enc_destroy_context (obj_ctx_handle);
}

Where did you add the code? Since the images in the NvBufSurface are batched, you need to find the corresponding frame in the batch to save.

static GstPadProbeReturn
analytics_done_buf_prob_renjifei(GstPad *pad, GstPadProbeInfo *info,
                          gpointer u_data){
  // NvDsInstanceBin *bin = (NvDsInstanceBin *) u_data;
  // guint index = bin->index;
  AppCtx *appCtx = (AppCtx *) u_data;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
    return GST_PAD_PROBE_OK;
  }
  
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next){
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
    event_detector->renjifei_detect(frame_meta, buf, appCtx);
  }
  return GST_PAD_PROBE_OK;                     
}

The probe function above is added on the src pad of the tracker element. The getImg function is called from renjifei_detect, which decides whether to save the image based on whether an event occurred in the current frame.
Is the image taken from the NvBufSurface based on the batch_id of frame_meta? The batch_id I obtain in getImg is wrong, so the saved image is wrong. How can I get the real batch_id of the frame_meta?

As long as the streams run continuously, the saved images and videos are consistent; the inconsistency appears only after a stream is interrupted.

OK. What exactly does the “interrupted” refer to here?
Also could you try to set the macro below to see if it works? Thanks

export USE_NEW_NVSTREAMMUX=yes

The "interrupted" here refers to the video stream disconnecting, mainly to simulate real-world stream-interruption scenarios.

OK. So have you tried setting the environment variable export USE_NEW_NVSTREAMMUX=yes ?


I set the environment variable export USE_NEW_NVSTREAMMUX=yes, but it causes the program to block, even though the stream from source 0 is connected normally.

It's weird. I ran our demo code, and it works well.

If you want to tell which video source it comes from, you need to use source_id instead of batch_id.
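To illustrate the point about source_id versus batch_id, here is a minimal, self-contained sketch using a stripped-down FrameMeta stand-in (not the real NvDsFrameMeta). If the muxer repacks the surviving frames into lower batch slots after a source drops, which is what the symptom here suggests, a batch_id cached while the batch was full goes stale, while a lookup keyed on source_id still finds the right slot:

```c
#include <stddef.h>

/* Hypothetical stand-in for NvDsFrameMeta: only the two fields we need. */
typedef struct {
    unsigned source_id; /* which camera produced the frame  */
    unsigned batch_id;  /* slot in the batched NvBufSurface */
} FrameMeta;

/* Scan the frame metas of one batch and return the batch slot holding the
 * frame from `source_id`, or -1 if that source is absent from this batch. */
static int slot_for_source (const FrameMeta *batch, size_t n,
                            unsigned source_id)
{
    for (size_t i = 0; i < n; i++)
        if (batch[i].source_id == source_id)
            return (int) batch[i].batch_id;
    return -1;
}
```

With a full batch of sources {0,1,2,3} in slots 0 to 3, slot_for_source(..., 2) returns 2; after source 1 disconnects and the batch repacks to {0,2,3}, the same lookup returns 1, whereas a cached batch_id of 2 would now point at source 3's frame. In the real probe, the equivalent scan is simply iterating batch_meta->frame_meta_list and matching frame_meta->source_id.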

I want to extract a frame of the video stream from the buffer and save it as a JPG image.

Is it normal to save videos and pictures after disconnecting and reconnecting?

I have not tried your code. Could you attach a whole diff of your code so that I can try on my side?

deepstream-parallel-infer-app-test.zip (64.0 MB)
I wrote the picture and video saving functions together. You can see the postprocess_done_buf_pro_renjifei probe function in deepstream_parallel_infer_app.cpp.

So you are saving the video and image in the analytics_done_buf_prob_renjifei probe function? I didn’t find where you linked the recordbin plugin. Could you describe the feature you want to implement? We wouldn’t normally use NvDsSRStart in a probe function.

Yes, saving pictures and videos is implemented in the analytics_done_buf_prob_renjifei probe function. In the attached code I simply omitted the event-judgment step and instead trigger saving every 10,000 frames.
smart-record is configured in the source group, but the originally configured parameters can only store video periodically (every 10 seconds) and cannot be triggered manually. I moved the video saving into the probe function to get conditional recording (for example, the video is saved when someone enters the surveillance area).
You said NvDsSRStart is usually not used in a probe function. Is there a recommended way to save videos and images when an alarm fires?

Are there any implicit problems when storing videos and images in the probe function?
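One implicit problem is that pad probes run on the streaming thread: anything slow there (file I/O, JPEG encoding, record start/stop) stalls the pipeline and can interact badly with reconnection handling. A common pattern is to only enqueue a small event record inside the probe and do the heavy work on a worker thread. A minimal, self-contained pthread sketch of that hand-off (all names hypothetical, no DeepStream calls):

```c
#include <pthread.h>
#include <string.h>

/* Hypothetical event record; the real app would carry source_id,
 * timestamps, and target file names for NvDsSRStart / the JPEG encoder. */
typedef struct { int source_id; } Event;

#define QUEUE_CAP 64

typedef struct {
    Event items[QUEUE_CAP];
    int head, tail, count;
    int done;      /* no more events will be pushed        */
    int handled;   /* events consumed by the worker thread */
    pthread_mutex_t lock;
    pthread_cond_t cond;
} EventQueue;

static void queue_init (EventQueue *q)
{
    memset (q, 0, sizeof *q);
    pthread_mutex_init (&q->lock, NULL);
    pthread_cond_init (&q->cond, NULL);
}

/* Called from the probe: O(1), never blocks on I/O; drops when full. */
static void queue_push (EventQueue *q, Event e)
{
    pthread_mutex_lock (&q->lock);
    if (q->count < QUEUE_CAP) {
        q->items[q->tail] = e;
        q->tail = (q->tail + 1) % QUEUE_CAP;
        q->count++;
        pthread_cond_signal (&q->cond);
    }
    pthread_mutex_unlock (&q->lock);
}

/* Called from the worker: blocks until an event arrives; returns 0
 * only when the queue is drained and shut down. */
static int queue_pop (EventQueue *q, Event *e)
{
    pthread_mutex_lock (&q->lock);
    while (q->count == 0 && !q->done)
        pthread_cond_wait (&q->cond, &q->lock);
    int ok = q->count > 0;
    if (ok) {
        *e = q->items[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
    }
    pthread_mutex_unlock (&q->lock);
    return ok;
}

static void *worker (void *arg)
{
    EventQueue *q = arg;
    Event e;
    while (queue_pop (q, &e)) {
        /* The real app would start smart record / encode the JPEG here. */
        q->handled++;
    }
    return NULL;
}

/* Demo driver: the "probe" fires five events, the worker consumes them. */
static int run_demo (void)
{
    EventQueue q;
    pthread_t t;
    queue_init (&q);
    pthread_create (&t, NULL, worker, &q);
    for (int i = 0; i < 5; i++)
        queue_push (&q, (Event) { .source_id = i });
    pthread_mutex_lock (&q.lock);
    q.done = 1;
    pthread_cond_broadcast (&q.cond);
    pthread_mutex_unlock (&q.lock);
    pthread_join (t, NULL);
    return q.handled;
}
```

Note one caveat for images: since nvds_obj_enc_process needs the batched NvBufSurface, which is only valid while the buffer is alive, the encode call itself has to stay in the probe (or the surface must be copied); only the slow parts, such as recording control and disk I/O, are good candidates for deferral.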