Query on NvDsBatchMeta when using the deepstream-app

I have a pipeline which works fairly well with multiple URIs used as input to my application and the nvstreammux batch-size equal to the number of URIs.

Currently, I’ve been trying to integrate the deepstream-app with my application. The goal is to use the already-built pipeline from the sample deepstream-app and run my custom business logic as a probe in the analytics callback. The issue I’m facing is that even when multiple sources are enabled in the config file and the streammux batch size equals the number of sources, batch_meta->num_frames_in_batch is always 1 (I’d expect this to be the number of sources). What am I doing wrong here? If this is expected behaviour, could you explain why, and is there any way of accessing each frame in the batch meta?

This is the sample config file:

################################################################################
# Copyright (c) 2018-2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=2
columns=4
width=720
height=1280
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
timeout=30000000
tcp-timeout=30000000
select-rtp-protocol=4
num-sources=1
buffer-duration=5
drop-frame-interval=0
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test1.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source2]
enable=2
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test2.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
buffer-duration=5
drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test3.mp4
nvbuf-memory-type=0
latency=1000
drop-on-latency=true
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
gpu-id=0
drop-frame-interval=2
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source4]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test4.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
gpu-id=0
drop-frame-interval=2
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source5]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test4.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
gpu-id=0
drop-frame-interval=2
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source6]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///test6.mp4
nvbuf-memory-type=0
latency=1000
rtsp-reconnect-interval-sec=30
num-sources=1
select-rtp-protocol=4
gpu-id=0
drop-frame-interval=2
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
qos=0
#source-id=0
gpu-id=0
nvbuf-memory-type=3

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=3

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
buffer-pool-size=4
batch-size=7
num-surfaces-per-frame=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=33000
## Set muxer output width and height
width=720
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=3
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1
max-latency=200000000
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
config-file=config_infer_primary_yoloV4.txt

[tracker]
enable=1
# For the NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=720
tracker-height=720
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvDCF_perf.yml
# ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[nvds-analytics]
enable=1
config-file=config_nvdsanalytics.txt

[tests]
file-loop=0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 470.63
• Issue Type( questions, new requirements, bugs) QUERY
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Could you attach the code you modified?

I printed out the num_frames per batch in the sample application; even when multiple sources are enabled, it prints out 1 frame per batch. This is the sample code I’m testing.

static void
all_bbox_generated (AppCtx * appCtx, GstBuffer * buf,
    NvDsBatchMeta * batch_meta, guint index)
{
  guint num_male = 0;
  guint num_female = 0;
  guint num_objects[128];

  memset (num_objects, 0, sizeof (num_objects));
  g_print("Num frames in batch: %u\n", batch_meta->num_frames_in_batch);
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj != NULL;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      if (obj->unique_component_id ==
          (gint) appCtx->config.primary_gie_config.unique_id) {
        if (obj->class_id >= 0 && obj->class_id < 128) {
          num_objects[obj->class_id]++;
        }
        if (appCtx->person_class_id > -1
            && obj->class_id == appCtx->person_class_id) {
          if (strstr (obj->text_params.display_text, "Man")) {
            str_replace (obj->text_params.display_text, "Man", "");
            str_replace (obj->text_params.display_text, "Person", "Man");
            num_male++;
          } else if (strstr (obj->text_params.display_text, "Woman")) {
            str_replace (obj->text_params.display_text, "Woman", "");
            str_replace (obj->text_params.display_text, "Person", "Woman");
            num_female++;
          }
        }
      }
    }
  }
}

My application probe is something like this:

void MetaData::nvdsanalyticsSrcPadBufferProbe(AppCtx * appCtx, GstBuffer * buf,
    NvDsBatchMeta * batch_meta,  guint index) {

        NvDsObjectMeta *obj_meta = nullptr;
        LOG(INFO) << "Number of frames in batch:" << batch_meta->num_frames_in_batch;
}

It is called in main.cpp as follows:

for (i = 0; i < num_instances; i++) {
    appCtx[i] = g_malloc0 (sizeof (AppCtx));
    appCtx[i]->person_class_id = -1;
    appCtx[i]->car_class_id = -1;
    appCtx[i]->index = i;
    appCtx[i]->active_source_index = -1;
    if (show_bbox_text) {
      appCtx[i]->show_bbox_text = TRUE;
    }

    if (input_uris && input_uris[i]) {
      appCtx[i]->config.multi_source_config[0].uri =
          g_strdup_printf ("%s", input_uris[i]);
      g_free (input_uris[i]);
    }

    if (!parse_config_file (&appCtx[i]->config, cfg_files[i])) {
      NVGSTDS_ERR_MSG_V ("Failed to parse config file '%s'", cfg_files[i]);
      appCtx[i]->return_value = -1;
      goto done;
    }
  }

  for (i = 0; i < num_instances; i++) {
    if (!create_pipeline (appCtx[i], NULL,
           metadata::MetaData::nvdsanalyticsSrcPadBufferProbe, perf_cb, overlay_graphics)) {
      NVGSTDS_ERR_MSG_V ("Failed to create pipeline");
      return_value = -1;
      goto done;
    }
  }

Here num_instances is always 1, since there is just one config file. Shouldn’t it be the number of sources?

if (cfg_files) {
    num_instances = g_strv_length (cfg_files);
  }

No. num_instances is the number of instances (config files) running. The all_bbox_generated API is called from a probe on the OSD plugin, and by that point the frames have been merged into one by the tiler plugin.

Gotcha, my bad! I read the function definition of create_pipeline and plugged the probe in at the analytics element. Works as expected. Thank you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.