Segregate detections per ROI (multiple ROIs), run an individual tracker per ROI, and indicate the ROI in the Kafka payload

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only): 12.0
• Issue Type( questions, new requirements, bugs): bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

How to reproduce the issue:

I have modified the deepstream-test5 app from the samples. The pipeline does the following: it reads from a video file, runs preprocessing (nvdspreprocess) with two ROIs, then runs object detection and object tracking; one sink is configured as a message broker to send metadata to Kafka, and another sink saves the annotated video. I am pasting the relevant configs below.

The issue is that all detections are fed collectively to the tracker (effectively a single tracker instance). I want two tracker instances, i.e. a separate tracker for each ROI, and I also want the ROI information in the final Kafka message, i.e. which tracked box came from which ROI.
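
For the second part (knowing which ROI a tracked box belongs to), the only idea I have so far is a pad probe on the tracker src pad that compares each object's bbox against the two ROI rectangles and stores the ROI index on the object meta. This is just a rough sketch, not working code: roi_tag_probe is my own name, and the ROI rectangles are hard-coded from roi-params-src-0 in my config_preprocess.txt (which I understand as left;top;width;height per ROI).

/* Rough sketch only: buffer probe intended for the nvtracker src pad.
 * It tags every tracked object with the index of the ROI its bbox centre
 * falls in, by writing the index into obj->misc_obj_info[0]
 * (-1 if the centre is outside both ROIs).
 * ROI rectangles are hard-coded from roi-params-src-0 in my
 * config_preprocess.txt (left;top;width;height per ROI, as I understand it). */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

typedef struct { gdouble left, top, width, height; } RoiRect;

/* ROI 0 = 0;0;600;1000   ROI 1 = 1000;0;500;1000 */
static const RoiRect rois[] = { {0, 0, 600, 1000}, {1000, 0, 500, 1000} };

static GstPadProbeReturn
roi_tag_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      /* use the bbox centre to decide which ROI the object belongs to */
      gdouble cx = obj->rect_params.left + obj->rect_params.width / 2.0;
      gdouble cy = obj->rect_params.top + obj->rect_params.height / 2.0;
      obj->misc_obj_info[0] = -1;
      for (guint i = 0; i < G_N_ELEMENTS (rois); i++) {
        if (cx >= rois[i].left && cx < rois[i].left + rois[i].width &&
            cy >= rois[i].top && cy < rois[i].top + rois[i].height) {
          obj->misc_obj_info[0] = i;   /* stash the ROI index on the object */
          break;
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

I would attach it with gst_pad_add_probe() (GST_PAD_PROBE_TYPE_BUFFER) on the tracker src pad. On the message side I am not sure what the correct hook is: write a custom payload (PAYLOAD_CUSTOM) that reads misc_obj_info[0], or switch msg-conv-msg2p-new-api back to 0 and copy the index into NvDsEventMsgMeta::otherAttrs inside generate_event_msg_meta() in deepstream_test5_app_main.c. Please correct me if there is a cleaner way, and especially how to run a separate tracker instance per ROI.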

I am pasting all the configs here:

main config:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
################################################################################

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 1
  #gie-kitti-output-dir: streamscl

source:
  csv-file-path: sources_5.csv

sink0:
  enable: 1
  #Type - 1=FakeSink 2=EglSink 3=File
  type: 1
  sync: 0
  source-id: 0
  gpu-id: 0
  nvbuf-memory-type: 0

sink1:
  enable: 1
  #Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvdrmvideosink 6=MsgConvBroker
  type: 6
  # msg2p-newapi: 1 
  # frame-interval: 60
  msg-conv-config: dstest5_msgconv_sample_config.yml
  msg-conv-msg2p-new-api: 1
  #(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
  #(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
  #(256): PAYLOAD_RESERVED - Reserved type
  #(257): PAYLOAD_CUSTOM   - Custom schema payload
  msg-conv-payload-type: 1
  msg-conv-frame-interval: 1
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_kafka_proto.so
  #Provide your msg-broker-conn-str here
  msg-broker-conn-str: localhost;9092;quickstart-events
  # topic: <topic>
  #Optional:
  #msg-broker-config: ../../deepstream-test4/cfg_kafka.txt

sink2:
  enable: 1
  type: 3
  #1=mp4 2=mkv
  container: 1
  #1=h264 2=h265 3=mpeg4
  ## only SW mpeg4 is supported right now.
  codec: 1
  sync: 1
  bitrate: 2000000
  # output-file:  /out_videos/nvds_analytics_roi_final_output12.mp4
  output-file:  /out_videos/nvds_preprocess_multi_roi_final_output2.mp4
  source-id: 0

osd:
  enable: 1
  gpu-id: 0
  border-width: 1
  text-size: 15
  text-color: 1;1;1;1
  text-bg-color: 0.3;0.3;0.3;1
  font: Arial
  show-clock: 0
  clock-x-offset: 800
  clock-y-offset: 820
  clock-text-size: 12
  clock-color: 1;0;0;0
  nvbuf-memory-type: 0

streammux:
  gpu-id: 0
  ##Boolean property to inform muxer that sources are live
  live-source: 0
  batch-size: 1
  ##time out in usec, to wait after the first buffer is available
  ##to push the batch even if the complete batch is not formed
  batched-push-timeout: 40000
  ## Set muxer output width and height
  width: 1920
  height: 1080
  ##Enable to maintain aspect ratio wrt source, and allow black borders, works
  ##along with width, height properties
  enable-padding: 0
  nvbuf-memory-type: 0
  ## If set to TRUE, system timestamp will be attached as ntp timestamp
  ## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
  # attach-sys-ts-as-ntp: 1

pre-process:
  enable: 1
  config-file: config_preprocess.txt


primary-gie:
  enable: 1
  gpu-id: 0
  batch-size: 2
  input-tensor-meta: 1      # keep 0 for full frame inference
  ## 0=FP32, 1=INT8, 2=FP16 mode
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;1;1;1
  bbox-border-color3: 0;1;0;1
  nvbuf-memory-type: 0
  interval: 0
  config-file: pgie_peoplenet_tao_config.txt
  #infer-raw-output-dir: ../../../../../samples/primary_detector_raw_output/

tracker:
  enable: 1
  # For NvDCF and NvDeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
  tracker-width: 960
  tracker-height: 544
  ll-lib-file: /opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_nvmultiobjecttracker.so
  # ll-config-file required to set different tracker types
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_IOU.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvSORT.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
  # ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
  ll-config-file: ../../../../../samples/configs/deepstream-app/config_tracker_NvDeepSORT.yml
  gpu-id: 0
  display-tracking-id: 1


tests:
  file-loop: 0

config_preprocess.txt

# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# The values in the config file are overridden by values set through GObject
# properties.

[property]
enable=1
    # list of component gie-id for which tensor is prepared
target-unique-ids=1
    # 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
    # 0=process on objects 1=process on frames
process-on-frame=1
    #uniquely identify the metadata generated by this element
unique-id=5
    # gpu-id to be used
gpu-id=0
    # if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
    # if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
    # processing width/height at which the image is scaled
processing-width=960
processing-height=544
    # max buffer in scaling buffer pool
scaling-buf-pool-size=6
    # max buffer in tensor buffer pool
tensor-buf-pool-size=6
    # tensor shape based on network-input-order
network-input-shape= 8;3;544;960
    # 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
    # 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
    # tensor name same as input layer name
tensor-name=input_1
    # 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
    # 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
    # Scaling Interpolation method
    # 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
    # 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
    # 6=NvBufSurfTransformInter_Default
scaling-filter=0
    # custom library .so path having custom functionality
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
    # custom tensor preparation function name having predefined input/outputs
    # check the default custom library nvdspreprocess_lib for more info
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
   # Below parameters get used when using default custom library nvdspreprocess_lib
   # network scaling factor
pixel-normalization-factor=0.003921568
   # mean file path in ppm format
#mean-file=
   # array of offsets for each channel
#offsets=

[group-0]
src-ids=0
custom-input-transformation-function=CustomAsyncTransformation
process-on-roi=1
draw-roi=1
# left;top;width;height per ROI -> ROI 0: 0;0;600;1000, ROI 1: 1000;0;500;1000
roi-params-src-0=0;0;600;1000;1000;0;500;1000


pgie_peoplenet_tao_config.txt

################################################################################
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=peoplenet/resnet34_peoplenet_pruned_int8.etlt
labelfile-path=peoplenet/labels.txt
# model-engine-file=peoplenet/resnet34_peoplenet_pruned_int8.etlt_b1_gpu0_int8.engine
int8-calib-file=peoplenet/resnet34_peoplenet_pruned_int8.txt
#input-dims=3;544;960;0
infer-dims=3;544;960
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
cluster-mode=1
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1

[class-attrs-1]
pre-cluster-threshold=1.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1
[class-attrs-2]
pre-cluster-threshold=1.4
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.7
minBoxes=1

I am also attaching a snapshot of the current output.

Duplicate of: Issue using nvdsanalytics plugin in deepstream 7.0 for ROI filtering, ROI displayed and working but all boxes are displayed/emmited - #5 by fanzh
