Line Crossing fails with tailgating

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 7.1
• JetPack Version: 6.2
• TensorRT Version: 10.3 (bundled with JetPack 6.2)
• Issue Type: Question

When one person follows closely behind another (tailgating) through the crossing line, only the leading person is detected and counted; the tailgater is missed. My nvinfer (PGIE) config:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
#onnx-file=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/files/resnet34_peoplenet_int8.onnx
#model-engine-file=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/files/resnet34_peoplenet_int8.engine
#labelfile-path=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/files/labels.txt
#int8-calib-file=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/resnet34_peoplenet_int8.txt


model-engine-file=/home/orin1/Documents/deepstream_python_apps/apps/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx_b1_gpu0_fp16.engine
#onnx-file=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx
labelfile-path=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/apps/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt
#int8-calib-file=/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/resnet34_peoplenet_int8.txt
#force-implicit-batch-dim=1
infer-dims=3;544;960
tlt-model-key=tlt_encode
network-type=0
#batch-size=1
process-mode=1
model-color-format=0
maintain-aspect-ratio=0
output-tensor-meta=0
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
#output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid


# Configuring detected classes
[class-attrs-all]
pre-cluster-threshold=0.2
post-cluster-threshold=0.2
roi-top-offset=0
roi-bottom-offset=0
My tracker (NvDCF) config:

%YAML:1.0
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################


BaseConfig:
  minDetectorConfidence: 0   # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking


TargetManagement:
  #enableBboxUnClipping: 1   # In case the bbox is likely to be clipped by image border, unclip bbox
  enableBboxUnClipping: 0   # In case the bbox is likely to be clipped by image border, unclip bbox
  maxTargetsPerStream: 150  # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity


  # [Creation & Termination Policy]
  #minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minIouDiff4NewTarget: 0.75 
  minTrackerConfidence: 0.1   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 3 # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 30  # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1   # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.


TrajectoryManagement:
  useUniqueID: 0   # Use 64-bit long Unique ID when assigning tracker ID. Default is [true]


DataAssociator:
  dataAssociatorType: 0 # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0 # the type of matching algorithm among { GREEDY=0, GLOBAL=1 }
  checkClassMatch: 1  # If checked, only the same-class objects are associated with each other. Default: true


  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.0   # Min total score
  minMatchingScore4SizeSimilarity: 0.75  # Min bbox size similarity score
  minMatchingScore4Iou: 0.0       # Min IOU score
  minMatchingScore4VisualSimilarity: 0.4  # Min visual similarity score


  # [Association Metric: Weights]
  matchingScoreWeight4VisualSimilarity: 0.6  # Weight for the visual similarity (in terms of correlation response ratio)
  matchingScoreWeight4SizeSimilarity: 0.0    # Weight for the Size-similarity score
  #matchingScoreWeight4Iou: 0.4   # Weight for the IOU score
  matchingScoreWeight4Iou: 0.6   # Weight for the IOU score


StateEstimator:
  stateEstimatorType: 1  # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }


  # [Dynamics Modeling]
  processNoiseVar4Loc: 2.0    # Process noise variance for bbox center
  processNoiseVar4Size: 1.0   # Process noise variance for bbox size
  processNoiseVar4Vel: 0.1    # Process noise variance for velocity
  measurementNoiseVar4Detector: 4.0    # Measurement noise variance for detector's detection
  measurementNoiseVar4Tracker: 16.0    # Measurement noise variance for tracker's localization


VisualTracker:
  visualTrackerType: 1 # the type of visual tracker among { DUMMY=0, NvDCF=1 }


  # [NvDCF: Feature Extraction]
  useColorNames: 1     # Use ColorNames feature
  useHog: 0            # Use Histogram-of-Oriented-Gradient (HOG) feature
  featureImgSizeLevel: 2  # Size of a feature image. Valid range: {1, 2, 3, 4, 5}, from the smallest to the largest
  featureFocusOffsetFactor_y: -0.2 # The offset for the center of hanning window relative to the feature height. The center of hanning window would move by (featureFocusOffsetFactor_y*featureMatSize.height) in vertical direction


  # [NvDCF: Correlation Filter]
  filterLr: 0.075 # learning rate for DCF filter in exponential moving average. Valid Range: [0.0, 1.0]
  filterChannelWeightsLr: 0.1 # learning rate for the channel weights among feature channels. Valid Range: [0.0, 1.0]
  gaussianSigma: 0.75 # Standard deviation for Gaussian for desired response when creating DCF filter [pixels]
  
  
  
  # [State Estimator] [MovingAvgEstimator]
  trackExponentialSmoothingLr_loc: 0.9       # Learning rate for new location
  trackExponentialSmoothingLr_scale: 0.9     # Learning rate for new scale
  trackExponentialSmoothingLr_velocity: 0.9  # Learning rate for new velocity

Line crossing analytics is based on detection bboxes. Since the tailgating person is not detected, this is a model accuracy issue. You can use a new detection model with better accuracy.
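To confirm the failure happens at detection time rather than in the analytics element, you can probe the nvdsanalytics src pad and compare the per-frame detection count against the cumulative line-crossing counters. Below is a minimal sketch, assuming the Python bindings (pyds) from deepstream_python_apps; the element variable name (nvanalytics) and the probe wiring are illustrative, not from your pipeline:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def analytics_src_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Walk the batch metadata attached upstream by nvinfer/nvtracker/nvdsanalytics
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.nvds_get_user_meta_type(
                    "NVIDIA.DSANALYTICSFRAME.USER_META"):
                analytics = pyds.NvDsAnalyticsFrameMeta.cast(user_meta.user_meta_data)
                # If detections stays at 1 while two people walk through,
                # the second (tailgating) person was never detected by the PGIE.
                print(f"frame {frame_meta.frame_num}: "
                      f"detections={frame_meta.num_obj_meta}, "
                      f"line crossings={analytics.objLCCumCnt}")
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach to your nvdsanalytics element, e.g.:
# nvanalytics.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, analytics_src_pad_probe, 0)

If the detection count never reaches 2 during a tailgating event, the tracker and line-crossing logic have nothing to work with, which matches the model accuracy explanation above.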

Hey @fanzh, thanks for the quick response. I am currently using peoplenet_deployable_quantized_onnx_v2.6.2 from the NVIDIA portal. Does NVIDIA have a better model for person detection, or do I need to fine-tune the existing one?

Yes. You need to fine-tune the existing model so that it detects the tailgating person.
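For reference, the trainable PeopleNet variant can be retrained with TAO Toolkit on your own footage of tailgating scenes; once you export the retrained model, only the [property] section of the PGIE config above needs to change. A sketch, where the file paths are placeholders for your own export artifacts, not real files:

[property]
# Point nvinfer at the retrained export instead of the stock PeopleNet
onnx-file=/home/orin1/models/resnet34_peoplenet_finetuned.onnx
labelfile-path=/home/orin1/models/labels.txt
# Drop or update model-engine-file so nvinfer rebuilds the TensorRT engine
# from the new ONNX on first run instead of loading the stale engine
#model-engine-file=/home/orin1/models/resnet34_peoplenet_finetuned.onnx_b1_gpu0_fp16.engine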

Thanks a lot for your help. If I have any more issues, I'll create another thread.