DeepSORT ReID is not working in DeepStream6.2

I used the method from DeepSORT ReID is not working in DeepStream6.1 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums, but I have the same problem: the tracker IDs change frequently.
The models I used are PeopleNet and the default ResNet, and I modified the YAML file:

%YAML:1.0
################################################################################
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

BaseConfig:
  minDetectorConfidence: 0.3    # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  preserveStreamUpdateOrder: 0    # When assigning new target ids, preserve input streams' order to keep target ids in a deterministic order over multiple runs
  maxTargetsPerStream: 150    # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 20    # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 650    # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1    # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 1    # Use 64-bit long Unique ID when assigning tracker ID.
  enableReAssoc: 1
  maxTargetsPerStream: 99

DataAssociator:
  dataAssociatorType: 0    # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0    # the type of matching algorithm among { GREEDY=0, CASCADED=1 }
  checkClassMatch: 1    # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Mahalanobis distance threshold (refer to DeepSORT paper) ]
  # thresholdMahalanobis: 16.3102    # Threshold of Mahalanobis distance. A detection and a target are not matched if their distance is larger than the threshold.

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.8    # Min total score
  minMatchingScore4SizeSimilarity: 0.6    # Min bbox size similarity score
  minMatchingScore4Iou: 0    # Min IOU score
  #minMatchingScore4ReidSimilarity: 0.6182    # Min reid similarity score
  thresholdMahalanobis: 9.4877    # Max Mahalanobis distance based on Chi-square probabilities

  # [Association Metric: Weights for valid candidates]
  # matchingScoreWeight4SizeSimilarity: 0.8207    # Weight for the Size-similarity score
  # matchingScoreWeight4Iou: 0.3811    # Weight for the IOU score
  # matchingScoreWeight4ReidSimilarity: 0.7377    # Weight for the reid similarity

  # [Association Metric: Tentative detections] only uses iou similarity for tentative detections
  # tentativeDetectorConfidence: 0.2241    # If a detection's confidence is lower than this but higher than minDetectorConfidence, then it's considered as a tentative detection
  # minMatchingScore4TentativeIou: 0.2104    # Min iou threshold to match targets and tentative detection

StateEstimator:
  stateEstimatorType: 2    # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  noiseWeightVar4Loc: 0.05   # weight of process and measurement noise for bbox center; if set, location noise will be proportional to box height
  noiseWeightVar4Vel: 0.00625    # weight of process and measurement noise for velocity; if set, velocity noise will be proportional to box height
  useAspectRatio: 1    # use aspect ratio in Kalman filter's observation

ReID:
  reidType: 1    # The type of reid among { DUMMY=0, DEEP=1 }

  # [Reid Network Info]
  batchSize: 100    # Batch size of reid network
  workspaceSize: 1000    # Workspace size to be used by reid engine, in MB
  reidFeatureSize: 128    # Size of reid feature
  reidHistorySize: 100    # Max number of reid features kept for one object
  inferDims: [128, 64, 3]    # Reid network input dimension CHW or HWC based on inputOrder
  inputOrder: 1 # reid network input order among { NCHW=0, NHWC=1 }
  colorFormat: 0 # reid network input color format among {RGB=0, BGR=1 }
  networkMode: 0    # Reid network inference precision mode among {fp32=0, fp16=1, int8=2 }

  # [Input Preprocessing]
  #inputOrder: 1    # Reid network input order among { NCHW=0, NHWC=1 }. Batch will be converted to the specified order before reid input.
  #colorFormat: 0    # Reid network input color format among {RGB=0, BGR=1 }. Batch will be converted to the specified color before reid input.
  offsets: [0.0, 0.0, 0.0]    # Array of values to be subtracted from each input channel, with length equal to number of channels
  netScaleFactor: 1.0000    # Scaling factor for reid network input after subtracting offsets
  #keepAspc: 1    # Whether to keep aspect ratio when resizing input objects for reid

  # [Paths and Names]
  inputBlobName: "images"    # Reid network input layer name
  outputBlobName: "features"    # Reid network output layer name
  uffFile: "/opt/nvidia/deepstream/deepstream-6.2/samples/models/Tracker/mars-small128.uff"    # Absolute path to reid network uff model
  modelEngineFile: "/opt/nvidia/deepstream/deepstream-6.2/samples/models/Tracker/mars-small128.uff_b100_gpu0_fp32.engine"    # Engine file path
  keepAspc: 1 # whether to keep aspect ratio when resizing input objects for reid
  # calibrationTableFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/calibration.cache" # Calibration table path, only for int8
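A common pitfall when hand-editing these tracker YAML files is a missing space after a key's colon (`key:value` instead of `key: value`); YAML then treats the whole line as a plain scalar rather than a mapping entry, so the setting is silently dropped. A minimal, hypothetical lint pass (plain Python, regex-based; `lint_yaml_lines` is not part of DeepStream) can flag such lines before running the app:

```python
import re

# Flag "key:value" lines where the colon is not followed by a space --
# YAML parses "enableReAssoc:1" as a scalar, not a key/value pair.
MISSING_SPACE = re.compile(r"^\s*[A-Za-z_]\w*:\S")

def lint_yaml_lines(text):
    """Return (line_number, line) pairs that look like key:value typos."""
    issues = []
    for n, line in enumerate(text.splitlines(), start=1):
        stripped = line.split("#", 1)[0]  # ignore trailing comments
        if MISSING_SPACE.match(stripped):
            issues.append((n, line.strip()))
    return issues

sample = """TrajectoryManagement:
  useUniqueID: 1
  enableReAssoc:1
"""
print(lint_yaml_lines(sample))  # -> [(3, 'enableReAssoc:1')]
```

Section headers like `TrajectoryManagement:` are not flagged because nothing follows the colon on those lines.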

• Hardware Platform: Jetson
• DeepStream Version: 6.2

The ReID updates too frequently. I saw a comment in that thread (DeepSORT ReID is not working in DeepStream6.1 - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) saying that upgrading to DeepStream 6.2 would be better, but it is not. Please help.

Can you try this: Gst-nvtracker — DeepStream 6.2 Release documentation
Or you can share your reproduction steps with us so we can check.

I used /opt/nvidia/deepstream/deepstream-6.2/samples/configs/tao_pretrained_models/download_models.sh to download the model and config, then ran deepstream-app -c deepstream_app_source1_peoplenet_rstp.txt

deepstream_app_source1_peoplenet_rstp.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1920
height=1080
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:123456@192.168.1.71/h265/ch1/main/av_stream
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[streammux]
gpu-id=0
live-source=0
batch-size=2
batched-push-timeout=33000
## Set muxer output width and height
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
#1=h264 2=h265 3=mpeg4
codec=2
profile=0
output-file=out.mp4
enc-type=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial

[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
model-engine-file=../../models/tao_pretrained_models/peopleNet/V2.6/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
config-file=config_infer_primary_peoplenet.txt

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[tracker]
enable=1
# For NvDCF and DeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../deepstream-app/config_tracker_IOU.yml
#ll-config-file=../deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../deepstream-app/config_tracker_NvDCF_accuracy.yml
ll-config-file=../deepstream-app/config_tracker_NvDeepSORT_new.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1
[tests]
file-loop=0
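The comment in the [tracker] section above requires tracker-width and tracker-height to be multiples of 32. A tiny sanity check (a hypothetical helper, not part of deepstream-app) catches violations before the pipeline fails at runtime:

```python
def check_tracker_dims(width, height):
    """NvDCF/DeepSORT require tracker-width and tracker-height to be multiples of 32."""
    for name, value in (("tracker-width", width), ("tracker-height", height)):
        if value % 32 != 0:
            raise ValueError(
                f"{name}={value} is not a multiple of 32 "
                f"(nearest valid: {round(value / 32) * 32})"
            )

check_tracker_dims(640, 384)  # values from the config above: both valid
```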

config_infer_primary_peoplenet.txt

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=tlt_encode
tlt-encoded-model=../../models/tao_pretrained_models/peopleNet/V2.6/resnet34_peoplenet_int8.etlt
labelfile-path=./labels_peoplenet.txt
model-engine-file=../../models/tao_pretrained_models/peopleNet/V2.6/resnet34_peoplenet_int8.etlt_b1_gpu0_int8.engine
int8-calib-file=../../models/tao_pretrained_models/peopleNet/V2.6/resnet34_peoplenet_int8.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
cluster-mode=3
interval=0
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

#Use the config params below for dbscan clustering mode

#Use the config params below for NMS clustering mode
[class-attrs-all]
#topk=20
#nms-iou-threshold=0.5
#pre-cluster-threshold=0.2
pre-cluster-threshold=0.1696
nms-iou-threshold=0.5196
minBoxes=2
dbscan-min-score=1.4226
eps=0.2280
detected-min-w=20
detected-min-h=20

## Per class configurations
[class-attrs-0]
topk=20
nms-iou-threshold=0.5
pre-cluster-threshold=0.4

#[class-attrs-1]
#pre-cluster-threshold=0.05
#eps=0.7
#dbscan-min-score=0.5

config_tracker_NvDeepSORT_new.yml

BaseConfig:
  minDetectorConfidence: 0  # If the confidence of a detector bbox is lower than this, then it won't be considered for tracking

TargetManagement:
  preserveStreamUpdateOrder: 0    # When assigning new target ids, preserve input streams' order to keep target ids in a deterministic order over multiple runs
  maxTargetsPerStream: 150    # Max number of targets to track per stream. Recommended to set >10. Note: this value should account for the targets being tracked in shadow mode as well. Max value depends on the GPU memory capacity

  # [Creation & Termination Policy]
  minIouDiff4NewTarget: 0.5   # If the IOU between the newly detected object and any of the existing targets is higher than this threshold, this newly detected object will be discarded.
  minTrackerConfidence: 0.2   # If the confidence of an object tracker is lower than this on the fly, then it will be tracked in shadow mode. Valid Range: [0.0, 1.0]
  probationAge: 5    # If the target's age exceeds this, the target will be considered to be valid.
  maxShadowTrackingAge: 500    # Max length of shadow tracking. If the shadowTrackingAge exceeds this limit, the tracker will be terminated.
  earlyTerminationAge: 1    # If the shadowTrackingAge reaches this threshold while in TENTATIVE period, the target will be terminated prematurely.

TrajectoryManagement:
  useUniqueID: 10   # Use 64-bit long Unique ID when assigning tracker ID.
  #enableReAssoc:1
  #maxTargetsPerStream: 99

DataAssociator:
  dataAssociatorType: 0    # the type of data associator among { DEFAULT= 0 }
  associationMatcherType: 0    # the type of matching algorithm among { GREEDY=0, CASCADED=1 }
  checkClassMatch: 1    # If checked, only the same-class objects are associated with each other. Default: true

  # [Association Metric: Mahalanobis distance threshold (refer to DeepSORT paper) ]
  # thresholdMahalanobis: 16.3102    # Threshold of Mahalanobis distance. A detection and a target are not matched if their distance is larger than the threshold.

  # [Association Metric: Thresholds for valid candidates]
  minMatchingScore4Overall: 0.8    # Min total score
  minMatchingScore4SizeSimilarity: 0.6    # Min bbox size similarity score
  minMatchingScore4Iou: 0    # Min IOU score
  #minMatchingScore4ReidSimilarity: 0.6182    # Min reid similarity score
  thresholdMahalanobis: 9.4877    # Max Mahalanobis distance based on Chi-square probabilities

  # [Association Metric: Weights for valid candidates]
  # matchingScoreWeight4SizeSimilarity: 0.8207    # Weight for the Size-similarity score
  # matchingScoreWeight4Iou: 0.3811    # Weight for the IOU score
  # matchingScoreWeight4ReidSimilarity: 0.7377    # Weight for the reid similarity

  # [Association Metric: Tentative detections] only uses iou similarity for tentative detections
  # tentativeDetectorConfidence: 0.2241    # If a detection's confidence is lower than this but higher than minDetectorConfidence, then it's considered as a tentative detection
  # minMatchingScore4TentativeIou: 0.2104    # Min iou threshold to match targets and tentative detection

StateEstimator:
  stateEstimatorType: 2    # the type of state estimator among { DUMMY=0, SIMPLE=1, REGULAR=2 }

  # [Dynamics Modeling]
  noiseWeightVar4Loc: 0.05   # weight of process and measurement noise for bbox center; if set, location noise will be proportional to box height
  noiseWeightVar4Vel: 0.00625    # weight of process and measurement noise for velocity; if set, velocity noise will be proportional to box height
  useAspectRatio: 1    # use aspect ratio in Kalman filter's observation

ReID:
  reidType: 1    # The type of reid among { DUMMY=0, DEEP=1 }

  # [Reid Network Info]
  batchSize: 100    # Batch size of reid network
  workspaceSize: 1000    # Workspace size to be used by reid engine, in MB
  reidFeatureSize: 128    # Size of reid feature
  reidHistorySize: 100    # Max number of reid features kept for one object
  inferDims: [128, 64, 3]    # Reid network input dimension CHW or HWC based on inputOrder
  inputOrder: 1 # reid network input order among { NCHW=0, NHWC=1 }
  colorFormat: 0 # reid network input color format among {RGB=0, BGR=1 }
  networkMode: 0    # Reid network inference precision mode among {fp32=0, fp16=1, int8=2 }

  # [Input Preprocessing]
  #inputOrder: 1    # Reid network input order among { NCHW=0, NHWC=1 }. Batch will be converted to the specified order before reid input.
  #colorFormat: 0    # Reid network input color format among {RGB=0, BGR=1 }. Batch will be converted to the specified color before reid input.
  offsets: [0.0, 0.0, 0.0]    # Array of values to be subtracted from each input channel, with length equal to number of channels
  netScaleFactor: 1.0000    # Scaling factor for reid network input after subtracting offsets
  #keepAspc: 1    # Whether to keep aspect ratio when resizing input objects for reid

  # [Paths and Names]
  inputBlobName: "images"    # Reid network input layer name
  outputBlobName: "features"    # Reid network output layer name
  uffFile: "/opt/nvidia/deepstream/deepstream-6.2/samples/models/Tracker/mars-small128.uff"    # Absolute path to reid network uff model
  modelEngineFile: "/opt/nvidia/deepstream/deepstream-6.2/samples/models/Tracker/mars-small128.uff_b100_gpu0_fp32.engine"    # Engine file path
  keepAspc: 1 # whether to keep aspect ratio when resizing input objects for reid
  # calibrationTableFile: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/calibration.cache" # Calibration table path, only for int8
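For reference, the thresholdMahalanobis value 9.4877 used in both configs is the 95th-percentile of a chi-square distribution with 4 degrees of freedom, matching the 4-dimensional measurement space of the gating step in the DeepSORT paper that the config comments cite. It can be reproduced with a small self-contained sketch (pure Python; for k = 4 the chi-square CDF has the closed form 1 - e^(-x/2)(1 + x/2)):

```python
import math

def chi2_cdf_4dof(x):
    """Chi-square CDF with k = 4 degrees of freedom (closed form)."""
    return 1.0 - math.exp(-x / 2.0) * (1.0 + x / 2.0)

def chi2_ppf_4dof(p, lo=0.0, hi=100.0, iters=100):
    """Invert the CDF by bisection to obtain the p-quantile."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if chi2_cdf_4dof(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# 95% Mahalanobis gate with 4 degrees of freedom, as in the DeepSORT paper:
print(round(chi2_ppf_4dof(0.95), 4))  # -> 9.4877
```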

All models and the app are stock NVIDIA; nothing was modified.

Can you try: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
Can you dump a video and share it with us so we can reproduce the ID switch?

With config_tracker_NvDCF_accuracy.yml, no targets are detected.

Here is a video dump to reproduce the ID switch:
20230418_191138.wmv (29.1 MB)
Look at the person on the left; the ID changes frequently.

Please help. Why does this happen?

Can you try PeopleNet + NvDCF? You can see we get good results with the same video in the guide: Gst-nvtracker — DeepStream 6.2 Release documentation

PeopleNet + NvDCF does not track person targets.

PeopleNet does detect persons. Please try PeopleNet + NvDCF; you can see we get good results with the same video in the guide mentioned above.

OK, that method works. Thank you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.