DeepSORT ReID is not working properly

Hardware Platform (GPU)
DeepStream Version: 6.3
TensorRT Version: 8.x
NVIDIA GPU Driver Version (valid for GPU only): CUDA 12.0
Issue Type (questions, new requirements, bugs): questions

I’m creating this topic because the issue seems to have gone unsolved for a long time:
DeepSORT ReID is not working in DeepStream 6.1
DeepSORT ReID is not working
(Both topics were closed without a solution.)

I have followed the people-tracking guide in the DeepStream docs.
When I run People Tracking (PeopleNet + NvDeepSORT) with the video file that user3928 shared before, the pipeline can’t track people who go out of view and come back.

I’m trying hard to make NvDeepSORT work by changing parameters (maxShadowTrackingAge, earlyTerminationAge, minMatchingScore4SizeSimilarity, etc.), but it still doesn’t work.
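For reference, these knobs live in the low-level tracker YAML (config_tracker_NvDeepSORT.yml). A sketch of the relevant sections is below; the section/parameter names follow the default DeepStream tracker YAML, but the values are illustrative examples only, not a recommendation:

```yaml
# Sketch of NvDeepSORT-related knobs in config_tracker_NvDeepSORT.yml.
# Values are examples for illustration; tune against your own video.
TargetManagement:
  maxShadowTrackingAge: 90      # frames to keep a lost track alive in shadow mode
  earlyTerminationAge: 1        # how quickly tentative tracks are terminated
DataAssociator:
  minMatchingScore4SizeSimilarity: 0.6  # min bbox-size similarity to allow a match
```

Raising maxShadowTrackingAge only helps while the track is still alive; it does not enable re-identification after the track has been terminated.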

Input Video
My current result with some parameter update

The DeepSORT algorithm uses Mahalanobis distance, an appearance descriptor, and IoU, so the pipeline should be able to track a person who goes out of view and comes back.
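For context, the association cost in the original DeepSORT paper combines a gated Mahalanobis (motion) distance with an appearance (cosine) distance. A minimal sketch, assuming the two distances are already computed (NvDeepSORT itself is closed source, so this is the paper's formulation, not NVIDIA's implementation):

```python
import numpy as np

# Chi-square 95% gate for a 4-D measurement space, as in the DeepSORT paper
GATING_THRESHOLD = 9.4877

def combined_cost(maha_dist, app_dist, lam=0.0):
    """Blend motion and appearance costs, DeepSORT-style.

    maha_dist: squared Mahalanobis distance between the track's Kalman
               prediction and the detection
    app_dist:  cosine distance (1 - cosine similarity) between ReID embeddings
    lam:       weighting factor; the paper found lam = 0 (appearance only,
               motion used purely as a gate) works well with camera motion
    """
    cost = lam * maha_dist + (1.0 - lam) * app_dist
    # Gate out physically implausible matches regardless of appearance
    if maha_dist > GATING_THRESHOLD:
        cost = np.inf
    return cost
```

The gate is the key point here: once an object has been gone longer than maxShadowTrackingAge, there is no live track left to match against, so appearance similarity never gets a chance to fire.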

How can I solve it?

Can you upgrade to the latest DeepStream release, 6.4, and use config_tracker_NvDCF_accuracy.yml? Gst-nvtracker — DeepStream 6.4 documentation

I have reinstalled everything with DeepStream 6.4 and, as you suggested, tested with config_tracker_NvDCF_accuracy.yml.

The result is still the same.

Result Video

Note that my command is

deepstream-app -c deepstream_app_config.txt

I used the default “samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml”, and my other configs are below.

/opt/nvidia/deepstream/deepstream-6.4/samples/configs/deepstream-app/deepstream_app_config.txt

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file:///home/chanwoong/Resources/video/etc/reid_01.avi
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink/nv3dsink (Jetson only) 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
#iframeinterval=10
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
#sw-preset=1 #for SW enc=(0)None (1)ultrafast (2)superfast (3)veryfast (4)faster
#(5)fast (6)medium (7)slow (8)slower (9)veryslow (10)placebo
sync=0
#iframeinterval=10
bitrate=400000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
# set profile only for hw encoder, sw encoder selects profile based on sw-preset
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=1
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
buffer-pool-size=4
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
#config-file=config_infer_primary.txt
config-file=config_infer_primary_PeopleNet.txt

[tracker]
enable=1
# For the NvDCF and NvDeepSORT trackers, tracker-width and tracker-height must each be a multiple of 32
tracker-width=960
tracker-height=544
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=config_tracker_IOU.yml
# ll-config-file=config_tracker_NvSORT.yml
# ll-config-file=config_tracker_NvDCF_perf.yml
ll-config-file=config_tracker_NvDCF_accuracy.yml
# ll-config-file=config_tracker_NvDeepSORT.yml
gpu-id=0
display-tracking-id=1

[tests]
file-loop=1

/opt/nvidia/deepstream/deepstream-6.4/samples/configs/deepstream-app/config_infer_primary_PeopleNet.txt

[property]
## model-specific params. The paths will be different if the user sets up in different directory.
int8-calib-file=../../models/peoplenet/resnet34_peoplenet_int8.txt
labelfile-path=../../models/peoplenet/labels.txt
tlt-encoded-model=../../models/peoplenet/resnet34_peoplenet_int8.etlt
tlt-model-key=tlt_encode

gpu-id=0
net-scale-factor=0.0039215697906911373
input-dims=3;544;960;0
uff-input-blob-name=input_1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
interval=0
gie-unique-id=1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=3
maintain-aspect-ratio=1

[class-attrs-all]
pre-cluster-threshold=0.1429
nms-iou-threshold=0.4688
minBoxes=3
dbscan-min-score=0.7726
eps=0.2538
detected-min-w=20
detected-min-h=20

I hope this issue will not be closed without a solution.
Is there any help?

Please check out the documentation, stating

“The Re-ID has a spatial-temporal constraint. If an object moves out of frame or gets occluded beyond maxShadowTrackingAge, it will be assigned a new ID even if it returns into the frame.”

Basically, what you mentioned in the question is not something DeepStream can’t do; it’s something DeepStream doesn’t do for now.

If this is a required feature, we can plan to add that support. But for now, an object that goes out of the frame and re-appears will get a different ID. Associating those two IDs is something users can do as post-processing.
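That post-processing could be sketched as follows. This is a hypothetical example, not a DeepStream API: it assumes you have already exported one mean ReID embedding per track ID (e.g. from the tracker's ReID metadata), and it merges IDs whose embeddings are sufficiently similar:

```python
import numpy as np

def merge_track_ids(track_embeddings, sim_threshold=0.7):
    """Associate track IDs post hoc by cosine similarity of ReID embeddings.

    track_embeddings: dict mapping track_id -> mean ReID embedding (1-D array)
    Returns a dict mapping every track_id to its canonical (merged) id.
    """
    ids = sorted(track_embeddings)
    canonical = {tid: tid for tid in ids}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if canonical[b] != b:
                continue  # b was already merged into an earlier track
            va, vb = track_embeddings[a], track_embeddings[b]
            sim = float(np.dot(va, vb)
                        / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))
            if sim >= sim_threshold:
                # Treat b as a re-appearance of a's identity
                canonical[b] = canonical[a]
    return canonical
```

In practice you would also gate this on time (only merge a new ID with tracks that ended recently) and tune the threshold on your own footage.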

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.