Does "KLT Tracker Init" affect the performance of custom models?

• Hardware Platform (Jetson / GPU): 2080Ti or TITAN V
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0-1+cuda10.2
• NVIDIA GPU Driver Version: 440.33.01

Recently I tried to build a face recognition detector based on the sample program deepstream-test5. The detector's model comes from RetinaFace. I built the corresponding .engine file according to that project's README, wrote and compiled the .so file containing the "parse-bbox-func" function according to the DeepStream SDK documentation, and modified the config.txt file.
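
For context, the "parse-bbox-func" in that .so follows the standard custom-parser prototype from nvdsinfer_custom_impl.h. This is only a minimal sketch of the entry point (the actual RetinaFace anchor/landmark decoding from the tensorrtx sample is omitted, and the buffer indices below are placeholders, not the real layout):

#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomRFace(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Single output blob ("prob" in the config below).
    const float *output = reinterpret_cast<const float *>(outputLayersInfo[0].buffer);

    // ... decode anchors, landmarks and confidences from `output` here ...

    NvDsInferObjectDetectionInfo obj;
    obj.classId = 0;                      // single "face" class
    obj.left   = output[0];               // placeholder indices
    obj.top    = output[1];
    obj.width  = output[2] - output[0];
    obj.height = output[3] - output[1];
    obj.detectionConfidence = output[4];
    if (obj.detectionConfidence >= detectionParams.perClassPreclusterThreshold[0])
        objectList.push_back(obj);

    return true;
}

// Checks that the function matches the prototype nvinfer expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomRFace);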

But during execution I found that my program could hardly produce any detections. Looking at the output, I found that the model's confidence for almost all anchor predictions is 0.5. After checking the whole pipeline, I had the model print the positive- and negative-class predictions before the final Softmax layer. The result is surprising:

..............
conf1: 4.070312; conf2: -3.796875
conf1: 4.609375; conf2: -4.125000
conf1: 4.750000; conf2: -4.492188
conf1: 4.601562; conf2: -4.429688
conf1: 4.460938; conf2: -4.164062
conf1: 3.728516; conf2: -3.623047
conf1: 4.351562; conf2: -4.085938
conf1: 4.359375; conf2: -3.958984
conf1: 4.253906; conf2: -3.611328
KLT Tracker Init
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000000; conf2: 0.000000
conf1: 0.000004; conf2: 0.000105
conf1: 0.000000; conf2: 0.000000
.....................

Before "KLT Tracker Init" the model worked correctly, but after that it could hardly predict anything. Does this tracker affect any part of the program?
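
Note that 0.5 is exactly what a two-class Softmax produces when both logits are zero, which matches the all-zero conf1/conf2 values after "KLT Tracker Init". A small sketch of that check (assuming conf1 is treated as the positive class here):

#include <cmath>
#include <cstdio>

// Two-class softmax over (conf1, conf2), returning the positive-class probability.
static float softmaxPositive(float conf1, float conf2)
{
    float e1 = std::exp(conf1), e2 = std::exp(conf2);
    return e1 / (e1 + e2);
}

int main()
{
    std::printf("%f\n", softmaxPositive(4.070312f, -3.796875f)); // ~0.999617 (before tracker init)
    std::printf("%f\n", softmaxPositive(0.0f, 0.0f));            // 0.500000  (after tracker init)
    return 0;
}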

On the other hand, when I disable the tracker, my program still has the same problem; the only difference is that "KLT Tracker Init" is no longer printed.


Here is part of my config files:

test5_config_file_src_infer.txt

[streammux]
gpu-id=1
live-source=0
batch-size=4
batched-push-timeout=40000
width=1600
height=928
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=1
batch-size=4
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=/shared/tensorrtx/retinaface/build/retina_r50_CUDA1.engine
labelfile-path=labels.txt
config-file=config_infer_primary.txt

[tracker]
enable=0
tracker-width=600
tracker-height=288
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gpu-id=1
enable-batch-process=0

config_infer_primary.txt

[property]
gpu-id=1
net-scale-factor=1
offsets=117.0;104.0;123.0
model-engine-file=/shared/tensorrtx/retinaface/build/retina_r50_CUDA1.engine
batch-size=30
process-mode=1
model-color-format=1
network-mode=2
num-detected-classes=1
network-type=0
interval=0
gie-unique-id=1
output-blob-names=prob
force-implicit-batch-dim=1
parse-bbox-func-name=NvDsInferParseCustomRFace
custom-lib-path=…/nvdsinfer_custom_impl_RetinaFace/libnvdsinfer_custom_impl_RetinaFace.so

[class-attrs-all]
pre-cluster-threshold=0.2

THX!!!

So this issue is not related to the tracker; I don't think it will affect your model.