Cannot run YOLOv4 model in secondary inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only) 4.6.1 GA
• TensorRT Version 8

Recently, I trained two YOLOv4 models with the TAO Toolkit.
The first model detects pigs and the food container.
The second model detects the classes pig-lie and pig-stand.
My goal is to combine the two models to track pigs and determine their activity status.
The first model runs as the primary inference and the second as the secondary.
I use the deepstream-test2 example because its pipeline is close to what I need.
The example uses an H.264 elementary stream, so I edited deepstream_test_2.py to read an mp4 file instead (a sketch of that change is below), and it works with the default configs.
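
A minimal sketch of that change, assuming the mp4 contains an H.264 video track (the file path and element names here are placeholders, not the exact code from my script):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

def on_demux_pad_added(demux, pad, parser):
    # qtdemux pads only appear at runtime, so the video branch is
    # linked to the H.264 parser in this callback
    if pad.get_name().startswith("video"):
        pad.link(parser.get_static_pad("sink"))

pipeline = Gst.Pipeline.new("pipeline")
source = Gst.ElementFactory.make("filesrc", "file-source")
demux = Gst.ElementFactory.make("qtdemux", "qtdemux")
h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")

for elem in (source, demux, h264parser):
    pipeline.add(elem)

source.set_property("location", "/path/to/video.mp4")  # placeholder path
source.link(demux)
demux.connect("pad-added", on_demux_pad_added, h264parser)
# h264parser then links to nvv4l2decoder and the rest of the
# deepstream-test2 pipeline exactly as in the original script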
After running the example successfully with the mp4 file, I edited the config files to use my custom models.
Now when I run the example, the primary inference works, but the secondary does not: either it does not run at all, or its results are not shown in the video.

My primary config is here:

[property]
gpu-id=0
net-scale-factor=1.0
model-engine-file=/home/jetsontn/Downloads/export_PigAndFeeder/yolov4_resnet18_epoch_180.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/jetsontn/Downloads/export_PigAndFeeder/labels.txt
tlt-model-key=nvidia_tlt
offsets=103.939;116.779;123.68
infer-dims=3;704;704

force-implicit-batch-dim=1
batch-size=1
network-mode=1
process-mode=1
model-color-format=1
num-detected-classes=2
interval=0
gie-unique-id=1

output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

And my secondary config:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68

model-engine-file=/home/jetsontn/Downloads/export_LieAndStand/yolov4_resnet18_epoch_500.etlt_b1_gpu0_fp16.engine
labelfile-path=/home/jetsontn/Downloads/export_LieAndStand/labels.txt

force-implicit-batch-dim=1
batch-size=1

network-mode=1
input-object-min-width=20 # 64
input-object-min-height=20 #64
process-mode=2
model-color-format=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;1
is-classifier=1

output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so

classifier-async-mode=1
classifier-threshold=0.51
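
For completeness, here is roughly how the two configs are attached to the nvinfer elements in deepstream_test_2.py (the config file names below are placeholders for my actual paths):

# Using the same Gst import/init as in the sketch above
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")

# Each nvinfer element reads one of the config files shown above;
# in the pipeline the order is pgie -> tracker -> sgie, as in the example
pgie.set_property("config-file-path", "pgie_pig_feeder_config.txt")  # primary config (placeholder name)
sgie.set_property("config-file-path", "sgie_lie_stand_config.txt")   # secondary config (placeholder name)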

Please help me.

It seems you are using async mode; can you try disabling it? Also, can you confirm that "operate-on-class-ids=0;1" is correct for your primary model's classes?
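
For example, you could change this line in your secondary config:

classifier-async-mode=0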

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.