No detection after conversion to engine in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Orin 64G development kit
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only) 6.1

I trained a model in TAO and got good detections when testing with test images (10% of the dataset). Detection is good with new images as well.
But after conversion to ONNX and an engine file, there is no detection at all when tested with deepstream-app.
The config files and model are attached.
rectitude_config_infer_primary.txt (4.0 KB)
rectitude_main.txt (7.2 KB)
What could be wrong?

How did you test the model before using DeepStream? Please make sure the preprocessing parameters and postprocessing are correct; please refer to this FAQ: Debug Tips for DeepStream Accuracy Issue. For simplicity, please use deepstream-test1 to debug first.
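As a quick sanity check independent of DeepStream, you can also confirm that the engine itself loads and runs with TensorRT's trtexec. This is a sketch: the path below assumes the default TensorRT location on Jetson, and the engine file name is taken from your config.

# Load the already-built engine and run inference with random inputs;
# if this fails or the reported output shapes look wrong, the problem
# is in the model export rather than in the DeepStream config.
/usr/src/tensorrt/bin/trtexec --loadEngine=rectitude_fp16.onnx_b4_gpu0_fp16.engine

If trtexec runs cleanly, the issue is more likely in the nvinfer preprocessing (net-scale-factor, offsets, model-color-format) or in the bounding-box parser.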

I have a similar model trained for YOLOv4 using the TAO library, running on DeepStream version 7.0.
It works; I can see all detections.
Please see the configuration I used below.
I used the library libnvds_infercustomparser_tlt.so for version 7.0.

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
onnx-file=../../models/yolo4_resnet18/yolov4_resnet18_epoch_200.onnx
#int8-calib-file=../../models/yolo4_resnet18/cal.bin
labelfile-path=../../models/yolo4_resnet18/labels.txt
model-engine-file=../../models/yolo4_resnet18/yolov4_resnet18_epoch_200.onnx_b2_gpu0_fp16.engine
infer-dims=3;384;1248
maintain-aspect-ratio=1
uff-input-order=0
uff-input-blob-name=Input
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
#no cluster
cluster-mode=3
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/sources/apps/sample_apps/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so

But now I have a new system, and the DeepStream version is 7.1.
I have the configuration below.

[property]
gpu-id=0
net-scale-factor=0.00392156862745098
offsets=103.939;116.779;123.68
onnx-file=/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/rectitude/1/rectitude_fp16.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/rectitude/1/rectitude_fp16.onnx_b4_gpu0_fp16.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_reference_apps/deepstream_parallel_inference_app/tritonserver/models/rectitude/1/labels.txt
#int8-calib-file=../../models/Primary_Detector/cal_trt.bin
batch-size=4
process-mode=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=6
interval=0
gie-unique-id=1
is-classifier=0
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/this/directory/libnvds_infercustomparser.so
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
#scaling-filter=0
#scaling-compute-hw=0

For 7.1, I am using libnvds_infercustomparser_tao.so.
That is the only difference between the 7.0 and 7.1 setups.
But DeepStream 7.0 with libnvds_infercustomparser_tlt.so has detections, while DeepStream 7.1 with libnvds_infercustomparser_tao.so has no detections.
I tried to build deepstream_tlt_apps in DeepStream 7.1, and I get the following errors:

(gst-plugin-scanner:36386): GStreamer-WARNING **: 16:29:03.863: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
g++ -o libnvds_infercustomparser_tlt.so nvdsinfer_custombboxparser_tlt.cpp -I/opt/nvidia/deepstream/deepstream-7.1/sources/includes -I/usr/local/cuda-12.6/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-12.6/lib64 -lcudart -lcublas -Wl,--end-group
/usr/bin/ld: cannot find -lnvparsers: No such file or directory
collect2: error: ld returned 1 exit status

Since the only difference between the two libraries is NvDsInferParseCustomBatchedNMSTLT, you can copy the code of this function from DS 7.0 to DS 7.1.
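For reference, below is a minimal sketch of what NvDsInferParseCustomBatchedNMSTLT typically does, assuming the standard four BatchedNMS output layers (keep count, boxes, scores, classes) and normalized box coordinates as in the TAO samples; treat the sources shipped in deepstream_tao_apps as the authoritative version.

#include <vector>
#include "nvdsinfer_custom_impl.h"

// Parses the BatchedNMS outputs into DeepStream object metadata.
// Assumed layer order: [0] kept-detection count (int32),
// [1] boxes (float x1,y1,x2,y2, normalized), [2] scores (float),
// [3] class indices (float).
extern "C" bool NvDsInferParseCustomBatchedNMSTLT(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.size() != 4)
        return false;  // expects exactly the four BatchedNMS outputs

    const int   *keepCount = static_cast<const int *>(outputLayersInfo[0].buffer);
    const float *boxes     = static_cast<const float *>(outputLayersInfo[1].buffer);
    const float *scores    = static_cast<const float *>(outputLayersInfo[2].buffer);
    const float *classes   = static_cast<const float *>(outputLayersInfo[3].buffer);

    for (int i = 0; i < keepCount[0]; ++i) {
        unsigned int classId = static_cast<unsigned int>(classes[i]);
        if (classId >= detectionParams.numClassesConfigured ||
            scores[i] < detectionParams.perClassPreclusterThreshold[classId])
            continue;  // drop detections below the configured threshold

        NvDsInferObjectDetectionInfo obj;
        obj.classId = classId;
        obj.detectionConfidence = scores[i];
        // Scale normalized corners back to the network input resolution.
        obj.left   = boxes[4 * i + 0] * networkInfo.width;
        obj.top    = boxes[4 * i + 1] * networkInfo.height;
        obj.width  = (boxes[4 * i + 2] - boxes[4 * i + 0]) * networkInfo.width;
        obj.height = (boxes[4 * i + 3] - boxes[4 * i + 1]) * networkInfo.height;
        objectList.push_back(obj);
    }
    return true;
}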

That system is deployed at another location. Why do I get errors building post_processor in deepstream_tlt_apps on DeepStream 7.1? If I can build on the new device, I don’t need to go there and copy the code.

Or you can remove -lnvparsers from the Makefile. libnvparsers (the Caffe/UFF parser library) was removed in the TensorRT release that ships with DeepStream 7.1, which is why the linker cannot find it.
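With that change, the link line from the error above would look like this (a sketch, untested; only -lnvparsers is dropped):

# Same command as before, minus the removed parser library.
g++ -o libnvds_infercustomparser_tlt.so nvdsinfer_custombboxparser_tlt.cpp -I/opt/nvidia/deepstream/deepstream-7.1/sources/includes -I/usr/local/cuda-12.6/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -L/usr/local/cuda-12.6/lib64 -lcudart -lcublas -Wl,--end-group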

OK, let me try next week.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!