Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Orin 64G development kit
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1
I trained a model in TAO and got good detections when testing with test images (10% of the dataset). Detection is also good on new images.
But after converting to ONNX and building the engine file, there are no detections at all when testing with deepstream-app.
The config files and model are attached: rectitude_config_infer_primary.txt (4.0 KB), rectitude_main.txt (7.2 KB)
What could be wrong?
How did you test the model before using DeepStream? Please make sure the preprocessing and postprocessing parameters are correct. Please refer to this FAQ: Debug Tips for DeepStream Accuracy Issue. For simplicity, please use deepstream-test1 to debug first.
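For reference, the preprocessing parameters that most often cause a trained model to produce no detections in DeepStream live in the [property] group of the nvinfer config. The values below are an illustrative sketch, not the poster's actual settings; they must match whatever normalization TAO used during training:

```
[property]
# Normalization must match TAO training (values below are examples only)
net-scale-factor=0.00392156862745098   # 1/255 for models trained on [0,1] inputs
offsets=0;0;0                          # per-channel mean subtraction, if any
model-color-format=0                   # 0=RGB, 1=BGR - must match training
infer-dims=3;384;1248                  # example input dims; use your model's
network-type=0                         # 0=detector
cluster-mode=2                         # example clustering mode; depends on parser
```

If any of these disagree with the training pipeline, the engine will run but produce empty or garbage detections, which matches the symptom described above.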
I have a similar model trained with the TAO library for YOLOv4, running on DeepStream version 7.0.
It works; I can see all detections.
The configuration I used is below.
On version 7.0 I used the library libnvds_infercustomparser_tlt.so.
For 7.1, I am using libnvds_infercustomparser_tao.so.
That is the only difference between the 7.0 and 7.1 setups.
DeepStream 7.0 with libnvds_infercustomparser_tlt.so produces detections;
DeepStream 7.1 with libnvds_infercustomparser_tao.so produces no detections.
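For comparison, the custom parser is wired into the nvinfer config with the two keys below. This is a sketch; the function name is the one used by the YOLOv4 samples in deepstream_tao_apps, and the library path is a placeholder you must adjust to where the .so was built on your system:

```
# Custom bbox parser selection (illustrative; verify against your deepstream_tao_apps build)
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/path/to/post_processor/libnvds_infercustomparser_tao.so
```

If parse-bbox-func-name does not name a function actually exported by the library in custom-lib-path, nvinfer will either fail to load the parser or silently return no objects, so it is worth confirming both values when switching from the tlt to the tao library.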
I tried to build deepstream_tlt_apps in DeepStream 7.1. I get these errors:
(gst-plugin-scanner:36386): GStreamer-WARNING **: 16:29:03.863: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
g++ -o libnvds_infercustomparser_tlt.so nvdsinfer_custombboxparser_tlt.cpp -I/opt/nvidia/deepstream/deepstream-7.1/sources/includes -I/usr/local/cuda-12.6/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-12.6/lib64 -lcudart -lcublas -Wl,--end-group
/usr/bin/ld: cannot find -lnvparsers: No such file or directory
collect2: error: ld returned 1 exit status
That system is deployed at another site. Why do I get errors building post_processor from deepstream_tlt_apps on DeepStream 7.1? If I could build it on the new device, I would not need to go there and copy the library.
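Two notes on the errors above, offered as likely explanations rather than confirmed facts. The librivermax.so.0 warning is benign: libnvdsgst_udp only needs the optional Rivermax SDK and is unrelated to inference. The link failure is the real problem: JetPack 6.1 ships TensorRT 10, which no longer provides libnvparsers, so the old Makefile's -lnvparsers flag cannot resolve. A sketch of the adjusted link command, simply dropping that flag (paths assume the stock DeepStream 7.1 / CUDA 12.6 layout shown in the error output):

```
# Same command as above with -lnvparsers removed (libnvparsers was dropped in TensorRT 10)
g++ -o libnvds_infercustomparser_tlt.so nvdsinfer_custombboxparser_tlt.cpp \
    -I/opt/nvidia/deepstream/deepstream-7.1/sources/includes \
    -I/usr/local/cuda-12.6/include \
    -Wall -std=c++11 -shared -fPIC \
    -Wl,--start-group -lnvinfer -L/usr/local/cuda-12.6/lib64 -lcudart -lcublas -Wl,--end-group
```

If the repository's Makefile hardcodes -lnvparsers, removing it there (or using the DeepStream 7.1 branch of deepstream_tao_apps, which targets TensorRT 10) should let the post_processor build on the new device.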