Yolov4 with Deepstream 5.0 (Python sample apps), dlsym failed to get func NvDsInferParseCustomBatchedNMSTLT pointer

Hi!
I'm trying to run a YOLOv4 model on my Jetson Nano (JetPack 4.5), but I get an error when running the model with the Python sample apps.
I followed the documentation to export the model (exported in both FP16 and INT8).
Here is the command I used:

!yolo_v4 export -m $USER_EXPERIMENT_DIR/data/kitti/final-test/yolo_v4/weights/yolov4_resnet18_epoch_012_pruned.tlt
-o $USER_EXPERIMENT_DIR/data/kitti/final-test/yolov4_resnet18_epoch_020_pruned_int8.etlt
-e $SPECS_DIR/yolo_v4_retrain_resnet18_kitti.txt
-k $KEY
--cal_image_dir $USER_EXPERIMENT_DIR/data/kitti/test/ground-truth/images
--data_type int8
--batch_size 1
--batches 10
--cal_cache_file $USER_EXPERIMENT_DIR/data/kitti/final-test/yolo_v4/weights/cal.bin
--cal_data_file $USER_EXPERIMENT_DIR/data/kitti/final-test/yolo_v4/weights/cal.tensorfile

Then I converted the model using this command:

sudo ./tlt-converter -k tlt -d 3,512,512 -o BatchedNMS -e yolov4.engine -c cal.bin -m 1 -b 1 yolov4_resnet18_epoch_020_pruned_int8.etlt -w 500000000

After that, I tried to run it with one of the Python sample apps that I had used for Faster RCNN. It works with Faster RCNN but not with YOLO; it appears to be an error with a TRT OSS plugin.
Here is the error I get:

Creating Pipeline
Creating Source
Creating EGLSink

Unknown or legacy key specified 'infer_dims' for group [property]
Unknown or legacy key specified 'clustor-mode' for group [property]
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline

Using winsys: x11
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:09.862765218  2282     0x31a03290 INFO                 nvinfer                             gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference>     NvDsInferContext[UID 1]: Info from     NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/engines/yolov4-new-int8.engine
INFO: [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT Input           3x512x512       
1   OUTPUT kINT32 BatchedNMS      0               
2   OUTPUT kFLOAT BatchedNMS_1    200x4           
3   OUTPUT kFLOAT BatchedNMS_2    200             
4   OUTPUT kFLOAT BatchedNMS_3    200             

0:00:09.863084705  2282     0x31a03290 INFO                 nvinfer    gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/engines/yolov4-new-int8.engine
0:00:10.148438821  2282     0x31a03290 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:683> [UID = 1]: Detect-postprocessor failed to init resource because dlsym failed to get func NvDsInferParseCustomBatchedNMSTLT pointer
ERROR: Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:10.195063451  2282     0x31a03290 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:10.195144547  2282     0x31a03290 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Config file path: dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest2_pgie_config.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED

Here is the listing of my folder /usr/lib/aarch64-linux-gnu:

ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 28 May 23 11:32 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so → libnvinfer_plugin.so.7.0.0.1*
lrwxrwxrwx 1 root root 28 May 23 11:32 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 → libnvinfer_plugin.so.7.0.0.1*
lrwxrwxrwx 1 root root 28 May 23 12:06 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 → libnvinfer_plugin.so.7.0.0.1*
-rwxr-xr-x 1 root root 3420160 May 23 11:23 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0.1*
-rwxr-xr-x 1 root root 3420160 May 23 11:23 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.3*
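Note that all three symlinks still resolve to libnvinfer_plugin.so.7.0.0.1, even though a 7.1.3 build (presumably the TRT OSS rebuild) sits next to them. If the 7.1.3 build is the one that should be loaded, the links need repointing. A minimal sketch in a scratch directory (on the Nano you would do the same under /usr/lib/aarch64-linux-gnu with sudo, followed by ldconfig):

```shell
# Demonstration in a temp directory; file names mirror the listing above.
tmp=$(mktemp -d)
cd "$tmp"
touch libnvinfer_plugin.so.7.1.3                  # stand-in for the rebuilt plugin
ln -sf libnvinfer_plugin.so.7.1.3 libnvinfer_plugin.so.7
ln -sf libnvinfer_plugin.so.7 libnvinfer_plugin.so
ls -l libnvinfer_plugin.so*                       # the symlink chain now ends at 7.1.3
```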

And here are the scripts I use to run them:
deepstream_test_2.py (14.5 KB)
dstest2_pgie_config.txt (4.0 KB)
Do you have an idea how to solve this issue? Thanks!

Please refer to the yolo_v4 DeepStream config file in the YOLOv4 — Transfer Learning Toolkit 3.0 documentation
or to deepstream_tlt_apps/pgie_yolov4_tlt_config.txt at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub.
Your dstest2_pgie_config.txt for yolo_v4 is not correct; the "Unknown or legacy key" warnings in your log already point at it.
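For reference, the parser hookup in a working YOLOv4 pgie config looks roughly like this (a sketch, not your exact file; the custom-lib path is an assumption, point it at the library you built from the repo above):

```
[property]
# function that nvinfer resolves via dlsym; must be exported by custom-lib-path
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
# assumed path: the parser library built from the deepstream_tlt_apps repo
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser_tlt.so
```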

I corrected the config file, but the issue was still there. I finally solved it by replacing libnvds_infercustomparser.so with the libnvds_infercustomparser-tlt.so that is inside the postprocessor folder of the deepstream_tlt_apps GitHub repo. Mine was out of date and didn't include the custom parser for YOLOv4.

Thanks for the help!