Recommended Custom Bounding Box Parser not working for Yolov4

I’m using TLT 3.0 on my Jetson Nano (JetPack 4.5.1) and am trying to get my TLT-trained YOLOv4 model running in deepstream-app.

The issue I’m running into is that no bounding boxes show up in the video for my YOLOv4 model when using deepstream-app.

The recommendations here say to build libnvds_infercustomparser_yolov3_tlt.so, which I did.
They then say to put:

parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=<Path to libnvds_infercustomparser_yolov3_tlt.so>

in your primary GIE config file, which I also did.
This resulted in no bounding box detections in my video. I then spent a lot of time tweaking parameters in the main config and primary GIE config files, but nothing worked.

Eventually I found out this repo uses a different parser library than the one in the TLT documentation for its YOLOv4 config. So I compiled theirs and updated my config file with:

parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/nvidia/Desktop/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so

and now it works! I wanted to let your team know so they can test it and update the docs if needed, and in case anybody else is struggling with bounding boxes not being generated for their YOLOv4 model.
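For anyone else hitting this, the relevant part of my primary GIE config now looks roughly like the fragment below (the model path, key, label file, and class count are from my setup and are just placeholders here; yours will differ):

[property]
tlt-encoded-model=<path to your .etlt model>
tlt-model-key=<your model key>
labelfile-path=<path to your labels.txt>
network-type=0
num-detected-classes=<your number of classes>
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/home/nvidia/Desktop/deepstream_tlt_apps/post_processor/libnvds_infercustomparser_tlt.so

The key point is that the last two lines must reference the BatchedNMS parser and library from the deepstream_tlt_apps repo, not the YOLOv3 parser from the TLT docs.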

Thanks for catching this! Yes, please follow deepstream_tlt_apps/configs/yolov4_tlt at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub.
The TLT team will update the TLT 3.0 user guide.

@Morganh No worries. Another error I caught: in your Creating a Configuration File section, you list
batch_size: 8
under eval_config.
Yet further down, in the Evaluation Config section, there is no batch_size field.
So is batch_size ignored in that section, or is it a valid parameter there?

The batch_size is a valid parameter.
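For example, a minimal eval_config block in the YOLOv4 training spec can look like this (the values shown are only illustrative defaults, not recommendations):

eval_config {
  average_precision_mode: SAMPLE
  batch_size: 8
  matching_iou_threshold: 0.5
}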
