LPRnet ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd

NVIDIA Jetson AGX Xavier [16GB]
L4T 32.4.3 [ JetPack 4.4 ]
Ubuntu 18.04.4 LTS
Kernel Version: 4.9.140-tegra
CUDA 10.2.89
CUDA Architecture: 7.2
OpenCV version: 4.1.1
OpenCV Cuda: NO
CUDNN: 8.0.0.180
TensorRT: 7.1.3.0
VisionWorks: 1.6.0.501
VPI: 0.4.4
Vulkan: 1.2.70

I have trained a custom LPRnet model with TAO and generated the engine with tao-converter for running on DLA core 1:

./tao-converter lprnet_epoch-25-octane.etlt -k nvidia_tlt -b 8 -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 -t int8 -u 1 -e LPRnet_25_int8_octane_dla1.engine
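For reference, the bindings a serialized engine actually exposes can be listed with the TensorRT Python API. This is a minimal sketch, assuming the python3 TensorRT bindings that ship with JetPack 4.4; the engine path and DLA core below match my setup and would need adjusting for anyone else:

import tensorrt as trt

ENGINE_PATH = "LPRnet_25_int8_octane_dla1.engine"  # adjust to your engine

logger = trt.Logger(trt.Logger.WARNING)
with trt.Runtime(logger) as runtime:
    runtime.DLA_core = 1  # this engine was built for DLA core 1
    with open(ENGINE_PATH, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    # Print every binding: direction, name, and shape
    for i in range(engine.num_bindings):
        direction = "input " if engine.binding_is_input(i) else "output"
        print(direction, engine.get_binding_name(i), engine.get_binding_shape(i))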

The engine loads and the app keeps running on my setup, but it prints the following errors during initialization.

0:00:03.176761835  4346   0x55c24ed530 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 5]: deserialized trt engine from :/home/nvidia/deepstream-app/deepstream_lpr_app/models/LP/LPR/LPRnet_25_int8_octane_dla1.engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:03.176971124  4346   0x55c24ed530 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 5]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:03.176996725  4346   0x55c24ed530 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 5]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:03.177035319  4346   0x55c24ed530 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 5]: Use deserialized engine model: /home/nvidia/deepstream-app/deepstream_lpr_app/models/LP/LPR/LPRnet_25_int8_octane_dla1.engine
0:00:03.182477835  4346   0x55c24ed530 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/home/nvidia/deepstream-app/./deepstream_lpr_app/deepstream-lpr-app/lpr_config_sgie_us.txt sucessfully

My LPRnet config file is below:

[property]
gpu-id=0
labelfile-path=../models/LP/LPR/labels_us.txt
model-engine-file=../models/LP/LPR/LPRnet_25_int8_octane_dla1.engine
tlt-model-key=nvidia_tlt
batch-size=8
enable-dla=1
use-dla-core=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
num-detected-classes=3
gie-unique-id=5
uff-input-blob-name=image_input
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=../nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process_mode=2
operate-on-gie-id=4
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

classifier-threshold=0.98

[class-attrs-all]
#threshold=0.999

Hi,

Would you mind double-checking the output names first?
LPRnet is a classifier, but output_bbox/BiasAdd and output_cov/Sigmoid are detector outputs (they look like DetectNet_v2 bindings), so they will not exist in an LPRnet engine.
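If the engine was exported from TAO LPRnet, the classifier bindings usually have different names (on LPRnet engines I have seen, something like tf_op_layer_ArgMax and tf_op_layer_Max), but please confirm against the binding list printed by the Python sketch above before editing the config. Purely as an illustration, the line would then become:

# illustrative names - confirm against your own engine's bindings
output-blob-names=tf_op_layer_ArgMax;tf_op_layer_Max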

Thanks.