LPRNet custom trained model ERROR: [TRT]: UffParser: Could not read buffer

My device configuration is as follows:

NVIDIA Jetson AGX Xavier [16GB]
  L4T 32.5.1 [JetPack 4.5.1]
  Ubuntu 18.04.5 LTS
  Kernel Version: 4.9.201-tegra
  CUDA 10.2.89
  CUDA Architecture: 7.2
  OpenCV version: 4.1.1
  OpenCV CUDA: NO
  cuDNN: 8.0.0.180
  TensorRT: 7.1.3.0
  VisionWorks: 1.6.0.501
  VPI: ii libnvvpi1 1.0.15 arm64 NVIDIA Vision Programming Interface library
  Vulkan: 1.2.70

I have trained an LPRNet model using TLT 3 in Docker and exported it properly (model key nvidia_tlt, FP32 precision) following the LPRNet sample notebook.
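For reference, the export command followed the notebook and was roughly this (paths and the epoch number are placeholders for this post; the .etlt was written next to the .tlt):

tlt lprnet export -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/lprnet_epoch-24.tlt \
                  -k nvidia_tlt \
                  -e $SPECS_DIR/tutorial_spec.txt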

When I try to use it as sgie_1 in the DeepStream app, I get the following error.

0:00:00.371278013  5964   0x557f1bd060 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 5]: Trying to create engine from model files
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: failed to build network since parsing model errors.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:00.920055257  5964   0x557f1bd060 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 5]: build engine file failed

My sgie_1 config file is as follows:

[property]
gpu-id=0
### US LPR #####
#model-engine-file=../models/LP/LPR/lpr_us_onnx_b16.engine
labelfile-path=../models/LP/LPR/labels_us.txt
tlt-encoded-model=/home/nvidia/Documents/lprnet_epoch-latestt-fp32-100.etlt
tlt-model-key=nvidia_tlt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
gie-unique-id=3
uff-input-blob-name=Input
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/home/nvidia/Documents/deepstream-5.1/sources/apps/sample_apps/deepstream_lpr_app/nvinfer_custom_lpr_parser/libnvdsinfer_custom_impl_lpr.so
process_mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5

When I enable the US LPR ONNX model as described in the tutorial, it works fine. I am totally confused: I have checked the model key and the other parameters, but no luck. I have also tried the .engine file exported from the .etlt trained on TLT 3, but it gives a different error:

<nvdsinfer_context_impl.cpp:1716> [UID = 5]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:00.676098384  6860   0x55b8f3c860 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 5]: build engine file failed
0:00:00.676202581  6860   0x55b8f3c860 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 5]: build backend context failed
0:00:00.676231478  6860   0x55b8f3c860 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 5]: generate backend failed, check config file settings

Please help!

Please convert the encrypted LPR ONNX model (.etlt) to a TensorRT engine.
I suggest you try following GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream, in particular its reference config deepstream_lpr_app/lpr_config_sgie_us.txt at master.
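For example, a minimal sketch to pull the repo and locate the reference sgie config for comparison (only the repository URL is taken from the link above; the file's exact path inside the repo may vary by branch, hence the find):

git clone https://github.com/NVIDIA-AI-IOT/deepstream_lpr_app.git
cd deepstream_lpr_app
find . -name lpr_config_sgie_us.txt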

@Morganh I trained a custom LPRNet model, exported it as .etlt, and when running it on DeepStream I get the error mentioned above. Why should I convert the ONNX model? The US LPR ONNX model works fine.

See LPRNet — TAO Toolkit 3.0 documentation

LPRNet .etlt cannot be parsed by DeepStream directly. You should use tao-converter to convert the .etlt model to an optimized TensorRT engine and then integrate the engine into the DeepStream pipeline.
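A rough sketch of the conversion on the Jetson (assuming the tao-converter binary that matches your JetPack/TensorRT version, the default LPRNet input blob image_input with a 3x48x96 input, the 1/4/16 batch profile used by the sample app, and the .etlt path from your config; the output engine name is just a placeholder):

./tao-converter -k nvidia_tlt \
                -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
                -t fp32 \
                -e /home/nvidia/Documents/lprnet_custom_b16.engine \
                /home/nvidia/Documents/lprnet_epoch-latestt-fp32-100.etlt

Here -t fp32 matches network-mode=0 in your config. Then set model-engine-file in the sgie config to the generated engine; the tlt-encoded-model and tlt-model-key lines should no longer be needed.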