LPRNet trained with TAO Toolkit 4.0.0 does not compile with DeepStream 6.1.1

Please provide the following information when requesting support.

• Hardware: any
• Network Type: LPRNet
• Tao Version: 4.0.0, format version: 2.0
• DeepStream nvinfer config file:

[property]
gpu-id=0
force-implicit-batch-dim=0
tlt-encoded-model=/lpr.etlt
model-engine-file=/lpr.etlt_b32_gpu0_fp16.engine
labelfile-path=/lpr_labels.txt
tlt-model-key=nvidia_tlt
batch-size=32
scaling-compute-hw=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=3
gie-unique-id=3
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#0=Detection 1=Classifier 2=Segmentation
network-type=1
parse-classifier-func-name=NvDsInferParseCustomNVPlate
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/libs/anpr_layer/libnvdsinfer_custom_impl_lpr.so
process-mode=2
operate-on-gie-id=2
net-scale-factor=0.00392156862745098
#net-scale-factor=1.0
#0=RGB 1=BGR 2=GRAY
model-color-format=0

[class-attrs-all]
threshold=0.5

DeepStream version: 6.1.1

• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

I want to train and deploy the LPRNet model with DeepStream. There weren't any problems with the previous version of TAO Toolkit (I think 3.0.0). The steps were as follows:

  1. Train the LPRNet model
  2. Export the .etlt model (see the sketch after this list)
  3. Put the raw .etlt model into the DeepStream models directory
  4. DeepStream compiles the model to an .engine without problems
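
For step 2, a minimal sketch of the export command I mean (the model and spec paths are placeholders; the key matches the tlt-model-key in the config above):

# Export the trained .tlt model to .etlt (placeholder paths)
tao lprnet export -m /workspace/tao-experiments/lprnet/weights/lprnet_epoch-24.tlt \
                  -k nvidia_tlt \
                  -e /workspace/tao-experiments/lprnet/specs/tutorial_spec.txt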

However, with TAO 4.0.0 I can't compile the model directly inside the DeepStream app. Below are the logs from DeepStream trying to build the engine from the .etlt model:

0:00:00.738115874     7      0x45e3c60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: tlt/tlt_decode.cpp:389 Failed to build network, error in model parsing.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:01.110047707     7      0x45e3c60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 3]: build engine file failed
 7 Segmentation fault

I've also tried converting the .etlt model to an .engine file with the TAO Toolkit and testing that. The logs are:

ERROR: [TRT]: 1: [stdArchiveReader.cpp::StdArchiveReader::40] Error Code 1: Serialization (Serialization assertion stdVersionRead == serializationVersion failed.Version tag does not match. Note: Current Version: 205, Serialized Engine Version: 232)
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /lpr.etlt_b32_gpu0_fp16.engine
0:00:00.767097152     7      0x410e660 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 3]: deserialize engine from file :/lpr.etlt_b32_gpu0_fp16.engine failed
0:00:00.786249154     7      0x410e660 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 3]: deserialize backend context from engine from file :/lpr.etlt_b32_gpu0_fp16.engine failed, try rebuild
0:00:00.786266988     7      0x410e660 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 3]: Trying to create engine from model files
parseModel: Failed to parse ONNX model
ERROR: tlt/tlt_decode.cpp:389 Failed to build network, error in model parsing.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
 0:00:01.143307313     7      0x410e660 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary-inference-1> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 3]: build engine file failed
./start.sh: line 15:     7 Segmentation fault      (core dumped) python3 run.py
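
For reference, the serialization assertion above (Current Version: 205, Serialized Engine Version: 232) means the .engine file was built with a different TensorRT version than the one DeepStream 6.1.1 loads at runtime; TensorRT engines are not portable across TensorRT versions. A minimal sketch to check the runtime TensorRT version inside the DeepStream container (assuming a Debian-based image; the Python check only works if the bindings are installed):

# Check which TensorRT version DeepStream links against
dpkg -l | grep -Ei 'nvinfer|tensorrt'
# or, if the Python bindings are installed:
python3 -c "import tensorrt; print(tensorrt.__version__)"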

I've attached the custom LPR parser (downloaded from the GitHub repo):
nvinfer_custom_lpr_parser.cpp (4.6 KB)

Question:

  1. Is it possible to train an LPRNet model with TAO Toolkit 4.0.0 and export an .etlt model that DeepStream compiles automatically? I've done this successfully with models trained with earlier versions of TAO Toolkit.
  2. If not, which version of TAO Toolkit should I use so that exported models are compiled automatically?

I believe it is possible, because it works with an LPRNet model trained with one of the previous versions of TAO Toolkit, but I don't know which version I should use; I didn't find previous versions in the pip repository.

Update: When I export the old .tlt model (trained in March 2022) with TAO Toolkit 4.0.0, I am not able to compile the result with DeepStream 6.1.1, but when I use the .etlt model exported from the previous training (March 2022), it works as expected.

So my questions are the same as above, with one addition:

  3. Where can I find TAO Toolkit 3.0.0?

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

See LPRNet — TAO Toolkit 4.0 documentation. The LPRNet .etlt cannot be parsed by DeepStream directly. You should use tao-converter to convert the .etlt model to an optimized TensorRT engine and then integrate the engine into the DeepStream pipeline.
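
A sketch of such a conversion, run on the deployment machine so the engine matches its TensorRT version. The image_input blob name and the 3x48x96 input shape follow the LPRNet model card and are assumptions here; the max profile shape matches the batch-size=32 in the config above, so adjust both to your model:

# Build an FP16 TensorRT engine from the .etlt on the target machine
./tao-converter -k nvidia_tlt \
                -p image_input,1x3x48x96,16x3x48x96,32x3x48x96 \
                -t fp16 \
                -e /lpr.etlt_b32_gpu0_fp16.engine \
                /lpr.etlt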
You can find TAO 3.0 in TAO Toolkit for Computer Vision | NVIDIA NGC. Then start the container via $ docker run --runtime=nvidia -it --rm --entrypoint="" nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3 /bin/bash
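
Once inside that container, the network entry points are invoked directly (no tao launcher); a sketch with placeholder paths:

# Train and export inside the 3.21.08 container (placeholder paths)
lprnet train -e /workspace/specs/tutorial_spec.txt -r /workspace/results -k nvidia_tlt
lprnet export -m /workspace/results/weights/lprnet_epoch-24.tlt -k nvidia_tlt \
              -e /workspace/specs/tutorial_spec.txt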
