Error converting .etlt to engine for LPRNet

Please provide the following information when requesting support.

• Hardware (Xavier, JetPack 4.4)
• Network Type (LPRNet)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (if you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)


After I retrained the license plate data in ‘lprnet.ipynb’, I got the ‘lprnet_epoch-24.etlt’ file, but when I convert it to an engine file with ‘tao-converter’ on Jetson, the following error occurs:

[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: axes
Aborted (core dumped)

The command I use:

./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 lprnet_epoch-24.etlt -t fp16 -e lpr_ch_onnx_b16_our.engine 
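
For reference, what each flag does here, as documented for tao-converter:

# -k  encryption key that was used when the .etlt model was exported
# -p  optimization profile for the dynamic-shape input: name,min_shape,opt_shape,max_shape
# -t  precision of the generated engine (fp16 here)
# -e  output path for the serialized TensorRT engine
# (the positional argument is the input .etlt file)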

Did you download the correct version of tao-converter?

I am using the correct “tao-converter” (for JetPack 4.4).

Can you share the full log?

This error typically indicates that the .etlt model was exported with an ONNX opset that the TensorRT release in JetPack 4.4 cannot parse. Please

  • update JetPack to 4.6
  • or use TAO 3.21.08 to generate the .etlt model again based on your .tlt model.
$ docker pull nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
$ tao lprnet
# lprnet export xxx
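
For reference, a sketch of what the export step inside the container can look like; the model, key, spec, and output paths below are only placeholders for your own files, and the exact options should be checked with “lprnet export --help” for your version:

# lprnet export -m /workspace/exp/lprnet_epoch-24.tlt \
                -k nvidia_tlt \
                -e /workspace/exp/specs/tutorial_spec.txt \
                -o /workspace/exp/lprnet_epoch-24.etlt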

Is this error solved?

@yahiya6006
As mentioned above, please use one of the solutions below.
(1) Update JetPack to 4.6

or

(2) Use TAO 3.21.08 to generate the .etlt model again based on your .tlt model.

$ docker pull nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
$ docker run --runtime=nvidia -it --rm -v yourfolder:dockerfolder  nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3  /bin/bash
# lprnet export xxx
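
Note that “yourfolder:dockerfolder” is a placeholder for docker’s -v host:container volume mapping; substitute your own paths, for example:

$ docker run --runtime=nvidia -it --rm \
    -v /home/user/lprnet_exp:/workspace/lprnet_exp \
    nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3 /bin/bash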

“lprnet export xxx” can export an lprnet.engine file on an x86 server, but I can’t use ‘tao-converter’ to generate the lprnet.engine file on my Jetson device.

Make sure you download the correct version of tao-converter for your Jetson device.
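
For example, you can check which L4T/JetPack and TensorRT release is actually installed on the device (R32.4.x corresponds to JetPack 4.4, R32.6.x to JetPack 4.6):

$ cat /etc/nv_tegra_release
$ dpkg -l | grep -i tensorrt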

I confirm that I downloaded the correct version of tao-converter (for JetPack 4.4) on my Jetson device.

Please share the full command and the full log when you run ‘tao-converter’ on the Jetson device.

tao-converter (118.1 KB)
lprnet_epoch-24.etlt (55.0 MB)
The “tao-converter” binary and my etlt file “lprnet_epoch-24.etlt” have been uploaded.
The command is:

./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 lprnet_epoch-24.etlt -t fp16 -e lprnet.engine

The error log is:

nvidia@xavier:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/yolov5-deepstream-lpr/models/LP/LPR$ ./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 lprnet_epoch-24.etlt -t fp16 -e lprnet.engine
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: axes
Aborted (core dumped)

Hi @jiajingong
May I know if you generated the etlt model with the 3.21.08 docker below?
I have modified my command above to make sure the 3.21.08 docker is used.

$ docker pull nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3
$ docker run --runtime=nvidia -it --rm -v yourfolder:dockerfolder  nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.08-py3  /bin/bash
# lprnet export xxx