Please provide the following information when requesting support.
• Hardware (Xavier, JetPack 4.4)
• Network Type (LPRNet)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file (If you have one, please share it here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
After retraining on my license plate data in ‘lprnet.ipynb’, I got the file ‘lprnet_epoch-24.etlt’. But when I convert it to an engine file with ‘tao-converter’ on the Jetson, the following error occurs:
nvidia@xavier:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/yolov5-deepstream-lpr/models/LP/LPR$ ./tao-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 lprnet_epoch-24.etlt -t fp16 -e lprnet.engine
[WARNING] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
  what(): Attribute not found: axes
Aborted (core dumped)