Jetson Xavier NX TensorRT compilation error

I'm currently trying to compile the off-the-shelf LPRNet on a Xavier NX board. I downloaded us_lprnet_baseline18_deployable.onnx from NGC.

When running the command below:

docker container run --rm --net=host --runtime nvidia -v ~/models:/models nvcr.io/nvidia/l4t-tensorrt:r8.0.1-runtime /usr/src/tensorrt/bin/trtexec --onnx=/models/us_lprnet_baseline18_deployable.onnx --fp16 --saveEngine=/models/plate_recognition_fp16.engine

the following error is output:

[10/29/2025-18:16:24] [E] Error[1]: [codeGenerator.cpp::compileGraph::476] Error Code 1: Myelin (Cublas Error: CublasLt, Op desc creation failed)

[E] Error[2]: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)

The error is not produced when running the other TAO ONNX files in a similar fashion.

JetPack: 4.6.2
Revision: 7.2
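For reference, these are standard Jetson commands to confirm the versions on the board (JetPack 4.6.2 corresponds to L4T r32.7.2; exact output will vary by install):

```shell
# L4T release string (JetPack 4.6.2 ships as L4T r32.7.2)
cat /etc/nv_tegra_release

# TensorRT packages installed on the host side
dpkg -l | grep -i nvinfer
```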

Hi Stefan,

I have moved your post to the Jetson Xavier forum for better visibility.

Thanks,

AHarpster

Thanks for moving it!

I’ll add some additional context. I don’t particularly care which TAO/TLT version I train this in to get a working model moved over. From reading and exploring, it appears I’m sometimes running into a memory error, and LPRNet happens to be a bit particular about which versions it operates in.

If I’m targeting JetPack 4.6.2, what is the most stable LPRNet training path I can use to replace the model?

From my understanding, I can take the LPRNet baseline and train it with some additional data, which ideally outputs a .tlt. I haven’t seen or confirmed that HDF5 will work for LPRNet. From there, .tlt → .etlt via tao export on the training machine, then transfer the .etlt to the Xavier NX and run tao-converter, which should produce an engine.
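Sketched as commands, that path looks roughly like the following. Keys and file names are placeholders; the `image_input` tensor name and 3x48x96 input shape follow NVIDIA's published LPRNet examples, but verify them against the model card for the version you train:

```shell
# On the training machine: export the trained .tlt to .etlt
# (syntax per TAO 3.x lprnet export; key and paths are placeholders)
tao lprnet export -m lprnet_trained.tlt \
                  -k <your_encryption_key> \
                  -o lprnet.etlt

# On the Xavier NX: build the engine with the tao-converter build
# that matches JetPack 4.6.x. Min/opt/max batch sizes are examples.
./tao-converter lprnet.etlt \
    -k <your_encryption_key> \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    -t fp16 \
    -e lprnet_fp16.engine
```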

JetPack 4.6.2 can’t really change, from my understanding of how the rest of the application runs, but I can try to align whatever .etlt I put in there to replace the baseline.etlt being used.

I’ve used TAO 3.0 (TLT) to build an .etlt, but the export fails with an image_mean error.
With TAO 5.5, when playing with the 5.5 ONNX files, I had tried just direct engine builds, which resulted in the error above. Changing workspace size flags or input dimensions doesn’t change anything; I suspect it’s just a fundamental mismatch.
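For completeness, the kind of variations I tried looked like this (flag names as in TensorRT 8.0's trtexec; the input tensor name and shape are assumptions based on the LPRNet baseline, so adjust to whatever the ONNX actually declares):

```shell
# Larger builder workspace (MB in TRT 8.0) and an explicit input shape
/usr/src/tensorrt/bin/trtexec \
    --onnx=/models/us_lprnet_baseline18_deployable.onnx \
    --fp16 \
    --workspace=2048 \
    --shapes=image_input:4x3x48x96 \
    --saveEngine=/models/plate_recognition_fp16.engine
```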

Happy for any feedback or direction on tracking this down, and to hear whether other TAO packages are needed (deploy, converter, etc.).