Where are sample codes to use an exported *.tlt in C++?

Please provide the following information when requesting support.

• Hardware: Nano
• Network Type: LPRNet
• TLT Version: TAO 3.0
• Training spec file (if you have one, please share it here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

After studying the TAO examples, I can obtain some *.tlt weight files. But I still have no idea how to use these weight files for inference. How do I use the *.tlt files? Are there any C++ examples that use the weight files?

For example, the following command exports a model file for LPRNet.

tao lprnet export -m /workspace/lprnet/weights/lprnet_epoch-24.tlt -k nvidia_tlt -e /workspace/lprnet_spec.txt

Where can I find C++ sample code that uses the exported weights for inference?

The sample code above uses DeepStream with video input/output. That is too complex for us. Is there any C++ sample code that takes just one image and produces the corresponding output?

For a .tlt model, you can run inference with tao lprnet inference inside the docker.
You can also export the .tlt model to a .etlt model, then deploy the .etlt model with DeepStream.
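For example (the exact flags can vary between TAO versions, so please check tao lprnet inference --help; the image directory below is only a placeholder):

tao lprnet inference -m /workspace/lprnet/weights/lprnet_epoch-24.tlt -i /workspace/test_images -e /workspace/lprnet_spec.txt -k nvidia_tlt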

In addition, you can also generate a TensorRT engine for inference. For LPRNet, just like the link you mentioned above, run the application to do inference.
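On a Jetson device the engine is usually generated with the Jetson build of tao-converter, roughly as in the deepstream_lpr_app README for the US pretrained model. The input name image_input, the 3x48x96 shape, and the file names below are assumptions for that model and may differ for your own export:

./tao-converter -k nvidia_tlt -t fp16 -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 -e lpr_us.engine us_lprnet_baseline18_deployable.etlt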

For standalone inference with an LPRNet TensorRT engine, refer to Python run LPRNet with TensorRT show pycuda._driver.MemoryError: cuMemHostAlloc failed: out of memory - #8 by Morganh or Not Getting Correct output while running inference using TensorRT on LPRnet fp16 Model

@Morganh

Is the Python code the only method for standalone inference with an LPRNet TensorRT engine?
Because we want to integrate LPRNet with our existing C++ code on the Jetson Nano, C++ sample code is preferred.

No, C++ can certainly be used. You can leverage GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream.
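If DeepStream is more than you need, the same TensorRT engine can also be run on a single image directly from C++ with the TensorRT API. Below is a minimal sketch, not an official NVIDIA sample: the binding names (image_input, tf_op_layer_ArgMax, tf_op_layer_Max), the 3x48x96 input shape, the 24-step output sequence and the preprocessing are assumptions based on the public US LPRNet model and the TensorRT 8.x API, so please verify them against your own export.

// Minimal single-image TensorRT inference sketch for an LPRNet engine built with tao-converter.
// Everything marked "assumed" is not from this thread and must be checked against your model.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger {
 public:
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
  }
};

int main() {
  // 1. Read the serialized engine from disk (path is only an example).
  std::ifstream f("lprnet.engine", std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(f)), std::istreambuf_iterator<char>());

  Logger logger;
  nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
  nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), blob.size());
  nvinfer1::IExecutionContext* context = engine->createExecutionContext();

  // 2. Look up bindings by name (assumed names; list them with engine->getBindingName(i)).
  const int in   = engine->getBindingIndex("image_input");
  const int seq  = engine->getBindingIndex("tf_op_layer_ArgMax");
  const int prob = engine->getBindingIndex("tf_op_layer_Max");
  // Needed when the engine was built with a dynamic -p profile (assumed 1x3x48x96 input).
  context->setBindingDimensions(in, nvinfer1::Dims4{1, 3, 48, 96});

  const int inputLen = 3 * 48 * 96;  // assumed CHW input size
  const int seqLen = 24;             // assumed number of output time steps

  // 3. Host buffers. Fill "input" with your preprocessed plate crop
  //    (resized to 96x48, CHW float, scaled to [0,1] -- assumed preprocessing).
  std::vector<float> input(inputLen, 0.f);
  std::vector<int32_t> indices(seqLen);
  std::vector<float> scores(seqLen);

  void* buffers[3];
  cudaMalloc(&buffers[in],   inputLen * sizeof(float));
  cudaMalloc(&buffers[seq],  seqLen * sizeof(int32_t));
  cudaMalloc(&buffers[prob], seqLen * sizeof(float));
  cudaMemcpy(buffers[in], input.data(), inputLen * sizeof(float), cudaMemcpyHostToDevice);

  // 4. Run synchronous inference and copy the results back to the host.
  context->executeV2(buffers);
  cudaMemcpy(indices.data(), buffers[seq],  seqLen * sizeof(int32_t), cudaMemcpyDeviceToHost);
  cudaMemcpy(scores.data(),  buffers[prob], seqLen * sizeof(float),   cudaMemcpyDeviceToHost);

  // 5. Print the raw per-step class indices. To get the plate string, collapse repeated
  //    indices, drop the blank class, and map the rest through the character list from
  //    your training spec (the same greedy decode as in the Python threads linked above).
  for (int i = 0; i < seqLen; ++i) std::cout << indices[i] << " ";
  std::cout << std::endl;

  cudaFree(buffers[in]); cudaFree(buffers[seq]); cudaFree(buffers[prob]);
  return 0;
}

The preprocessing and the greedy decode are the parts most likely to differ from this sketch, so compare them with the linked Python examples and with your training spec before trusting the output.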

Thank you. I’ll study the DeepStream code to find out how to run inference on a single picture rather than a video stream.