Where is sample code to use an exported *.tlt model in C++?

For a .tlt model, you can run inference with tao inference inside the docker.
You can also export the .tlt model to an .etlt model and then deploy the .etlt model with DeepStream.

Additionally, you can generate a TensorRT engine for inference. For LPRNet, as in the link you mentioned above, run the application to do inference.

For standalone inference with an LPRNet TensorRT engine, refer to Python run LPRNet with TensorRT show pycuda._driver.MemoryError: cuMemHostAlloc failed: out of memory - #8 by Morganh or Not Getting Correct output while running inference using TensorRT on LPRnet fp16 Model.
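As a rough illustration of the standalone path, here is a minimal Python sketch that deserializes an LPRNet TensorRT engine and runs one image through it, using the classic bindings-based TensorRT Python API (TensorRT 7/8) plus pycuda, similar to what the linked threads discuss. The file names, the 3x48x96 input shape, the scale-to-[0, 1] preprocessing, and the assumption that binding 0 is the input are illustrative only; adjust them to match your exported model and training spec.

```python
# Minimal sketch, not the exact code from the linked threads.
# Assumptions: fixed-shape (non-dynamic) engine, binding 0 is the image input,
# input shape 3x48x96, simple /255.0 preprocessing.
import numpy as np
import cv2
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context

ENGINE_PATH = "lprnet_fp16.engine"   # assumed path to your serialized engine
IMAGE_PATH = "plate.jpg"             # assumed test image
INPUT_SHAPE = (3, 48, 96)            # assumed LPRNet input (C, H, W); check your spec file

logger = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine and create an execution context
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate pagelocked host buffers and device buffers for every binding
host_bufs, dev_bufs, bindings = [], [], []
stream = cuda.Stream()
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# Preprocess: resize to W x H, HWC -> CHW, scale to [0, 1] (match your training spec)
img = cv2.imread(IMAGE_PATH)
img = cv2.resize(img, (INPUT_SHAPE[2], INPUT_SHAPE[1]))
img = img.transpose(2, 0, 1).astype(np.float32) / 255.0
np.copyto(host_bufs[0], img.ravel())

# Copy input to device, run inference, copy outputs back
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()

# Raw output tensors are now in host_bufs[1:]
for name, buf in zip(list(engine)[1:], host_bufs[1:]):
    print(name, buf[:16])
```

Decoding the raw outputs back into license plate characters depends on the character list used during training; see the linked threads for the full pre/post-processing details.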