How to perform inference with RetinaNet in Python using a TLT-exported .engine file

I used TLT v2.0 to train a RetinaNet ResNet-50 model and exported a .engine file.
I don't need DeepStream.
How do I perform inference in Python?
I have only found an SSD model inference sample, and it is very old.

Officially, TLT provides tlt-infer for inference; it runs inference against a .tlt model.
Some detection networks, such as detectnet_v2 and faster_rcnn, also document a command for running inference against a TRT engine. See the TLT user guide or the Jupyter notebooks for more details.
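
For illustration, a tlt-infer run for RetinaNet looks roughly like the command below. The flag set follows the TLT 2.0 RetinaNet workflow, but treat the exact flags, paths, spec file, and $KEY as assumptions to verify against your own notebook:

```
tlt-infer retinanet -e /workspace/specs/retinanet_train.txt \
                    -m /workspace/models/retinanet_resnet50.tlt \
                    -i /workspace/test_images \
                    -o /workspace/infer_output \
                    -k $KEY
```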

If an end user wants to run inference against a TRT engine without tlt-infer or DeepStream, they need to write their own code.
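
Below is a minimal sketch of such code using the TensorRT Python API and pycuda, assuming the TensorRT 7 runtime of the TLT 2.0 era, an engine file named retinanet.engine, and that binding 0 is the image input. The preprocessing, batch handling, and binding order are assumptions you must match to your own export:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    # Deserialize the .engine file produced by tlt-export or tlt-converter.
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("retinanet.engine")  # assumed file name
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate one page-locked host buffer and one device buffer per binding.
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i)) * engine.max_batch_size
    dtype = trt.nptype(engine.get_binding_dtype(i))
    h = cuda.pagelocked_empty(size, dtype)
    d = cuda.mem_alloc(h.nbytes)
    host_bufs.append(h)
    dev_bufs.append(d)
    bindings.append(int(d))

# Binding 0 is assumed to be the input; preprocess exactly as in training
# (resize, channel order, mean/scale). A random image stands in here.
in_shape = tuple(engine.get_binding_shape(0))
image = np.random.rand(*in_shape).astype(trt.nptype(engine.get_binding_dtype(0)))
host_bufs[0][: image.size] = image.ravel()

cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
# UFF-based TLT 2.0 engines use an implicit batch dimension, hence
# execute_async(); an explicit-batch engine needs execute_async_v2().
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()

# host_bufs[1:] now hold the raw network outputs, ready for post-processing.
```

If the engine was exported with the NMS plugin (as TLT RetinaNet exports typically are), the output buffers already contain decoded detections; otherwise you must decode the anchors and run NMS yourself, as in the references below.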

Reference topics:
For the classification network:

For the detectnet_v2 network:

Reference post-processing:
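
If your engine's outputs are raw per-class scores and box coordinates rather than already-suppressed detections, you need a greedy NMS step. The sketch below is the standard algorithm in NumPy, not TLT-specific code; the score and IoU thresholds are arbitrary example values:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5, score_thresh=0.3):
    """Greedy non-maximum suppression.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of confidences
    Returns the indices of the boxes to keep.
    """
    idxs = np.where(scores >= score_thresh)[0]
    # Process candidates in descending score order.
    order = idxs[np.argsort(scores[idxs])[::-1]]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the winning box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Keep only boxes that overlap the winner less than the threshold.
        order = rest[iou < iou_thresh]
    return np.array(keep, dtype=np.int64)
```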

How do I run tlt-infer on the Jetson NX?

Reference: Deploying to DeepStream — Transfer Learning Toolkit 2.0 documentation

tlt-infer itself runs inside the TLT container on x86, not on Jetson. Instead, please copy the .etlt model to the Jetson NX, write a DeepStream config file for it, and then run inference with DeepStream.
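
A minimal nvinfer configuration for a TLT RetinaNet .etlt model might look like the following. The property keys come from the DeepStream 5.0 TLT integration (see the deepstream_tlt_apps samples), but the input dimensions, class count, key, and file paths below are placeholder assumptions you must replace:

```
[property]
gpu-id=0
net-scale-factor=1.0
tlt-encoded-model=retinanet_resnet50.etlt
tlt-model-key=<your export key>
labelfile-path=labels.txt
uff-input-dims=3;544;960;0
uff-input-blob-name=Input
batch-size=1
network-mode=2
num-detected-classes=4
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so
```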

Or:
Copy the .etlt model to the Jetson NX, use tlt-converter to generate a TRT engine on the device, configure the DeepStream config file, and then run inference with DeepStream.
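
A typical tlt-converter invocation on the Jetson might look like this; -k, -d, -o, -t, -m, and -e are documented tlt-converter options, but the input dimensions, output node name (NMS), and file names below are assumptions to check against your export:

```
./tlt-converter retinanet_resnet50.etlt \
    -k <your export key> \
    -d 3,544,960 \
    -o NMS \
    -t fp16 \
    -m 1 \
    -e retinanet_resnet50.engine
```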