How can I run a TLT-trained model with the jetson-inference library?

I have the same TLT-trained model exported in different formats:
trained.tlt, trained.trt, and trained.engine

Which of these can I use with the jetson-inference library?
I tried trained.engine; it loads and runs, but the outputs are incorrect.
How can I run a TLT-trained model with jetson-inference on an AGX Xavier?

Hi @edit_or,
You can find the TLT deployment details for the model you are using at the link below:
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#deepstream_deployment
Thanks!
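One likely reason the engine "runs but the outputs are not correct" is post-processing: jetson-inference's detectNet expects its own output-tensor layout, while a TLT detection model (e.g. DetectNet_v2) typically emits a per-class coverage grid plus a bounding-box grid that need custom decoding and clustering. Below is a minimal sketch of that kind of grid decoding. To be clear, this is illustrative only: the function name, tensor shapes, stride, and threshold are assumptions for this example, not the exact TLT output format — consult the deployment guide linked above for the real spec.

```python
import numpy as np

def decode_detectnet_v2(cov, bbox, stride=16, threshold=0.5):
    """Decode DetectNet_v2-style grid outputs into boxes (illustrative sketch).

    cov:  (num_classes, H, W) coverage/confidence map, one value per grid cell
    bbox: (num_classes * 4, H, W) box offsets (left, top, right, bottom) per cell

    Shapes, stride, and offset convention here are assumptions for this example.
    """
    num_classes, grid_h, grid_w = cov.shape
    boxes = []
    for c in range(num_classes):
        # Keep only grid cells whose coverage exceeds the threshold
        ys, xs = np.where(cov[c] > threshold)
        for y, x in zip(ys, xs):
            # Map the grid cell back to pixel coordinates in the input image
            cx, cy = x * stride, y * stride
            x1 = cx - bbox[c * 4 + 0, y, x]
            y1 = cy - bbox[c * 4 + 1, y, x]
            x2 = cx + bbox[c * 4 + 2, y, x]
            y2 = cy + bbox[c * 4 + 3, y, x]
            boxes.append((c, float(cov[c, y, x]), x1, y1, x2, y2))
    return boxes
```

If jetson-inference applies its built-in post-processing to tensors laid out like this, the boxes come out garbled even though inference itself succeeds, which matches the symptom described above. The officially supported path for TLT models is DeepStream, per the link in the reply.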