Running custom trained models on Jetson Nano


I have been thinking about this for a while. I was able to run the pre-installed models in jetson-inference for image segmentation and object detection using calls like jetson.inference.detectNet() and jetson.inference.segNet(). As a next step, I would like to apply transfer learning to these tasks and develop a model in TensorFlow or PyTorch suited to my use case. I know that the model must be converted to ONNX for use with TensorRT.

My question is: after the conversion, how do I go about using the model for inference? Are the inference calls mentioned above suitable for the task, or do I need to use the TensorRT framework directly? I would really appreciate it if you could point me toward any resources I could look into.

Thank you