Running a TensorFlow 1 model on TensorRT

I have been training an AI model in TensorFlow 1.15.2 for a couple of years now for rail purposes, but this work was started before I came onto the project and there have been large advancements since then. I was wondering how easy it would be to convert the TensorFlow 1 model so that it can be optimised and run using TensorRT.

I have been using Dusty-NV’s jetson-inference code for object detection on another project, and I know that it can run standard pre-trained models fairly well on the Nano’s hardware. I am seeing comparable performance from the TensorFlow 1 model, but running on a Jetson Xavier NX.

By optimising the model with TensorRT to leverage more of the GPU’s performance, I think it could run better on the Xavier without having to train a new model for TensorRT from the ground up.

So, my question is: how does one go about converting a TensorFlow 1 model so that it can be used within TensorRT?


For a TensorFlow 1.15 model, you can try the TF -> UFF -> TensorRT workflow.
Below is a sample based on the SSD model for your reference:
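As a rough sketch of that workflow (the file names, output node name, and input shape below are placeholders for a typical SSD detection graph; adjust them to match your own model):

```shell
# Assumes a Jetson with JetPack, where the TensorRT tools and the 'uff'
# Python package are installed. All paths/node names are examples only.

# 1. Freeze your TF1 graph to a .pb, then convert it to UFF.
#    convert-to-uff ships with TensorRT's 'uff' Python package;
#    -O marks the output node, -p points at a graphsurgeon
#    preprocessing script (the SSD sample provides one).
convert-to-uff frozen_inference_graph.pb \
    -o model.uff \
    -O NMS \
    -p config.py

# 2. Build (and benchmark) a TensorRT engine from the UFF file.
/usr/src/tensorrt/bin/trtexec \
    --uffFile=model.uff \
    --uffInput=Input,3,300,300 \
    --output=NMS \
    --fp16 \
    --saveEngine=model.engine
```

On JetPack the full SSD reference, including the graphsurgeon config, is in the TensorRT samples directory (typically /usr/src/tensorrt/samples/sampleUffSSD), which is worth following step by step for your own graph.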