Jetson Nano - Converting TensorFlow .pb model to TensorRT or TF-Lite model?

I managed to train an Object Detection Model on the Jetson Nano, using this guide:

I installed the official TensorFlow for the Nano.

After training it to 50k steps, I have a model that is pretty well trained. Now I want to run this model on the Nano, and I plan on running it with TensorRT so I can use the framework that is optimized for the Nano. However, there doesn't seem to be a clear-cut guide on how to do the conversion.

Any help will be very much appreciated.


You can follow the steps shared in this GitHub:


Okay, so if I got it right: I'm supposed to train one of the supported models (ssd_inception_v2_coco_2017_11_17, ssd_mobilenet_v1_coco, or ssd_mobilenet_v2_coco), modify the graphsurgeon converter file, and then follow everything under "RUN" to convert my newly trained .pb TensorFlow model?
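For reference, the graphsurgeon converter file in the public SSD samples is a small Python config that maps the TensorFlow NMS subgraph onto a TensorRT plugin node. A minimal sketch of the part you would typically edit looks like this; the parameter names follow the stock ssd_mobilenet_v2_coco sample, and the exact values (especially numClasses) are assumptions you must match to your own training setup:

```python
import graphsurgeon as gs

# Plugin node that replaces TensorFlow's post-processing subgraph with
# TensorRT's NMS plugin. The values below are the stock COCO defaults;
# for a custom-trained model, numClasses must be changed to your own
# class count (plus one for the background class).
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    numClasses=91,        # ASSUMPTION: COCO default, change for your model
    topK=100,
    keepTopK=100,
    scoreThreshold=0.3,
    iouThreshold=0.6,
)
```

This is a config fragment, not a complete converter file; the rest of the sample (the namespace-to-plugin mapping and the preprocessing of the frozen graph) stays as in the GitHub repo.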

I tried this on a stock ssd_mobilenet_v2_coco model, and it produced a result.jpg, a tmp.uff, and a TRT_ssd_mobilenet_coco_2018_03_29.bin.

Is the last file the one I'm looking for?



Yes, that .bin file is the serialized TensorRT engine. You can deserialize it at runtime like this:

import tensorrt as trt

# deserialize the serialized engine from disk
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)
with open(model.TRTbin, 'rb') as f:
    buf = f.read()
    engine = runtime.deserialize_cuda_engine(buf)
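Once the engine is deserialized, inference roughly follows this pattern (pseudocode only; the exact buffer-allocation calls depend on whether you use pycuda or the CUDA runtime directly):

```
context = engine.create_execution_context()
allocate page-locked host buffers and device buffers for each engine binding
copy the preprocessed input image into the input host buffer
transfer input host buffer -> input device buffer
run the context on the list of device bindings
transfer output device buffers -> host
parse detections (class id, confidence, box) from the output buffer
```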


Sorry for the bump, but I tried doing this again. I was going through the installation of dependencies as indicated here:

But when I ran that, I got a 404 Not Found Error with this url:

Any remedies for this one?