Jetson Nano - Converting TensorFlow .pb model to TensorRT or TF-Lite model?

I managed to train an Object Detection Model on the Jetson Nano, using this guide:

https://medium.com/object-detection-using-tensorflow-and-coco-pre/object-detection-using-tensorflow-and-coco-pre-trained-models-5d8386019a8

I installed the official TensorFlow for the Nano.

After training for 50k steps, I have a model that performs reasonably well. Now I want to run this model on the Nano, and I'd like to run it through TensorRT so that I'm using the framework that is optimized for the Nano. However, there doesn't seem to be a clear-cut guide on how to do the conversion.

Any help will be very much appreciated.

Hi,

You can follow the steps shared in this GitHub:
https://github.com/AastaNV/TRT_object_detection

Thanks.

Okay, so if I understood correctly: I'm supposed to train one of the supported models (ssd_inception_v2_coco_2017_11_17, ssd_mobilenet_v1_coco, or ssd_mobilenet_v2_coco), modify the graphsurgeon node_manipulation.py file as the README describes (something like the snippet below), and then follow the steps under "Run" to convert my newly trained .pb TensorFlow model?
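
For anyone else reading this, as far as I can tell the node_manipulation.py change is a single added line that sets a dtype attribute inside create_node. Quoting roughly from memory (check the repo's README for the exact diff):

# graphsurgeon's node_manipulation.py with the one added line from the README patch
# (paraphrased from memory -- the exact diff is in the repo)
def create_node(name, op=None, _do_suffix=False, **kwargs):
    node = NodeDef()
    node.name = name
    node.op = op if op else name
    node.attr["dtype"].CopyFrom(tf.AttrValue(type=1))   # <-- the added line
    for key, val in kwargs.items():
        set_field(node, key, val)
    return node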

I tried this on a stock ssd_mobilenet_v2_coco model, and it produced a result.jpg, a tmp.uff, and a TRT_ssd_mobilenet_coco_2018_03_29.bin.

Is that last file the one I'm looking for?

Hi,

Yes, the .bin file is the serialized TensorRT engine.
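
For context, the conversion that the repo's main.py performs looks roughly like the sketch below. This is a minimal outline only: the file names, the 300x300 input dimensions, and the 'NMS' output node are assumptions based on the SSD configs, and the graphsurgeon plugin mapping that the repo's config module applies before the UFF step is omitted here.

import tensorrt as trt
import uff
import graphsurgeon as gs

# hypothetical paths/names -- substitute your own frozen graph and output node
PB_PATH = 'frozen_inference_graph.pb'
OUTPUT_NODE = 'NMS'

# 1. frozen TensorFlow graph -> UFF
#    (the repo's config script remaps unsupported ops to TensorRT plugins first)
dynamic_graph = gs.DynamicGraph(PB_PATH)
uff_model = uff.from_tensorflow(dynamic_graph.as_graph_def(), [OUTPUT_NODE],
                                output_filename='tmp.uff')

# 2. UFF -> serialized TensorRT engine (.bin)
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28
    builder.max_batch_size = 1
    parser.register_input('Input', (3, 300, 300))   # CHW dims for a 300x300 SSD
    parser.register_output(OUTPUT_NODE)
    parser.parse('tmp.uff', network)
    engine = builder.build_cuda_engine(network)
    with open('TRT_ssd_mobilenet_v2.bin', 'wb') as f:
        f.write(engine.serialize())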

You can load that .bin back and deserialize it into a TensorRT engine like this:

# deserialize the serialized engine (.bin) produced by main.py
import tensorrt as trt

runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
with open(model.TRTbin, 'rb') as f:
    buf = f.read()
    engine = runtime.deserialize_cuda_engine(buf)
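
Once you have the engine, running inference looks roughly like the sketch below. It assumes a single input and a single output binding for simplicity; the SSD detection engines expose more than one output binding, so in practice allocate a buffer per binding.

import numpy as np
import pycuda.autoinit   # creates a CUDA context on import
import pycuda.driver as cuda

context = engine.create_execution_context()

# allocate host/device buffers sized from the engine's bindings
h_input = np.zeros(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = np.zeros(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)

# copy the preprocessed image in, run the engine, copy the detections out
cuda.memcpy_htod(d_input, h_input)
context.execute(batch_size=1, bindings=[int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)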

Thanks.

Sorry for the bump, but I tried doing this again. I was going through the dependency installation as described here: https://github.com/AastaNV/TRT_object_detection#install-dependencies

But when I ran that, I got a 404 Not Found error for this URL:
https://developer.download.nvidia.com/compute/redist/jp/v42/tensorboard

Any remedies for this one?