How to convert a TensorFlow model to TensorRT?

I trained a TensorFlow object detection model using Google Colab. Now I want to run that model on a Jetson Nano. How can I convert the model to TensorRT so I can run it on the Jetson Nano? I am using the SSD ResNet-50 model.

Hi,

Please follow the instruction shared in the below sample:

/usr/src/tensorrt/samples/sampleUffSSD

Thanks.

Can you guide me with clearer instructions? I went to that location but could not see what needs to be done, as no instructions are mentioned there. Second, I tried using TF-TRT, but it gives an error during conversion. I only opened the README.md. I generated my .pb file using exporter_main_v2.py, and I am on TensorFlow 2, so will this work in my case?

Hi,

Please check the README file in the folder.

Thanks.

I tried to follow that, but I don't see any clear instructions for simply converting a TensorFlow model to a TensorRT model. When I try to run the TensorFlow model, I get an insufficient-memory error. Can you suggest some way to run the model on the Jetson Nano? I can run the model on a single image (though that also takes a lot of time), but when I run the same model on a camera stream it runs out of memory. Please help me. The documentation is poor and there are very few tutorials available for the Jetson Nano; it is tempting to return this board.

Hi,

May I know whether you are using the Nano 4GB or the Nano 2GB?
Since the Nano has limited resources, it cannot run inference on a complex model that requires too much memory.

We are sorry that running inference on a TensorFlow model with TensorRT is tricky and not user-friendly.
That's because TensorRT doesn't support TensorFlow models directly but requires an intermediate format.

So if you can convert the model into ONNX format,
you should be able to run it with trtexec and get a performance measurement.

To convert a TensorFlow model into ONNX, you can try the tf2onnx library.
Please remember to export the model with a fixed input size, or you will need to handle dynamic-shape issues later.
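For example, assuming your exported SavedModel is in exported_model/saved_model (adjust the path to your setup), a typical tf2onnx command looks like:

python3 -m tf2onnx.convert --saved-model exported_model/saved_model --output model.onnx --opset 11

The --opset value here is only a common choice for older TensorRT releases; pick one that your TensorRT version supports.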

After that, you can run the ONNX model with our built-in binary trtexec.

/usr/src/tensorrt/bin/trtexec --onnx=[your/file]
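If memory is the bottleneck on the Nano, you can also ask trtexec for reduced precision and save the built engine for reuse, for example (model.engine is just an example file name):

/usr/src/tensorrt/bin/trtexec --onnx=[your/file] --fp16 --saveEngine=model.engine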

Or you can check the topic below for a Python script that runs inference with TensorRT (ONNX input):
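As a rough outline (not the script from that topic), building a TensorRT engine from an ONNX file in Python usually looks something like the sketch below; the file names are placeholders, and the exact API may differ slightly between TensorRT versions:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, workspace_mb=256):
    # ONNX models require an explicit-batch network definition
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    # Keep the workspace small on Nano to avoid running out of memory
    config.max_workspace_size = workspace_mb << 20
    return builder.build_engine(network, config)

# Example usage: build once, then serialize the engine for later runs
engine = build_engine("model.onnx")
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```

Running inference with the serialized engine then needs an execution context and device buffers (for example via pycuda), which is what a full inference script would add on top of this.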

Thanks.

I am using the 4 GB Jetson Nano. I converted my model to ONNX format and then tried running it with ONNX Runtime. When I check the outputs of the ONNX model, they are totally different from the original TensorFlow model's outputs, so it seems the ONNX model is giving wrong output. Any idea why this happens? Does it mean the TensorFlow-to-ONNX conversion is not right?

For example, the TensorFlow output dictionary has entries such as boxes, scores and other elements, whereas the ONNX model's outputs have different names such as identity_0 and so on.

Hi,

You can visualize an ONNX model through this website:
https://netron.app/

It is possible that the output names change in ONNX, since this depends on the converter's implementation.
But if you get the same output data (under the corresponding layer names), you can still feed the model into TensorRT.
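If it helps, here is a small sketch (file paths are placeholders) that lists the output names on both sides, so you can match ONNX outputs such as identity_0 back to the original detection outputs by shape and content:

```python
import onnxruntime as ort
import tensorflow as tf

# Output names reported by the converted ONNX model
sess = ort.InferenceSession("model.onnx")
for out in sess.get_outputs():
    print("onnx:", out.name, out.shape)

# Output names of the original exported SavedModel
saved = tf.saved_model.load("exported_model/saved_model")
fn = saved.signatures["serving_default"]
for name, tensor in fn.structured_outputs.items():
    print("tf:", name, tensor.shape)
```

Feeding the same image through both models and comparing the tensors (for example with numpy.allclose) shows which ONNX output corresponds to which TensorFlow output, even though the names differ.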

Thanks.