TensorFlow 2.0 model to run on Jetson Nano 2GB

Hello Experts,

I have a Keras model (saved_model.pb) built with TensorFlow 2.0 that I want to run on a Jetson Nano. Do I need to convert this model to a TensorRT model first? Or should I convert it to ONNX and then to a .engine file that the Nano can read directly for inference? Can anybody show me the right path and the right tools to achieve this? I would appreciate some links/examples (preferably in Python) as well.



Please follow this workflow: Keras → ONNX → TensorRT

1. First, please convert the TensorFlow model into ONNX with tf2onnx.

2. Then you can create a TensorRT plan and run inference with the trtexec app.

$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model]
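Putting steps 1 and 2 together, a minimal sketch (the directory and file names and the opset version are assumptions; adjust them for your setup):

```shell
# Step 1: convert the TF2 SavedModel directory to ONNX with tf2onnx
python3 -m tf2onnx.convert --saved-model ./saved_model_dir --output model.onnx --opset 13

# Step 2: build and time a TensorRT engine from the ONNX file
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine
```

`--saveEngine` writes the serialized plan to disk so you can deserialize it later from Python instead of rebuilding the engine on every run.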


Thank you for your reply. I generated the ONNX model, but when I ran trtexec I got the following error:

[03/08/2021-11:14:57] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/08/2021-11:14:58] [E] [TRT] Network has dynamic or shape inputs, but no optimization profile has been defined.
[03/08/2021-11:14:58] [E] [TRT] Network validation failed.
[03/08/2021-11:14:58] [E] Engine creation failed
[03/08/2021-11:14:58] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=model.onnx


This error indicates that your ONNX model doesn’t define a static batch size (it has dynamic shape inputs), and no corresponding shape value was given at run time.
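Regarding the INT64 warning in the log: ONNX models exported from TensorFlow often store shape/index tensors as INT64, and TensorRT casts them down to INT32. This is harmless as long as every value fits in 32 bits, which the following sketch illustrates (the values here are made up, not taken from your model):

```python
import numpy as np

# ONNX models exported from TF often carry INT64 initializers (e.g. shape tensors).
w64 = np.array([1, 224, 224, 3], dtype=np.int64)

# TensorRT's cast to INT32 only loses information if a value exceeds the INT32 range.
assert np.all(np.abs(w64) <= np.iinfo(np.int32).max)
w32 = w64.astype(np.int32)
print(w32.dtype)  # int32
```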

To build the engine with a fixed shape, please specify the input tensor name and dimensions explicitly (you can inspect the input name with a viewer such as Netron):

$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --shapes=[input_name]:[NxHxWxC]
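Since the thread asks for Python: a small sketch that assembles the trtexec command line for a model with a dynamic input. The input name `input_1` and the 1x224x224x3 shape are assumptions; check your model’s actual input with Netron before running this.

```python
def build_trtexec_cmd(onnx_path, input_name, shape, engine_path="model.engine"):
    """Assemble the trtexec command line for an ONNX model with dynamic inputs.

    `shape` is an iterable of ints, e.g. (1, 224, 224, 3); trtexec expects
    the dimensions joined with 'x', as in --shapes=input_1:1x224x224x3.
    """
    dims = "x".join(str(d) for d in shape)
    return [
        "/usr/src/tensorrt/bin/trtexec",
        f"--onnx={onnx_path}",
        f"--shapes={input_name}:{dims}",
        f"--saveEngine={engine_path}",
    ]

cmd = build_trtexec_cmd("model.onnx", "input_1", (1, 224, 224, 3))
# On the Nano you would then run: subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

Building the arguments as a list keeps the call shell-safe and makes the batch size a single parameter you can change per run.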