Jetson Nano convert tensorflow model to tensorrt

I want to convert my image segmentation model, created with TensorFlow on a Windows machine, to TensorRT so I can use it on my Jetson Nano.

I have tried to use the TF-TRT converter:
from tensorflow.python.compiler.tensorrt import trt_convert as trt
directly on the Nano, but the board is not powerful enough: even after leaving it running for two days it was unable to finish the conversion.

(For this I followed the NVIDIA documentation.)
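For reference, the TF-TRT conversion I attempted follows the pattern from NVIDIA's TF-TRT guide. This is only a sketch: the SavedModel paths are placeholders, and the exact conversion-parameter API varies between TensorFlow 2.x versions.

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Placeholder paths for the input and output SavedModel directories
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode="FP16")
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model",   # original TensorFlow SavedModel
    conversion_params=params,              # FP16 suits the Nano's GPU
)
converter.convert()                        # this is the step that never finishes on the Nano
converter.save("saved_model_trt")          # TF-TRT optimized SavedModel
```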

On my Windows machine I can't convert it, because the TF-TRT conversion library is not available on Windows.

So I am trying to set up Ubuntu with CUDA, cuDNN, TensorRT, and TensorFlow, but I am running into a lot of problems installing TensorFlow with TensorRT support.

Is there any alternative method that I haven’t thought of?


Is it possible to convert the model to ONNX on Windows?
Usually this can be done with the tf2onnx tool.

If so, the ONNX model can be run directly with the TensorRT binary:
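For example, assuming the model is exported as a TensorFlow SavedModel (the paths and opset below are placeholders):

```shell
pip install tf2onnx
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx --opset 13
```

tf2onnx runs fine on Windows, so this step can be done on the original machine and only the resulting .onnx file copied to the Nano.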

$ /usr/src/tensorrt/bin/trtexec --onnx=[filename]


I could try that.

So if I convert it to ONNX, would the model run as a "native" TensorRT model?



Yes, after the conversion, the model can work with TensorRT directly.
No TensorFlow library is required.

