What is the procedure to convert a TensorFlow model to TensorRT for running inference on Jetson Nano?

Hi,
I want to run ResNet50 (pretrained) inference on a Jetson Nano using TensorRT.
Can anyone guide me through the step-by-step procedure to convert the model to TensorRT?
Do I have to do the conversion on a host machine and then port the converted TensorRT model to the Jetson, or do I have to do the conversion on the Jetson itself?
If it has to be done on the Jetson, how can I do it using the NGC containers? Which container do I need to use?

Thanks in advance

Hi,

Please convert the model into ONNX format with tf2onnx first.
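
For example, assuming you export the pretrained Keras ResNet50 as a SavedModel first, the conversion could look like this (the file names and opset below are just placeholders, please adjust them for your setup):

$ python -c "import tensorflow as tf; tf.keras.applications.ResNet50(weights='imagenet').save('resnet50_saved_model')"
$ python -m tf2onnx.convert --saved-model resnet50_saved_model --output resnet50.onnx --opset 13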
After that, you can deploy the model with TensorRT directly via the following command. Note that this step should be run on the Jetson itself, since a TensorRT engine is specific to the GPU and TensorRT version it is built with; only the ONNX export can be done on another machine.

$ /usr/src/tensorrt/bin/trtexec --onnx=[model]
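
By default, trtexec only builds the engine and benchmarks it. To serialize the engine for reuse (so it is not rebuilt on every launch) and to enable FP16 precision, which usually helps on the Nano, you can add flags such as the following (file names are placeholders):

$ /usr/src/tensorrt/bin/trtexec --onnx=resnet50.onnx --saveEngine=resnet50.engine --fp16

The saved engine can then be loaded by the TensorRT runtime for inference without rebuilding it each time.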

Thanks.
