PyTorch vs TensorFlow for TensorRT

I’m planning to use the Jetson Nano for speech recognition tasks and I want the fastest response possible.

I’m comfortable with PyTorch, so I thought of converting a custom-trained model to TensorRT via ONNX. But then I learned that TensorFlow has TensorFlow Lite as well as integrated TensorRT support (TF-TRT), which made me reconsider using PyTorch.

I learned TensorFlow when I first studied deep learning, so I don’t think it would be hard to pick it up again.

Would TensorFlow be faster, and would using TensorFlow Lite be practical on a Jetson Nano?

Hi,

It’s recommended to use pure TensorRT rather than TensorFlow Lite or TF-TRT.
We have optimized TensorRT for the Jetson platform, so it has better performance and uses less memory.

There are some related TensorRT samples for your reference:
ONNX: /usr/src/tensorrt/samples/sampleOnnxMNIST/
RNN(TF): /usr/src/tensorrt/samples/sampleCharRNN

Thanks.
