Is there a guide or pre-built `.a` library to support TensorFlow Lite inference on the Jetson AGX? The official TensorFlow documentation only covers ARMv7 (Raspberry Pi).
We recommend using our TensorRT as the inference engine.
We have optimized it for the Jetson architecture.
May I know why you want to use TensorFlow Lite rather than TensorRT?
I want to benchmark the performance of TensorRT against some of the native TF Lite optimizations, such as quantization, to see where one performs better than the other and vice versa.
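For a fair engine-vs-engine comparison, the timing harness can be kept independent of either runtime. Below is a minimal sketch of such a harness; the `run_inference` callable is a hypothetical placeholder standing in for one forward pass on whichever engine is under test (e.g. a TF Lite `Interpreter.invoke()` or a TensorRT execution context).

```python
import time
import statistics

def benchmark(run_inference, warmup=5, iters=50):
    """Time a single-inference callable; return (mean, stdev) latency in ms.

    run_inference: zero-argument callable performing one forward pass.
    """
    # Discard warm-up iterations (cache fills, lazy init, clock ramp-up).
    for _ in range(warmup):
        run_inference()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1e3)
    return statistics.mean(samples), statistics.stdev(samples)

# Dummy workload standing in for a real model invocation:
mean_ms, stdev_ms = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"mean {mean_ms:.3f} ms +/- {stdev_ms:.3f} ms")
```

On Jetson, remember to pin clocks (e.g. with `jetson_clocks`) before timing, or the dynamic frequency governor will dominate the variance.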
We don’t build TensorFlow Lite for Jetson.
You will need to build it from source for your use case.
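Since the Jetson AGX is aarch64, a native build on the board itself is one option. The sketch below follows TF Lite's build-from-source workflow as of the 1.x/2.x Makefile-based builds; the exact script names and output paths vary by TensorFlow version, so check the scripts present in your checkout.

```shell
# Build libtensorflow-lite.a natively on the AGX (aarch64).
# Script paths below are from the Makefile-based TF Lite build and
# may differ in newer TensorFlow releases.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

# Fetch TF Lite's vendored dependencies (flatbuffers, etc.)
./tensorflow/lite/tools/make/download_dependencies.sh

# Build the aarch64 static library; running natively on the Jetson,
# no cross-compilation toolchain is needed.
./tensorflow/lite/tools/make/build_aarch64_lib.sh

# Typical output location (version-dependent):
# tensorflow/lite/tools/make/gen/*/lib/libtensorflow-lite.a
```

Newer TensorFlow releases replace these scripts with a CMake-based build (`cmake tensorflow/lite` from a separate build directory), which is the documented path going forward.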