I realize this question has probably been asked before, but I cannot find a conclusive answer anywhere. I would like some clarification on the capabilities of TF-TRT vs. TensorRT.
I have a MobileNet model in .tflite format. How can I run it on the Jetson Nano?
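For context, this is how I currently run the model with the plain TFLite interpreter (CPU only); the model path is a placeholder for my actual file:

```python
import numpy as np
import tensorflow as tf

# Placeholder path to my actual model file.
interpreter = tf.lite.Interpreter(model_path="mobilenet.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
shape = tuple(input_details[0]["shape"])
dummy = np.random.random_sample(shape).astype(input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```

This works, but it obviously does not use the GPU, which is why I am looking at TF-TRT or TensorRT.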
I don’t know whether a .tflite model can be converted to .uff. Is that possible?
I also don’t know whether I can run it with TF-TRT instead, and if so, whether that would be slower than native TensorRT.
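My understanding of the TF-TRT workflow is roughly the sketch below, but it starts from a SavedModel directory, which I cannot produce from my .tflite file. The directory names and FP16 precision are just placeholders, and the exact API varies between TF versions:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Assumes a SavedModel directory, which I do NOT have -- I only have a .tflite file.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="mobilenet_saved_model",  # placeholder
    conversion_params=params)
converter.convert()
converter.save("mobilenet_tftrt")  # placeholder output directory
```

So even the TF-TRT route seems to require getting out of the .tflite format first.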
The Stack Overflow link suggests using TOCO to convert .tflite back to .pb, but the consensus there appears to be that TOCO no longer supports this direction (it exits with an error).
Is there another way to go from .tflite to .pb, and then from .pb through .uff to a TensorRT engine?
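If such a path exists, I assume the final .pb → .uff → engine step would look roughly like this (the input/output node names and shapes are placeholders; I cannot test it because I cannot produce the .pb in the first place):

```python
import uff
import tensorrt as trt

# Step 1: frozen GraphDef (.pb) -> UFF. "logits" is a placeholder output node name.
uff.from_tensorflow_frozen_model(
    "mobilenet.pb",
    output_nodes=["logits"],
    output_filename="mobilenet.uff",
)

# Step 2: parse the UFF file and build a serialized TensorRT engine.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))  # placeholder name/shape
    parser.register_output("logits")               # placeholder name
    parser.parse("mobilenet.uff", network)
    builder.max_workspace_size = 1 << 28
    engine = builder.build_cuda_engine(network)
    with open("mobilenet.engine", "wb") as f:
        f.write(engine.serialize())
```

Please correct me if this is not the right way to build the engine.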
I am trying to use the DeepStream 5.0 SDK to accelerate the inference time of a .tflite model on a Jetson Xavier NX. Thank you.