How to run a TFLite model on Jetson Nano

I realize this question has probably been asked before. However, I cannot find a conclusive answer anywhere. I would like some clarification on the capabilities of TF-TRT vs. TensorRT.

I have a TFLite version of a MobileNet model. How can I run it on the Jetson Nano?

  • I don’t know if I can convert tflite to uff. Is it possible?
  • I don’t know if I can run this using TF-TRT either. Is this going to be slower than TensorRT?

What should I do in this case?

Hi,

1. We believe so, yes.
Our UFF converter takes a frozen .pb file as input,
so please convert your tflite model into .pb format first.
Here is some related information for your reference:
https://stackoverflow.com/questions/53664279/converting-tflite-to-pb
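If you still have the original TensorFlow model that produced the .tflite, regenerating a frozen .pb from it directly is often more reliable than converting the .tflite back. A minimal sketch using the TF 1.x API (the checkpoint path and output node name below are hypothetical placeholders, not from this thread):

```python
import tensorflow as tf  # TF 1.x API (use tf.compat.v1 on TF 2.x)

# Hypothetical paths/node name -- replace with your own model's values.
CHECKPOINT = "mobilenet/model.ckpt"
OUTPUT_NODE = "MobilenetV2/Predictions/Reshape_1"

with tf.Session(graph=tf.Graph()) as sess:
    # Restore the graph and weights from the training checkpoint.
    saver = tf.train.import_meta_graph(CHECKPOINT + ".meta")
    saver.restore(sess, CHECKPOINT)
    # Fold variables into constants so the graph is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [OUTPUT_NODE])
    with tf.gfile.GFile("mobilenet_frozen.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```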

After that, you can follow our sample to convert .pb → uff → TensorRT engine.

/usr/src/tensorrt/samples/sampleUffSSD/
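For reference, a rough Python sketch of the .pb → .uff → engine path, assuming TensorRT 7.x-style APIs as shipped with JetPack (the input/output node names and shapes are hypothetical; check your frozen graph for the real ones, and note the exact builder/parser API varies by TensorRT version):

```python
import tensorrt as trt
import uff

# Hypothetical node names/shape -- inspect your frozen graph for the real ones.
INPUT_NODE, INPUT_SHAPE = "input", (3, 224, 224)
OUTPUT_NODE = "MobilenetV2/Predictions/Reshape_1"

# Step 1: frozen .pb -> .uff
uff.from_tensorflow_frozen_model(
    "mobilenet_frozen.pb", [OUTPUT_NODE], output_filename="mobilenet.uff")

# Step 2: .uff -> serialized TensorRT engine
logger = trt.Logger(trt.Logger.WARNING)
with trt.Builder(logger) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input(INPUT_NODE, INPUT_SHAPE)
    parser.register_output(OUTPUT_NODE)
    parser.parse("mobilenet.uff", network)
    builder.max_workspace_size = 1 << 28  # 256 MB scratch space
    engine = builder.build_cuda_engine(network)
    with open("mobilenet.engine", "wb") as f:
        f.write(engine.serialize())
```

The convert-to-uff command-line tool bundled with TensorRT can also do step 1 if you prefer not to script it.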

2. AFAIK, there are some issues with loading a tflite model via TF-TRT.

We recommend using pure TensorRT instead, since it will give you better performance.
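Once you have the serialized engine, running it with the TensorRT Python API plus pycuda looks roughly like the sketch below (not the sample's code; it assumes a single input and single output binding):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- initializes the CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("mobilenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (input and output).
bindings, host_bufs, dev_bufs = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host); dev_bufs.append(dev); bindings.append(int(dev))

# Copy a dummy input, run inference, copy the output back.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
print("output size:", host_bufs[1].size)
```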

Thanks.

  1. Regarding converting .tflite to .pb:
  • The Stack Overflow link suggests using TOCO to convert .tflite to .pb, but the consensus appears to be that TOCO no longer supports this conversion (an error message is encountered).
  • Is there another way to go from .tflite to .pb to .uff to a TensorRT engine?

I am trying to use the DeepStream 5.0 SDK to accelerate the inference time of a .tflite model on a Jetson Xavier NX. Thank you.

Hi swchew5649 ,

Please open DeepStream SDK related issues in the DeepStream SDK forum: DeepStream SDK - NVIDIA Developer Forums