I’m trying to benchmark the TX2 on different quantized TensorFlow models in TFLite format. The frozen .pb models run directly on the TX2, but the uint8 and float16 TFLite models do not. I have tried several approaches and only get errors, so I’d like to know whether the TX2 supports TFLite and whether there are any references. Thanks for your help.
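For context, here is a minimal sketch of how such uint8 and float16 TFLite models can be produced with post-training quantization. This uses the TF 2.x Keras converter API and a toy model as a stand-in for the frozen .pb; the model architecture and the representative-dataset generator are hypothetical, not taken from my actual setup.

```python
import numpy as np
import tensorflow as tf

# Toy model standing in for the frozen .pb (hypothetical architecture).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# Representative data drives the quantization calibration (hypothetical).
def representative_data():
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

# Integer post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_int = converter.convert()

# float16 variant: weights stored as float16, compute stays float32.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16 = converter.convert()

print(len(tflite_int) > 0, len(tflite_fp16) > 0)
```

Both `convert()` calls return a TFLite flatbuffer as bytes, which is what I then copy to the TX2.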
How do you run your model? TensorFlow or TensorRT?
If you are using TensorFlow, which package did you install?
I used the TensorFlow package installed via JetPack (NVIDIA SDK Manager) on the TX2, version 1.14.0.
So the question is whether standard TensorFlow supports TensorFlow Lite models.
It’s recommended to check with the TensorFlow team for a definitive answer.
The package from JetPack is built from the standard TensorFlow branch.
I can run the TFLite model with TensorFlow on a desktop CPU, but when I run it on the TX2 (GPU or CPU), I get this error:
RuntimeError: tensorflow/lite/kernels/dequantize.cc:62 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 was not true. Node number 0 (DEQUANTIZE) failed to prepare
So I came here to find out whether TensorFlow on the TX2 supports TFLite models.
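For reference, this is roughly how I invoke the model with the TFLite interpreter bundled in the TensorFlow package; the error above surfaces at `allocate_tensors()`. To keep the sketch self-contained, it converts a toy float16-quantized model in memory instead of loading my actual file (the model architecture here is hypothetical):

```python
import numpy as np
import tensorflow as tf

# Build and convert a tiny float16-quantized model just for the demo
# (my real model is loaded from a .tflite file instead).
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Load the flatbuffer into the TFLite interpreter and run one inference.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()  # the DEQUANTIZE error is raised here on TX2

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
print(result.shape)
```

On a desktop CPU this runs through and prints the output shape; on the TX2 the same flow fails as shown.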
Sorry to keep you waiting.
Can this also be reproduced with our latest TensorFlow release?
If yes, would you mind sharing the TFLite model and a simple script that reproduces the issue?
We will pass this issue to our internal team for suggestions.
Sorry for the late reply.
I am focusing on some other issues now; when I get back to TFLite on the TX2, I will try your advice.