I have this problem.
For example, I take a TF 2.x model and convert it with TF-TRT (the convert operation in TF 2.x). Then I save it with converter.save.
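Roughly, my conversion step looks like this (a minimal sketch; the directory names are placeholders):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(input_saved_model_dir='saved_model_dir')  # placeholder path
converter.convert()              # build the TF-TRT graph
converter.save('trt_model_dir')  # placeholder path for the converted SavedModel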
When I try to load this model (for example with tf.saved_model.load) on the AGX Xavier, loading is very slow: about 5-6 minutes. The model is a CNN with about 10 M parameters and takes about 50 MB of disk space.
Is there some way to load this model faster? Do I need to do some additional conversion steps?
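This is how I measure it (a sketch; trt_model_dir is a placeholder):

import time
import tensorflow as tf

start = time.time()
model = tf.saved_model.load('trt_model_dir')          # placeholder path
print('load time: %.1f s' % (time.time() - start))    # ~5-6 min on the AGX Xavier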
Please note that TensorRT optimization is hardware-dependent.
Could you do the conversion on Xavier again?
Yes, I performed all operations on Xavier:
- create the TF model (I get a folder with a .pb file),
- convert the TF model to FP16 (I get a new folder with a .pb file; sketched below).
Then I use the tf.saved_model.load function to load the models. Each of them loads very slowly.
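The FP16 conversion is the same converter call as in my first post, with the precision mode set (a sketch; the exact way the parameters are passed varies a little across TF 2.x releases, and the paths are placeholders):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='saved_model_dir',  # placeholder path
    conversion_params=params)
converter.convert()
converter.save('trt_fp16_model_dir')          # placeholder path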
Have you maximized the device performance first?
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
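You can confirm the active power mode afterwards with:

$ sudo nvpmodel -q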
Also, which TensorFlow package do you use?
If you are not using our prebuilt package, could you give it a try?
Also, based on your description, you are using TF-TRT.
Given Jetson’s limited resources, pure TensorRT is the recommended approach.
Is this an option for you?
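For reference, one common route (a sketch, not the only workflow; saved_model_dir is a placeholder, and tf2onnx must be installed separately) is to export the SavedModel to ONNX and then build a TensorRT engine with trtexec:

$ python3 -m tf2onnx.convert --saved-model saved_model_dir --output model.onnx
$ /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan --fp16

The serialized engine (model.plan) then loads in seconds with the TensorRT runtime, since no graph conversion happens at load time.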