For example, I take a TF2.x model and convert it using TRT (the convert operation in TF2.x). Then I save it using converter.save.
When I try to load this model (for example with tf.saved_model.load) on the AGX, loading is very slow - about 5-6 minutes. The model is a CNN with about 10 M parameters and takes about 50 MB of disk space.
Is there some way to load this model faster? Do I need to do some additional conversions?
I convert the TF model to the 16-bit (FP16) format (I get a new folder with a .pb file).
Then I use the tf.saved_model.load function to load the models. Each of them loads very slowly.
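For reference, this is roughly the conversion and loading flow (a minimal sketch: the directory names and the input shape are placeholders, and the converter.build step is optional):

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the original SavedModel with TF-TRT in FP16 mode.
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="original_saved_model",  # placeholder path
    conversion_params=params,
)
converter.convert()

# Optional: pre-build the TensorRT engines with a representative input so
# they are serialized into the SavedModel instead of being rebuilt at
# first inference.
def input_fn():
    yield (np.zeros((1, 224, 224, 3), dtype=np.float32),)  # placeholder shape

converter.build(input_fn=input_fn)
converter.save("trt_fp16_saved_model")  # placeholder output dir

# Loading the converted model - this is the step that takes 5-6 minutes.
model = tf.saved_model.load("trt_fp16_saved_model")
infer = model.signatures["serving_default"]
```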
Also, which TensorFlow package do you use?
If you are not using our prebuilt package, could you give it a try?
Also, based on your description, you are using TF-TRT.
Given Jetson's limited resources, we recommend using pure TensorRT instead.
Is this an option for you?
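To illustrate the pure-TensorRT route, here is a rough sketch. It assumes the SavedModel has already been exported to ONNX (for example with tf2onnx: python -m tf2onnx.convert --saved-model <saved_model_dir> --output model.onnx) and uses the TensorRT 8.x Python bindings shipped with JetPack; the file names are placeholders:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file exported from the TF SavedModel.
with open("model.onnx", "rb") as f:  # placeholder file name
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16, matching the TF-TRT setup

# Build the engine once and serialize it to disk.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```

At runtime the serialized plan file can be deserialized with trt.Runtime and executed directly, so TensorFlow does not need to be loaded at all.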