Hello, I am loading my customized TensorFlow model (SavedModel format) on both a PC and a Jetson Nano.
I am loading the model as follows:
root@desktop:/# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> model = tf.saved_model.load('./model')
For the tests on the PC, I am using the official TensorFlow Docker image without GPU support (CPU only): tensorflow/tensorflow:2.5.0
For the tests on the Jetson Nano, I am using the NVIDIA Docker image with the same TensorFlow version: nvcr.io/nvidia/l4t-tensorflow:r32.6.1-tf2.5-py3
However, when loading exactly the same model files (without any optimization), the model uses roughly twice as much memory on the Jetson Nano as on the PC for my use case (about 600 MB on the PC vs. 1.4 GB on the Jetson Nano):
PC:
user@desktop:~$ ps aux | sort -rnk 4 | head -1
root 103954 31.7 1.9 6223268 627804 pts/0 Sl+ 17:47 0:06 python3 inference-server.py
Jetson Nano:
user@jetson:~$ ps aux | sort -rnk 4 | head -1
root 4156 39.3 35.7 12001056 1450352 pts/0 Sl+ 17:51 1:47 python3 inference-server.py
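To double-check the numbers from inside the process, a quick option would be to print the resident set size with psutil right around the model load. This is just a sketch (psutil is a third-party package, not something my actual script uses):

```python
import os

import psutil  # third-party; pip install psutil
import tensorflow as tf

proc = psutil.Process(os.getpid())

def rss_mb():
    # Resident set size of this process, in megabytes
    return proc.memory_info().rss / 1024 / 1024

print(f"before load: {rss_mb():.0f} MB")
model = tf.saved_model.load('./model')
print(f"after load:  {rss_mb():.0f} MB")
```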
I also tried optimizing the model with TF-TRT (following the instructions from here); however, the memory usage didn't change much.
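For reference, the conversion followed the standard TF-TRT workflow for TF 2.x and looked roughly like this (the paths and the FP16 precision choice are just mine):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the SavedModel with TF-TRT, using FP16 precision.
# This only works on the Jetson image, since the CPU-only PC
# image does not ship with TensorRT.
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='./model',
    conversion_params=params,
)
converter.convert()
converter.save('./model_trt')
```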
Can you give me some hints on how to investigate what is making the model use so much memory when running on the Jetson Nano?
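One thing I have not ruled out yet: since the l4t image is GPU-enabled and the Nano's GPU shares physical RAM with the CPU, TensorFlow's default GPU memory pre-allocation might account for part of the difference. A test I plan to run is loading the model with memory growth enabled (standard tf.config API, so allocation happens on demand):

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of
# grabbing a large pool up front; must run before any op
# touches the GPU.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

model = tf.saved_model.load('./model')
```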
Thank you!