Memory usage difference between Jetson Nano and PC when loading the same model

Hello, I am loading my customized TensorFlow model in SavedModel format both on a PC and on a Jetson Nano.

I am loading the model as follows:

root@desktop:/# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> model = tf.saved_model.load('./model')

For the tests on the PC, I am using the official TensorFlow Docker image with no GPU support (CPU only): tensorflow/tensorflow:2.5.0

For the tests on the Jetson Nano, I am using the NVIDIA Docker image with the same TensorFlow version:

However, loading exactly the same model files (without any optimization), the model uses about twice as much memory on the Jetson Nano as on the PC for my use case (about 600 MB on the PC vs. 1.4 GB on the Jetson Nano).

PC:

user@desktop:~$ ps aux | sort -rnk 4 | head -1
root      103954 31.7  1.9 6223268 627804 pts/0  Sl+  17:47   0:06 python3

Jetson Nano:

user@jetson:~$ ps aux | sort -rnk 4 | head -1
root      4156 39.3 35.7 12001056 1450352 pts/0 Sl+ 17:51   1:47 python3
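For what it's worth, the ps figures above can also be cross-checked from inside the Python process itself. A minimal Linux-only sketch with no extra dependencies (the helper name `rss_mb` is mine, not from any library):

```python
def rss_mb():
    """Return this process's resident set size in MB, read from /proc (Linux only)."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1]) / 1024  # VmRSS is reported in kB
    return None

print(f'RSS: {rss_mb():.1f} MB')
```

Printing this once before and once after `tf.saved_model.load` isolates how much of the footprint comes from loading the model versus importing TensorFlow itself.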

I tried optimizing the model with TF-TRT (following the instructions from here), but the memory usage didn’t change much.

Can you provide some hints on how to investigate what is making the model use so much memory when running on the Jetson Nano?

Thank you!


AFAIK, TensorFlow keeps a copy of the model on the GPU.
Since the Nano is a shared-memory system (CPU and GPU share the same physical RAM), the memory usage appears roughly doubled compared to CPU-only mode.
You would expect to see similar behavior on the host PC if GPU mode were used there as well.
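If the up-front GPU allocation is what's hurting, one common mitigation (a sketch, not Jetson-specific guidance) is to enable TensorFlow's on-demand memory growth before loading the model:

```python
import tensorflow as tf

# Request on-demand GPU memory allocation instead of the default
# up-front reservation. This must run before any GPU op executes
# (i.e., before the model is loaded). On a CPU-only build the GPU
# list is simply empty and the loop is a no-op.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print(f'configured memory growth on {len(gpus)} GPU(s)')
```

With this in place, `tf.saved_model.load('./model')` should allocate GPU memory incrementally as needed rather than grabbing a large pool at startup, which can lower the apparent footprint on a shared-memory device.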

Also, please note that TF-TRT is a performance optimizer.
If you want to save memory, it is recommended to convert your model into a pure TensorRT engine instead.
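Running a standalone TensorRT engine (bypassing TensorFlow entirely) usually goes through ONNX. A hedged sketch of that path — it assumes `tf2onnx` is installed, that `trtexec` is at its default JetPack location, and the file names are illustrative, not from the original post:

```shell
# 1. Export the SavedModel to ONNX (tf2onnx must be installed).
python3 -m tf2onnx.convert --saved-model ./model --output model.onnx

# 2. Build a standalone TensorRT engine from the ONNX file.
#    --fp16 trades precision for a smaller, faster engine on the Nano.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting `model.engine` can then be deserialized and run with the TensorRT runtime alone, so the TensorFlow runtime (and its duplicated model copy) never has to be loaded on the device.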

