We are using a TensorRT (.pb) model and a Keras (.hdf5) model for object detection on the Nvidia Jetson TX2 board; both models are the same size, 167 MB.
The issue we are facing: when we use the Keras model for detection, RAM, GPU, and CPU usage are all normal, but when we use the TensorRT model, RAM usage is much higher than with the Keras model, while GPU and CPU usage remain normal.
Below is the output of the tegrastats command:
1. While running the Keras model
RAM usage - 1686 / 7852 MB (used / total)
CPU usage - 21.47 %
Max GPU usage - 64 %
2. While running the TensorRT model
RAM usage - 5615 / 7852 MB (used / total)
CPU usage - 18.49 %
Max GPU usage - 40 %
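To put the gap in numbers, a quick check against the tegrastats figures above:

```python
# RAM used (MB) as reported by tegrastats for each run (values from above).
keras_ram_mb = 1686
trt_ram_mb = 5615

extra_mb = trt_ram_mb - keras_ram_mb
print(extra_mb)  # → 3929, i.e. ~3.8 GB more RAM held by the TensorRT run
print(round(trt_ram_mb / keras_ram_mb, 1))  # → 3.3, roughly 3.3x the Keras run
```

Since the TX2's 8 GB is shared between CPU and GPU, this extra ~3.9 GB leaves very little headroom for anything else on the board.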
Kindly help us fix/reduce the RAM usage while the TensorRT model is running.
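In case it is relevant, below is a minimal sketch of the two memory knobs we understand are available on TF 2.2 with TF-TRT: on-demand GPU memory growth, and the TensorRT workspace size set at conversion time. The model paths and the 256 MB workspace value are illustrative assumptions, not our exact code:

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Let TensorFlow grow GPU memory on demand instead of reserving it up front.
# On the TX2 the GPU shares system RAM, so this directly affects what
# tegrastats reports as used RAM.
for gpu in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Illustrative TF-TRT conversion with a smaller engine workspace.
# "saved_model_dir" is a placeholder path, and 1 << 28 (256 MB) is an
# example value, not a recommendation.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    max_workspace_size_bytes=1 << 28,
    precision_mode="FP16",
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="saved_model_dir",
    conversion_params=params,
)
converter.convert()
converter.save("trt_saved_model_dir")
```

Would reducing max_workspace_size_bytes along these lines be the right direction, or is the extra RAM coming from somewhere else?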
Below are the package versions we are using on the TX2:
Jetpack - 4.4
CUDA - 10.2
Tensorflow - 2.2.0
Keras - 2.4.3
cuDNN - 8.0
Ubuntu - 18.04
TensorRT - 7.1.3