TensorRT runtime creation takes 2 GB when torch is imported

My Device:

Name - Jetson Nano 4 GB
JetPack - 4.5
TensorRT - 7.1.3
PyTorch - 1.9.0

I am trying to load my TensorRT engine, but the process freezes due to memory usage. While investigating what caused my RAM to fill up, I found that creating a TensorRT runtime with torch imported consumes almost all of my memory.

import tensorrt as trt
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(TRT_LOGGER)

The above code snippet takes only 200-300 MB of RAM, which is expected. But when I run the code below, RAM usage increases by 2 GB.

import tensorrt as trt
import torch
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(TRT_LOGGER)

The only difference between the two snippets is the added torch import statement.
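One way to attribute the memory growth to a specific import is to snapshot the process's resident set size (RSS) before and after it. Here is a minimal sketch, assuming a Linux /proc filesystem (as on Jetson); the 50 MB placeholder allocation is just a stand-in for the import being measured:

```python
import os

def rss_mb():
    """Return this process's resident set size in MB, read from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024  # VmRSS is reported in kB
    raise RuntimeError("VmRSS not found in /proc/self/status")

before = rss_mb()
# Replace this allocation with `import torch` (or `import tensorrt as trt`)
# to see how much RSS each import adds on the device.
buf = bytearray(50 * 1024 * 1024)  # zero-filled, so all ~50 MB of pages are touched
after = rss_mb()
print(f"RSS before: {before:.0f} MB, after: {after:.0f} MB, delta: {after - before:.0f} MB")
```

Running this once with `import torch` and once with `import tensorrt as trt` in place of the allocation shows the per-import cost directly.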

Please let me know how to resolve this, or is this expected behaviour?

Note: I installed torch using this link

Hi,

Thanks for reporting this.
We will give it a try and share more information with you later.

Thanks


Hi,

We cannot reproduce this issue with Nano+JP4.5.1+PyTorch1.9.0.

When running import torch on the console, the used memory increases from 1072 MB to 1142 MB, which is much smaller than 2 GB.
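For reference, system-wide used-memory figures like the ones above can be read by parsing /proc/meminfo. A sketch, assuming a standard Linux kernel (3.14+, which provides the MemAvailable field):

```python
def meminfo_mb():
    """Parse /proc/meminfo into a dict of values in MB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0]) / 1024  # values are in kB
    return info

m = meminfo_mb()
used = m["MemTotal"] - m["MemAvailable"]
print(f"used {used:.0f} MB of {m['MemTotal']:.0f} MB")
```

Comparing this value before and after the import gives the system-level delta, which includes allocations outside the Python process itself.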

Could you double-check it again?
Thanks.

Hi,
I will check again on my machine. Could you tell me how you installed torch?

Hi,

We installed PyTorch v1.9.0 from the same topic you shared:

Thanks.