Different memory footprint for the same model on Nano vs Xavier NX?

Hi, I’ve just observed that the same model occupies a very different amount of RAM depending on the platform. For example:
Platform 1:
Jetson Nano running JetPack 4.3
OpenCV 4.5.0 with CUDA support enabled
YOLOv3 loaded via cv2.dnn.readNetFromDarknet

Platform 2:
Jetson Xavier NX running JetPack 4.5.1
OpenCV 4.5.2 with CUDA support enabled
YOLOv3 loaded via cv2.dnn.readNetFromDarknet

On platform 1 each instance occupies 1.5 GB of memory, while on platform 2 each instance occupies almost 3 GB. Is this normal? I would have expected some small difference, but not twice as much memory on one platform compared to the other. Does anyone have advice on how to tell what the reason behind this difference may be?

I’m not using TensorRT or anything similar on either setup, just a model loaded via the cv2.dnn module.
I mention this because I’ve seen other threads about differing memory footprints that circled around TensorRT optimizations, but that would not be the case here either.

Thank you.
Best regards,


Would you mind aligning the JetPack versions first?

Please note that cuDNN is v7.6 in JetPack 4.3 but v8.0 in JetPack 4.5.1.
Since that is a major release upgrade, the required memory and behavior can be very different.
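One way to quantify the footprint consistently on both platforms is to snapshot the process’s peak resident set size before and after loading the network (on Jetson, GPU memory is carved out of the same physical RAM). A stdlib-only sketch; note that on Linux `ru_maxrss` is reported in kilobytes:

```python
import resource

def peak_rss_mb():
    """Peak resident set size of the current process, in megabytes.

    On Linux (including JetPack), ru_maxrss is reported in kilobytes.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

# Usage sketch: record the peak before and after loading the model;
# the delta approximates the model's memory footprint, e.g.
#   before = peak_rss_mb()
#   net = load_model()        # hypothetical loader, e.g. via cv2.dnn
#   print(peak_rss_mb() - before)
```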


Ok, I’ll do that now. Will keep you posted.
Thank you.
Best regards.

Hi, is there any 4.3 release of JetPack for the Xavier NX? The oldest I could find is 4.4. I’m still setting things up, but the best test would have been to run 4.3 on both platforms, since that is where the model apparently has the smaller memory footprint.


We started supporting Xavier NX from JetPack 4.4.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.