Hi, I’ve just observed that the same model occupies a very different amount of RAM depending on the platform. For example:
Platform 1:
Jetson Nano running with JetPack 4.3
OpenCV 4.5.0 with cuda support enabled
YOLOv3 loaded via cv2.dnn.readNetFromDarknet
Platform 2:
Jetson Xavier NX running JetPack 4.5.1
OpenCV 4.5.2 with cuda support enabled
YOLOv3 loaded via cv2.dnn.readNetFromDarknet
On platform 1 each instance occupies 1.5 GB of memory, while on platform 2 each instance occupies almost 3 GB. Is this normal? I would have expected some small difference, but not twice as much memory on one platform compared to the other. Does anyone have any advice on how to tell what the reason behind this difference may be?
On neither setup am I using TensorRT or anything like that, just a model loaded via the cv2.dnn module.
I mention this because I’ve seen other threads about different memory footprints that circled around TensorRT optimizations, but that would not apply here.
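For reference, this is roughly how I’m measuring the footprint of each instance (the helper is Linux-only since it parses /proc; the cfg/weights paths in the commented section are placeholders for my actual files):

```python
import re

def rss_mb():
    """Resident set size of this process in MB, read from /proc/self/status (Linux)."""
    with open("/proc/self/status") as f:
        match = re.search(r"VmRSS:\s*(\d+)\s*kB", f.read())
    return int(match.group(1)) / 1024.0

before = rss_mb()

# Model load, same on both platforms (paths are placeholders):
# import cv2
# net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
# net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
# net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

after = rss_mb()
print(f"memory delta after load: {after - before:.1f} MB")
```

Since the Jetsons use unified memory, the CUDA allocations from the DNN backend show up in this process RSS figure as well, which is where I see the 1.5 GB vs 3 GB difference.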
Thank you.
Best regards,
Eduardo