Hi all. I have trained a detection model on a 1080 Ti with PyTorch 1.0.1, and GPU memory use during inference is about 500 MB. However, when I run the same model on a 2080 Ti, GPU memory use increases to about 850 MB. What causes this, and how can I reduce the extra ~350 MB on the 2080 Ti?
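One way to narrow this down is to check whether the extra memory is actually held by tensors or is fixed CUDA-context overhead: nvidia-smi reports the whole context (driver, kernel images, cuDNN workspaces), which is typically larger on Turing (2080 Ti) than on Pascal (1080 Ti), while PyTorch's own counters report only tensor allocations. A minimal diagnostic sketch (the helper name `gpu_memory_report` is mine; `torch.cuda.memory_cached` is the counter name in PyTorch 1.0.x, later renamed `memory_reserved`):

```python
def gpu_memory_report():
    """Return PyTorch's own GPU allocation counters in MiB.

    Returns an empty dict when PyTorch or a CUDA device is unavailable,
    so the sketch is safe to run anywhere.
    """
    try:
        import torch
    except ImportError:
        return {}
    if not torch.cuda.is_available():
        return {}
    mib = 1024 ** 2
    return {
        # bytes currently held by live tensors
        "allocated_mib": torch.cuda.memory_allocated() / mib,
        # bytes held by PyTorch's caching allocator (>= allocated)
        "cached_mib": torch.cuda.memory_cached() / mib,
    }

print(gpu_memory_report())
```

If `allocated_mib` is roughly the same on both cards while nvidia-smi differs by ~350 MB, the gap is context overhead rather than your model, and there is little to reclaim from the model side.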