I used TensorRT's Python API to load the same Swin-Tiny segmentation engine on different hardware, and found that the host-side memory occupied by the model differs between the machines. What is the reason for this?
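For reference, this is roughly how I measure the host-side footprint. It is a minimal stdlib-only sketch: the actual TensorRT deserialization step is replaced here by a stand-in 32 MiB allocation, and the helper name `host_rss_mb` is my own. In the real script, the stand-in line is where `trt.Runtime(...).deserialize_cuda_engine(plan_bytes)` runs.

```python
import resource
import sys

def host_rss_mb() -> float:
    # Peak resident set size of this process; ru_maxrss is reported
    # in KiB on Linux and in bytes on macOS.
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss /= 1024  # bytes -> KiB
    return rss / 1024  # KiB -> MiB

baseline = host_rss_mb()

# Stand-in for the engine load; in the real script this would be e.g.
#   runtime = trt.Runtime(logger)
#   engine = runtime.deserialize_cuda_engine(plan_bytes)
payload = bytearray(32 * 1024 * 1024)  # 32 MiB zero-filled buffer

loaded = host_rss_mb()
print(f"host memory grew by ~{loaded - baseline:.1f} MiB")
```

Running this before and after the deserialize call on each machine is how I got the differing numbers.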
My guess is that the RTX 3080 Ti has more compute capability and correspondingly higher data throughput, so TensorRT reserves more memory for the model on that machine. Is this guess correct?
Looking forward to your reply. Thanks!