Relationship between GPU memory and batch size in a deep learning algorithm

Hello,
I am using the YOLOv5 algorithm. The GPU is an RTX 5000, the input image size is 320x640, and I tested batch sizes from 1 to 100. The detailed results are below:

batch   memory (MB)   CPU (%)   time (ms)
  1        1076       103-8.6       8.6
  5        1180       103-8.6      17.4
 10        1312       103-8.6      30.9
 15        1546       103-8.6      43.6
 20        1842       103-8.6      60.1
 25        3024       103-8.6      92.4
 30        3676       103-8.6     103.9
 35        4040       103-8.6     131.1
 40        4580       103-8.6     145.9
 45        5182       103-8.6     162.3
 50        5956       103-8.6     181.2
 55        6668       103-8.6     195.6
 60        7444       103-8.6     212.4
 65        8390       103-8.6     244
 70        9282       103-8.6     262
 75       10232       103-8.6     280.3
 80       11240       103-8.6     304.1
 85       12420       103-8.6     322.3
 90       13548       103-8.6     339.4
 95       14732       103-8.6     358.1
100       15302       103-8.6     402.2

My question is: why does the memory usage between batch size 60 and batch size 100 grow out of proportion? The 15302 MB at batch size 100 is more than I expected.
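For context, here is a sketch of the kind of loop that can produce such a sweep (assuming PyTorch and the torch.hub yolov5s model; my actual script may differ in the model variant, warm-up, and memory counter used):

```python
import time
import torch

# Assumed model: the small YOLOv5 variant from torch.hub, run on the GPU.
model = torch.hub.load("ultralytics/yolov5", "yolov5s").cuda().eval()

for batch in [1] + list(range(5, 101, 5)):
    images = torch.rand(batch, 3, 320, 640, device="cuda")  # dummy input batch
    torch.cuda.reset_peak_memory_stats()

    with torch.no_grad():
        model(images)                      # warm-up run, excluded from timing
        torch.cuda.synchronize()
        start = time.perf_counter()
        model(images)
        torch.cuda.synchronize()           # wait for the GPU before stopping the clock
        elapsed_ms = (time.perf_counter() - start) * 1000

    peak_mb = torch.cuda.max_memory_allocated() / 1024**2
    print(f"batch={batch:3d}  peak_allocated={peak_mb:8.0f} MB  time={elapsed_ms:6.1f} ms")
```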

Please clarify what you mean by “out of proportion”. Where is the “memory” data coming from and what does it represent?

What did you expect and why?
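In particular, the total reported by nvidia-smi, the memory actually allocated to live tensors, and the memory reserved by PyTorch's caching allocator can differ considerably, so it matters which counter the table shows. A minimal sketch for printing all three, assuming PyTorch is the framework (the post does not say):

```python
import torch

x = torch.rand(32, 3, 320, 640, device="cuda")  # any workload that touches the GPU

alloc = torch.cuda.memory_allocated() / 1024**2     # memory used by live tensors
reserved = torch.cuda.memory_reserved() / 1024**2   # memory held by the caching allocator
free, total = torch.cuda.mem_get_info()             # driver-level view, same source as nvidia-smi
print(f"allocated={alloc:.0f} MB  reserved={reserved:.0f} MB  "
      f"used (driver)={(total - free) / 1024**2:.0f} MB")
```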

The entire data set is not exactly described by a straight line; it is a slightly S-shaped curve. No set of experimental data will match a linear regression exactly. Here, it may come down to memory allocation granularity. If I fit the data for batch sizes in [40, 100], the linear fit is roughly memory = 186 * batch - 3400, and all deviations look minor.
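For what it's worth, that fit can be reproduced directly from the numbers in the table above; a quick numpy sketch of the least-squares fit over batch sizes 40 through 100:

```python
import numpy as np

batch = np.arange(40, 101, 5)
memory_mb = np.array([4580, 5182, 5956, 6668, 7444, 8390, 9282, 10232,
                      11240, 12420, 13548, 14732, 15302])

slope, intercept = np.polyfit(batch, memory_mb, 1)        # least-squares linear fit
print(f"memory ≈ {slope:.0f} * batch + {intercept:.0f}")  # ≈ 186 * batch - 3397

residuals = memory_mb - (slope * batch + intercept)
print("largest deviation from the line:", int(np.abs(residuals).max()), "MB")
```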