We are three computer engineering students at the Technische Hochschule Mittelhessen in Germany. We tested several NVIDIA cards with the AIBench (AIBench Micro, BenchCouncil) microbenchmarks Convolution, Fully Connected, MaxPooling, and AvgPooling, and evaluated the results mainly for computing time and energy consumption.
The cards were a GTX 1070 Ti, a GTX 1660 Super, an RTX 2070 Super, and an RTX 3070 Ti. We visualized part of the collected data on a website for a quick overview (Projektvisualisierung). Before measuring, we expected the GTX 1660 Super to be the most energy-efficient card and the RTX 3070 Ti to be the fastest.
After running the benchmarks (CUDA 10.0 and TensorFlow GPU 1.15), we found that the GTX 1660 Super was the fastest card of them all, and thanks to its low power consumption it was also the most energy-efficient one. Even after trying different scenarios (CUDA 11.4 for the RTX 3070 Ti, different screen resolutions, minimizing background load), the GTX 1660 Super remained the fastest. Due to some restrictions, the benchmarks with the different cards had to be run on different systems; the only cross-system factor we found that might explain the differences in computing speed, apart from the graphics card itself, is the frequency of the RAM.
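To make our energy-efficiency comparison concrete, here is a minimal sketch of how energy per benchmark run can be estimated from periodically sampled GPU power draw (e.g. readings taken at a fixed interval with `nvidia-smi --query-gpu=power.draw`). The function and the sample values below are purely illustrative, not our actual harness:

```python
def energy_joules(power_samples_w, interval_s):
    """Approximate energy (J) by trapezoidal integration of power (W) over time.

    power_samples_w: power readings taken every interval_s seconds.
    """
    if len(power_samples_w) < 2:
        return 0.0
    total = 0.0
    for a, b in zip(power_samples_w, power_samples_w[1:]):
        # Average adjacent samples, multiply by the sampling interval.
        total += (a + b) / 2.0 * interval_s
    return total

# Hypothetical power trace (watts) sampled every 0.5 s during one run:
samples = [120.0, 180.0, 185.0, 182.0, 130.0]
print(energy_joules(samples, 0.5))  # → 336.0 joules over the 2 s window
```

Dividing the benchmark's work (e.g. processed samples) by this energy then gives an efficiency figure that can be compared across cards even when their run times differ.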
If you want to go into further detail, all benchmark files can be downloaded from the same website.
What do you think of these surprising results? How could we improve our setup? How can the GTX 1660 Super be faster than the RTX 3070 Ti?
Thanks in advance.