Titan V Can't Reach the Maximum TDP with gpu_burn

Hi All,

I'm using the NVIDIA GeForce Linux driver 387.34 and stressing the Titan V with gpu_burn, and I found the card won't reach its maximum TDP of 250W. I've tried with Persistence Mode disabled and manually raised the application clocks with "nvidia-smi -ac 850,1912", but a Titan Xp with the same driver version doesn't have this problem, so I think the issue is specific to this card. Has anyone seen something similar? Thanks!

The V100 uses a more energy-efficient 12 nm FinFET manufacturing process.

Maybe, unlike on Pascal cards, just loading the FP32/FP64 ALUs is not enough to max out the TDP on this card.

The V100 chip (also found in the Titan V) spends a lot of silicon area on the new Tensor Core units. Maybe the gpu_burn tool would have to be extended to also use the Tensor Cores to reach those 250W?
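If that turns out to be the case, a minimal sketch of what such an extension could look like, using the CUDA WMMA API (the kernel name, tile layout, and launch assumptions are mine, not from gpu_burn; it needs to be compiled for sm_70 and launched with one warp of 32 threads per block):

#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// Illustrative kernel: each warp repeatedly multiplies a 16x16x16 FP16 tile
// to keep the Tensor Cores busy; 'iters' just controls how long the load runs.
__global__ void tensor_core_burn(const half *a, const half *b, float *c, int iters)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::load_matrix_sync(fa, a, 16);
    wmma::load_matrix_sync(fb, b, 16);
    wmma::fill_fragment(acc, 0.0f);

    for (int i = 0; i < iters; ++i)
        wmma::mma_sync(acc, fa, fb, acc);   // D = A*B + C on the Tensor Cores

    wmma::store_matrix_sync(c, acc, 16, wmma::mem_row_major);
}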

It could also be that the gpu_burn utility has not yet been updated to correctly identify compute capability 7.0 devices, so it may fail to launch enough thread blocks to fully load the GPU.
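For what it's worth, here is a rough sketch of how a stress tool could size its grid from the device itself rather than from a hard-coded count (the kernel and helper names are placeholders, not gpu_burn internals):

#include <cuda_runtime.h>

__global__ void burn_kernel(float *data) { /* stress loop elided */ }

// Placeholder helper: derive the grid size from the actual SM count and the
// occupancy the kernel achieves, instead of a constant that may be too small
// for a large Volta part.
int blocks_for_full_load(int device, int threads_per_block)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);

    int blocks_per_sm = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &blocks_per_sm, burn_kernel, threads_per_block, 0 /* dynamic smem */);

    return blocks_per_sm * prop.multiProcessorCount;
}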

How is using less power a “problem”, unless you want to use it to heat your home?

I think this is the first time I am encountering a “complaint” that a processor’s power draw is too low!

From first-hand knowledge, writing a “max power” test application for a specific processor generally requires careful fine tuning, in particular with regard to the optimal balance between memory activity and computation, as well as the degree of operation-level parallelism. On a per-operation basis, moving data is a lot more expensive energetically than doing computation these days. So unless your existing app has already been tuned for the V100, you likely won’t be able to hit some empirically determined “maximum” power.
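As a rough illustration of the kind of knob involved (the ratio below is a placeholder to be swept experimentally, not a known optimum for any GPU):

// Illustrative stress kernel: interleaves global-memory traffic with FMA work.
// FMAS_PER_LOAD is the tuning knob; the value that draws the most power has
// to be found empirically for each GPU.
#define FMAS_PER_LOAD 32   // placeholder, not a tuned number

__global__ void power_mix(float *buf, size_t n, int iters)
{
    size_t idx = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    float acc = 0.0f;
    for (int i = 0; i < iters; ++i) {
        float v = buf[(idx + (size_t)i * 4096) % n];   // memory activity
        #pragma unroll
        for (int k = 0; k < FMAS_PER_LOAD; ++k)        // compute activity
            acc = fmaf(acc, 1.000001f, v);
    }
    buf[idx % n] = acc;   // keep the result observable so nothing is optimized away
}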

It should also be noted that modern semiconductor processes have quite a bit of manufacturing tolerance (if an insulator is just a dozen atom layers thick, under-etching or over-etching by just one atom layer already makes a noticeable difference). This results in “cold” and “hot” parts in the fab output, and without access to a large-ish sample, it is impossible to know what the distribution looks like. The difference could be on the order of 10%.

In addition, the power sensors on the GPU are not high-precision sensors. Unless specified otherwise by NVIDIA, you might want to assume they are accurate to about +/- 5%.
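For reference, the value nvidia-smi shows comes from the same on-board sensor and can also be read programmatically through NVML (minimal sketch, error handling omitted; link against -lnvidia-ml):

#include <stdio.h>
#include <nvml.h>

// Minimal sketch: read the board power draw of GPU 0 through NVML.
int main(void)
{
    nvmlDevice_t dev;
    unsigned int milliwatts = 0;

    nvmlInit();
    nvmlDeviceGetHandleByIndex(0, &dev);
    nvmlDeviceGetPowerUsage(dev, &milliwatts);   // reported in milliwatts
    printf("Power draw: %.1f W\n", milliwatts / 1000.0);
    nvmlShutdown();
    return 0;
}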

The Titan V can be used for both compute and graphics applications. Graphics applications utilize hardware not used by compute apps, such as raster units, while compute apps utilize hardware not used by graphics, such as shared memory. A TDP rating has to be determined across a large universe of both graphics and compute applications. It is possible (not sure how likely) that you would need a graphics workload to get close to the specified TDP.

Hi All,

Thanks for the recommendations. From my experiments, the highest power draw for the Titan V under gpu_burn stress is around 190W. I've also noticed that the Volta GPU has Tensor Cores, which the GEMM in gpu_burn doesn't seem to use. Is there any way to stress the Tensor Cores as well? Many thanks!
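Would something along these lines be the right direction? A sketch (untested on my setup) that runs a large FP16 GEMM through cuBLAS with tensor-op math enabled; the problem size and iteration count are just placeholders, and the inputs are left uninitialized since only the load matters:

#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main(void)
{
    const int N = 8192;               // placeholder problem size
    half  *A, *B;
    float *C;
    cudaMalloc((void **)&A, sizeof(half)  * N * N);
    cudaMalloc((void **)&B, sizeof(half)  * N * N);
    cudaMalloc((void **)&C, sizeof(float) * N * N);

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);   // allow Tensor Core kernels

    const float alpha = 1.0f, beta = 0.0f;
    for (int i = 0; i < 100; ++i)     // repeat to sustain the load
        cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N,
                     &alpha, A, CUDA_R_16F, N,
                             B, CUDA_R_16F, N,
                     &beta,  C, CUDA_R_32F, N,
                     CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}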