As soon as clocking is dynamic, based on a GPU’s temperature and power draw, it cannot be made consistent across cards: GPUs of the same type may operate at different temperatures (in a rack, the machines at the top generally run warmer than those at the bottom), and manufacturing tolerances lead to variations in power draw. These manufacturing variations have grown as silicon feature sizes have shrunk: etching away plus or minus two atomic layers matters far less for a feature that is 50 atomic layers across than for one that is only 10 layers across.
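The variation is easy to observe directly. Here is a minimal sketch using the NVML API (the library behind nvidia-smi) that reads each GPU’s temperature, power draw, and current SM clock; run it on a multi-GPU box under load and nominally identical cards will typically report different boost clocks. The program name is just for illustration; link against -lnvidia-ml.

```c
/* Minimal sketch: observe per-GPU variation in temperature, power draw,
   and the resulting boost clock via NVML.
   Build with something like: gcc clockwatch.c -o clockwatch -lnvidia-ml */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int deviceCount, i;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetCount(&deviceCount);
    for (i = 0; i < deviceCount; i++) {
        nvmlDevice_t dev;
        unsigned int tempC, powerMilliwatts, smClockMHz;

        nvmlDeviceGetHandleByIndex(i, &dev);
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
        nvmlDeviceGetPowerUsage(dev, &powerMilliwatts); /* reported in mW */
        nvmlDeviceGetClockInfo(dev, NVML_CLOCK_SM, &smClockMHz);
        printf("GPU %u: %u C, %.1f W, SM clock %u MHz\n",
               i, tempC, powerMilliwatts / 1000.0, smClockMHz);
    }
    nvmlShutdown();
    return 0;
}
```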
Application clocks can partially address this by dialing in the same fixed clock on all GPUs, but if the chosen application clock is high, there is still a possibility of hitting power or thermal limits that force throttling. Running all cards at default clocks, without any clock boosting, is one way of ensuring that all cards run consistently, without throttling, under all workloads, as long as the vendor-specified environmental operating conditions are met.
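For completeness, a sketch of how one might pin application clocks programmatically and then check whether throttling kicked in anyway; the same can be done from the command line with nvidia-smi -ac and nvidia-smi -q -d PERFORMANCE. The 2600,758 MHz memory/graphics pair below is just an example (valid on some Kepler Teslas); use nvidia-smi -q -d SUPPORTED_CLOCKS to find the pairs your card accepts, and note that setting application clocks generally requires administrator privileges.

```c
/* Sketch: pin all GPUs to a fixed application clock, then check the
   throttle-reason bitmask to see if power/thermal limits override it.
   The 2600/758 MHz pair is an example only; query supported clocks
   for the values valid on your particular card. */
#include <stdio.h>
#include <nvml.h>

int main(void)
{
    unsigned int deviceCount, i;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetCount(&deviceCount);
    for (i = 0; i < deviceCount; i++) {
        nvmlDevice_t dev;
        unsigned long long reasons;

        nvmlDeviceGetHandleByIndex(i, &dev);
        if (nvmlDeviceSetApplicationsClocks(dev, 2600, 758) != NVML_SUCCESS)
            printf("GPU %u: could not set application clocks "
                   "(permissions? unsupported pair?)\n", i);
        nvmlDeviceGetCurrentClocksThrottleReasons(dev, &reasons);
        if (reasons & nvmlClocksThrottleReasonSwPowerCap)
            printf("GPU %u: throttled by software power cap\n", i);
        if (reasons & nvmlClocksThrottleReasonHwSlowdown)
            printf("GPU %u: hardware slowdown (thermal/power brake)\n", i);
    }
    nvmlShutdown();
    return 0;
}
```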
Market differentiation is one way to recoup the incremental cost of offering “compute”. As discussed in a forum thread just the other day, there are GPU hardware features that are needed exclusively, or predominantly, for “compute”, such as shared memory or double-precision units. That is added hardware cost. On top of that there is a large non-recurring engineering (NRE) cost for software. Other processor companies ask their customers to pay considerable sums for their compilers, tools, and libraries, which is a competing approach to defraying that cost.
Apparently, market differentiation rubs some people the wrong way when it is achieved with the help of software; they speak of “crippling”. The logical conclusion, it seems to me, is that once overall volume can justify the cost of doing so, it is preferable to bake the market differentiation into the hardware. While the end effect on customers is largely the same (more features cost more money), it fixes the PR issue. This may be what we are observing with Maxwell; at least, I tend to think that is the case.