Difference between devdriver and standard driver


I am a trainer at Applied Parallel Computing LLC (http://parallel-compute.com/), an NVIDIA partner. During our CUDA training sessions, I have heard this question many times - "What is the exact difference between the standard driver and the devdriver, and do I really need the latter?" - and I have to admit that so far I have not been able to give an exact answer.

Additionally, I am a packager for Mageia Linux (http://mageia.org), where I maintain the CUDA Toolkit package. So far we do not ship a devdriver in Mageia (just the standard one). I want to figure out whether we really need to ship a devdriver and, if so, how we would justify it. For instance, the package should carry a description of the form "Install this package if you intend to blah-blah-blah."

In both cases, the simple answer "you need a devdriver to use CUDA" won't work, because it is simply not true. I have been able to compile and run CUDA programs with the standard driver, from the early versions of CUDA up to the recent 5.0. So what exactly is the purpose of the devdriver, and in which cases should we convince users to switch to it?


As long as the version requirements are met (i.e., the installed driver's version is equal to or greater than the minimum version required by the toolkit), you should be fine. There should be no difference between the standard "GeForce Driver" and the "CUDA Driver".
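That compatibility condition can be checked from a program itself. A minimal sketch using two standard CUDA runtime calls, `cudaDriverGetVersion` (the highest CUDA version the installed driver supports) and `cudaRuntimeGetVersion` (the CUDA version of the runtime the program was built against); the printed messages are illustrative, not from any NVIDIA tool:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;

    // Highest CUDA version supported by the installed display driver.
    cudaDriverGetVersion(&driverVersion);
    // CUDA version of the runtime this program was compiled against.
    cudaRuntimeGetVersion(&runtimeVersion);

    // Versions are encoded as 1000*major + 10*minor, e.g. 5000 for CUDA 5.0.
    printf("Driver supports up to CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime built against CUDA %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    if (driverVersion >= runtimeVersion)
        printf("OK: this driver is new enough for this runtime.\n");
    else
        printf("Driver too old for this runtime: upgrade it first.\n");
    return 0;
}
```

If the driver reports a version lower than the runtime's, CUDA programs typically fail at startup with an "insufficient driver" error, regardless of whether the driver was installed from a standard or a devdriver package.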