Turning on DP for Nvidia Titan on Headless Server?

Hi all,

I was wondering if anyone has figured out how to turn on double-precision performance for the new Titan card without going through nvidia-settings, which, to the best of my knowledge, requires an X server.

I tested and am getting 2.077 teraflops single precision on the nbody test (numbodies=229376) but only 178 GFLOPS double precision. Considering that some members here were getting numbers in the 800s, this suggests the card is running in the typical gamer's Kepler 1/24 DP regime rather than the 1/3 rate the Titan offers.
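For anyone wanting to reproduce this, the flags below are the standard ones from the CUDA samples nbody benchmark (shown commented out since they need a GPU); the awk line just restates the arithmetic on the numbers above:

```shell
# Reproducing the measurement with the CUDA samples nbody benchmark:
#   ./nbody -benchmark -numbodies=229376         # single precision
#   ./nbody -benchmark -numbodies=229376 -fp64   # double precision
# The measured SP:DP ratio, nowhere near the 3:1 of full-rate DP mode:
awk 'BEGIN { printf "SP:DP ratio = %.1f : 1\n", 2077/178 }'
```

With DP boost enabled (the numbers in the 800s mentioned above), the ratio would come out closer to the expected 3:1.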

I found this guide, https://sites.google.com/site/akohlmey/random-hacks/nvidia-gpu-coolness, which shows how to fake an X server so you can use nvidia-settings, but it seems a bit complex and super hacky for something that should be a simple setting.
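For reference, the trick in that guide boils down to a minimal xorg.conf that claims a monitor is connected, so X (and hence nvidia-settings) will start on a headless box. A rough, untested sketch - the BusID is a placeholder you'd adjust from lspci:

```
Section "Device"
    Identifier  "Titan0"
    Driver      "nvidia"
    BusID       "PCI:1:0:0"                 # adjust to your card (see lspci)
    Option      "ConnectedMonitor" "DFP-0"  # pretend a monitor is attached
EndSection
```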

I mean, I know there is an NVreg parameter to set PCIe speeds to 3.0. Is there something similar for double-precision performance, or is there some other way to do this?

In case it matters, I'm running Ubuntu 12.10 Server on a Sandy Bridge-E 3930K and an Asus P9X79 Deluxe motherboard with two EVGA Signature Titans.

Thanks in advance for your help!

Ideally, DP mode could be enabled via nvidia-smi or, alternatively, via a kernel module parameter. nvidia-smi already has an option for this on the Tesla K20/K20X, but NVIDIA has not added support for the Titan. It would be handy if NVIDIA ported some of the nvidia-settings features to nvidia-smi for GeForce cards, so that those of us who run CUDA or OpenCL without X installed can still control the features of our cards.
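For context, this is what the Tesla-only knob looks like (mode values as documented in nvidia-smi's help; shown commented out since GeForce cards refuse it, as the failed attempt below demonstrates):

```shell
# GPU Operation Mode on Tesla K20/K20X via nvidia-smi (root required):
#   sudo nvidia-smi -i 0 --gom=0   # ALL_ON  (graphics + full DP)
#   sudo nvidia-smi -i 0 --gom=1   # COMPUTE (compute-focused)
#   sudo nvidia-smi -i 0 --gom=2   # LOW_DP  (graphics, reduced DP)
# The new mode takes effect on the next reboot.
```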

I assume you are talking about the --gom option? If you try it with a Titan, you get this:

sudo nvidia-smi -i 0 --gom=1
GOM features not supported for GPU 0000:01:00.0.
Treating as warning and moving on.
All done.

Man, NVIDIA, you could at least be classier about something you should be doing in the first place - particularly when you enable high-performance double precision via the front door and run ads about how the Titan is the perfect CUDA dev card.

I look forward to them fixing this “glitch”.

I was trying to figure out how it's set in Windows, but I couldn't track it down with Procmon… though maybe I wasn't looking closely enough. I'm guessing it's somewhere in the registry, because when I flash a different BIOS I have to re-enable it; otherwise it persists through reboots. I looked at nv-reg.h in the latest driver that supports the Titan and didn't see any NVreg parameter that sets DP support. I'd like to extend this as a kernel module option feature request as well. :)
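One headless way to double-check that nv-reg.h finding, assuming the driver is installed: the module itself lists the NVreg options it exposes.

```shell
# List every NVreg option the installed nvidia kernel module exposes;
# the output mirrors nv-reg.h, and nothing DP-related shows up:
modinfo nvidia | grep '^parm:'
```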

Edit: it just occurred to me that, just as there is source for the kernel driver, there might be source for nvidia-settings? Presumably the source code will tell you point-blank how to enable it.

Edit 2: There is… ftp://download.nvidia.com/XFree86/nvidia-settings/
Looking through parse.c in the src directory, I see:

More as I go through it…

In src/libXNVCtrl/NVCtrl.h:

/*
 * NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_IMMEDIATE
 * Some GPUs can make a tradeoff between double-precision floating-point
 * performance and clock speed.  Enabling double-precision floating point
 * performance may benefit CUDA or OpenGL applications that require high
 * bandwidth double-precision performance.  Disabling this feature may benefit
 * graphics applications that require higher clock speeds.
 *
 * This attribute is only available when toggling double precision boost
 * can be done immediately (without need for a reboot).
 */
#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_IMMEDIATE            395 /* RW-G */
#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_IMMEDIATE_DISABLED     0
#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_IMMEDIATE_ENABLED      1

/*
 * NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_REBOOT
 * Some GPUs can make a tradeoff between double-precision floating-point
 * performance and clock speed.  Enabling double-precision floating point
 * performance may benefit CUDA or OpenGL applications that require high
 * bandwidth double-precision performance.  Disabling this feature may benefit
 * graphics applications that require higher clock speeds.
 *
 * This attribute is only available when toggling double precision boost
 * requires a reboot.
 */

#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_REBOOT              396 /* RW-G */
#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_REBOOT_DISABLED       0
#define NV_CTRL_GPU_DOUBLE_PRECISION_BOOST_REBOOT_ENABLED        1

You can do…

In my case, gpu:0 is driving my display and gpu:1 is the GTX Titan… yours might be gpu:0 (or some other number). Of course, that still means you're dependent on nvidia-settings, but it does work to set/unset it - I tried it.
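For the record, the invocation would presumably look something like this - the attribute string is inferred from the NVCtrl.h name above via nvidia-settings' usual naming convention, so treat it as unverified (shown commented out since it needs a running X server):

```shell
# Requires a running X server (DISPLAY set); gpu index as appropriate:
#   nvidia-settings -a [gpu:1]/GPUDoublePrecisionBoostImmediate=1   # enable
#   nvidia-settings -q [gpu:1]/GPUDoublePrecisionBoostImmediate     # verify
#   nvidia-settings -a [gpu:1]/GPUDoublePrecisionBoostImmediate=0   # disable
```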

Thanks vacaloca! I downloaded the source - this is very interesting (cool to see all the other parameters you can set/query as well). I might be a bit dense here, but this would still require me to set up an X server to use nvidia-settings, right?

I believe so, yes… you'd still need an X server. There may or may not be a way to decouple that particular setting, but since that code is so interdependent on various includes/libraries, it's hard to track down which routine is actually called to set the flag, what it depends on, and where the value is stored. Then again, I only looked at it for a bit, so perhaps someone else can comment.

The actual setting change is done in src/gtk+-2.x/ctkpowermizer.c, for anyone who's curious to see whether this DP toggle can be isolated.