Ladies & Gentlemen around here,

I am currently interested in implementing double precision emulation in CUDA; not fully, but limited to subtraction only. That is, I have two double precision numbers, say x and y, and I need their difference z = x-y. The numbers x & y *must* be in double precision, but the difference can be stored in single precision. That way, I do not lose significant digits in z.

As we all know, CUDA currently does not support double precision arithmetic, but for some scientific calculations it is crucial to perform certain operations, especially differences between nearly equal large numbers, in double precision; otherwise, catastrophic cancellation makes the result almost entirely error! Fortunately, not all of the code has to be double precision, and most of the time these differences can be stored as single precision numbers (though the actual subtraction must be computed in double precision, and the variables whose difference is required must also be held in double precision). The performance impact should be small as long as the CUDA kernel is bandwidth bound.

I am aware of the SoftFloat library (http://www.jhauser.us/arithmetic/SoftFloat.html) for the CPU, and I was wondering if anybody has experience with emulating double precision in CUDA.

Not much more to add, but any comments & suggestions are welcome.

Cheers,

Evgheni