Hi to both of you,
I am a beginner in CUDA. I want to test how double precision works.
I have written a code similar to the one before. It just computes the FFT of a signal and then scales it.
To compute the FFT, I use a vector of cufftDoubleComplex, which reads its values from a file stored in ASCII:
cufftDoubleComplex* h_t;
char c[40];
int iReadValues = 0;
FILE* fp;

h_t = (cufftDoubleComplex*)malloc(iMuestras * sizeof(cufftDoubleComplex));
fp = fopen(".../datos_entrada/senal_pruebaB.txt", "r");
while (fgets(c, 40, fp) != NULL)
{
    h_t[iReadValues].x = atof(c); // real part read from the file
    h_t[iReadValues].y = 0;       // imaginary part set to zero
    iReadValues++;
}
I reserve memory on the device…
… then…
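(The reservation step is just the standard allocate-and-copy pattern, something like the following; names and sizes here match the snippets above, with error checking omitted:)

```cuda
// Allocate device memory for the signal and copy the host data over
// (new_size and h_t as above; error checking omitted for brevity).
cufftDoubleComplex* d_signal;
cudaMalloc((void**)&d_signal, new_size * sizeof(cufftDoubleComplex));
cudaMemcpy(d_signal, h_t, new_size * sizeof(cufftDoubleComplex),
           cudaMemcpyHostToDevice);
```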
// CUFFT plan
cufftHandle plan;
cufftSafeCall(cufftPlan1d(&plan, new_size, CUFFT_Z2Z, 1));
cufftSafeCall(cufftExecZ2Z(plan, (cufftDoubleComplex *)d_signal, (cufftDoubleComplex *)d_signal, CUFFT_FORWARD));
I write the Makefile like this:
# Add source files here
EXECUTABLE := test
# CUDA source files (compiled with cudacc)
CUFILES := test.cu
# Flags to compile in double precision
CFLAGS='-arch sm_13'
# C/C++ source files (compiled with gcc / c++)
CCFILES :=
# Additional libraries needed by the project
USECUFFT := 1
I compile with make; it compiles fine and the application computes the FFT perfectly.
HOWEVER, when I introduce the scaling function:
main:
int block_size=128;
dim3 dimBlock(block_size,1);
dim3 dimGrid ( (new_size/dimBlock.x) + (!(new_size%dimBlock.x)?0:1) , 1);
// Scale the result
ComplexPointwiseMulAndScale<<<dimGrid, dimBlock>>>(d_signal, 1.0 / new_size);
Function:
// Scale FFT values
static __global__ void ComplexPointwiseMulAndScale(cufftDoubleComplex* a, double scale)
{
    const int threadID = blockIdx.x * blockDim.x + threadIdx.x;
    a[threadID].x = a[threadID].x * scale;
    a[threadID].y = a[threadID].y * scale;
}
When I compile, the compiler says:
ptxas /tmp/tmpxft_00004bcb_00000000-2_convolutionOVS.ptx, line 70; warning : Double is not supported. Demoting to float
and the application does not work.
I am using a Tesla C1060 with compute capability 1.3 and CUDA 2.3. It should work, shouldn't it? What should I change?
Thank you very much,
jabelloch