tex1Dfetch with unsigned long error: no instance of overloaded function "tex1Dfetch" matches

I’m having problems porting a program from Windows to Linux. Everything compiles and runs fine on Windows, but when I try it on Linux I get:

error: no instance of overloaded function "tex1Dfetch" matches

The code I’m running:

[codebox]texture<unsigned long, 1, cudaReadModeElementType> k_tex;

cudaMalloc((void **) &k_d, sizeof(unsigned long) * m);
// … Copy data to k_d

cudaBindTexture(0, k_tex, k_d, sizeof(unsigned long) * m);

// … In kernel
k = tex1Dfetch(k_tex, tid);[/codebox]

Are unsigned longs only supported on Windows and not on Linux?

Use this code…

It’s compiling and working fine on my 8400GS with CUDA 2.2:

[codebox]#include <cstdio>
#include <cutil_inline.h>   // SDK helper header (cutilDeviceInit / cutilExit)

texture<unsigned long, 1, cudaReadModeElementType> hTexLong;

__global__ void TestKernel1( unsigned long* array1, int limit )
{
    int idx = __umul24(blockIdx.x, blockDim.x) + threadIdx.x;

    if ( idx < limit )
    {
        int index = tex1Dfetch(hTexLong, idx);
    }
}

//==============
// main()
//==============
int main( int argc, char** argv)
{
    cutilDeviceInit(argc, argv);

    unsigned long* dArray1 = NULL;
    int width  = 2400;
    int height = 1800;
    TIMEPARAMS;   // TIMEPARAMS / STARTTIME / STOPTIME / exeTime are timing macros defined elsewhere

    cudaMalloc( (void**)&dArray1, sizeof(int)*width*height );
    cudaMemset( dArray1, 0, sizeof(int)*width*height );
    cudaBindTexture(0, hTexLong, dArray1, sizeof(int)*width*height);

    dim3 grid1( ((width*height)+255)/256, 1, 1 );
    dim3 block1( 256, 1, 1 );

    STARTTIME;
    TestKernel1<<<grid1,block1>>>(dArray1, width*height);
    cudaError error = cudaThreadSynchronize();
    STOPTIME;

    printf("\n TestKernel() exe time: %2f <ms> \n", exeTime );

    cudaUnbindTexture(hTexLong);
    cudaFree(dArray1);

    cutilExit(argc, argv);
}[/codebox]

I still get the exact same error. Do you have this problem if you run the kind of code I first posted?

This question is quite old, but I found this post at the top of the list when I searched for an answer to the same problem, so answering it may help others: unsigned long has different sizes on 64-bit Windows (4-byte unsigned long) and 64-bit Linux (8-byte unsigned long). Since the GPU texture hardware only supports 32-bit element types, the unsigned long texture functions are disabled when compiling under 64-bit Linux. Using unsigned int instead resolves the problem, as unsigned int is 32 bits (4 bytes) on both systems.
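For example, the snippet from the first post works on both systems once the element type is switched to unsigned int. A minimal sketch, assuming k_d, m, and tid are declared as in the original code:

[codebox]// Sketch of the fix: declare the texture and the buffer with unsigned int,
// which is 4 bytes on both 64-bit Windows and 64-bit Linux.
texture<unsigned int, 1, cudaReadModeElementType> k_tex;

// Host side (k_d and m as in the original snippet):
unsigned int *k_d = NULL;
cudaMalloc((void **) &k_d, sizeof(unsigned int) * m);
// … copy data to k_d …
cudaBindTexture(0, k_tex, k_d, sizeof(unsigned int) * m);

// In the kernel (tid as in the original snippet):
unsigned int k = tex1Dfetch(k_tex, tid);[/codebox]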

How did I find this? Since the SDK include file texture_fetch_functions.h contains an unsigned long tex1Dfetch function definition on Linux as well, I compiled with nvcc -E to see the preprocessor output. The unsigned long tex1Dfetch function was missing on Linux while the unsigned int tex1Dfetch was still there. This led me to the #if !defined(__LP64__) statement in texture_fetch_functions.h, which gave me the final clue for the solution.
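To see the difference directly, here is a small host-side check (just an illustration, not from the SDK) that can be compiled with nvcc or g++ on both systems:

[codebox]#include <cstdio>

// On 64-bit Linux (LP64) unsigned long is 8 bytes and __LP64__ is defined,
// so the unsigned long tex1Dfetch overloads are compiled out; on 64-bit
// Windows (LLP64) unsigned long stays at 4 bytes and they remain available.
int main()
{
    printf("sizeof(unsigned int)  = %d bytes\n", (int)sizeof(unsigned int));
    printf("sizeof(unsigned long) = %d bytes\n", (int)sizeof(unsigned long));
#if defined(__LP64__)
    printf("__LP64__ is defined: unsigned long tex1Dfetch is unavailable\n");
#else
    printf("__LP64__ is not defined: unsigned long tex1Dfetch is available\n");
#endif
    return 0;
}[/codebox]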
