Pinned memory with textures

Hi,

Has anyone tried to use textures with Pinned memory?

I get an “invalid device pointer” error from cudaBindTexture when I do this:

texture<float2, 1, cudaReadModeElementType> tex_LargeFloat2;

cudaSetDevice(1);
cudaSetDeviceFlags(cudaDeviceMapHost);

unsigned int flags = cudaHostAllocMapped;
float *fData_h = NULL;  // host pointer (declaration was missing from the snippet)
float *fData_d = NULL;

cudaHostAlloc((void **)&fData_h, iSize * sizeof(float), flags);
cudaThreadSynchronize();

cudaHostGetDevicePointer((void **)&fData_d, (void *)fData_h, 0);
cudaThreadSynchronize();

cudaBindTexture(0, tex_LargeFloat2, fData_d, iSize);
cudaThreadSynchronize();

Thanks

eyal

You cannot bind a texture to host mapped memory.

It can be done by allocating mapped pinned memory (cudaHostAlloc() with the cudaHostAllocMapped flag; you must have called cudaSetDeviceFlags() with cudaDeviceMapHost first), and then binding the texture to the device pointer you obtain via cudaHostGetDevicePointer().
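A minimal sketch of that sequence, for reference (error checking omitted; the texture name, kernel, and N are hypothetical, and note that cudaBindTexture takes a size in bytes):

```cuda
#include <cuda_runtime.h>

texture<float, 1, cudaReadModeElementType> texRef;

// Hypothetical kernel that reads the mapped allocation through the texture path.
__global__ void readViaTexture(float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch(texRef, i);
}

int main()
{
    const int N = 1 << 20;
    float *h_ptr = NULL, *d_ptr = NULL;

    // Must be set before the context is created (i.e. before any CUDA call
    // that touches the device):
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Mapped pinned allocation, visible to the device:
    cudaHostAlloc((void **)&h_ptr, N * sizeof(float), cudaHostAllocMapped);

    // Device-side alias of the same host allocation:
    cudaHostGetDevicePointer((void **)&d_ptr, (void *)h_ptr, 0);

    // Bind the texture to the mapped device pointer; size is in bytes:
    cudaBindTexture(0, texRef, d_ptr, N * sizeof(float));

    return 0;
}
```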

The performance is worse than reading mapped pinned memory via load/store, though.
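For comparison, the load/store path mentioned above just dereferences the mapped device pointer directly in a kernel (hypothetical kernel, no texture involved):

```cuda
// Reads the mapped pinned allocation through plain loads over the bus,
// which the reply above reports as the faster path.
__global__ void readViaLoad(const float *mapped, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = mapped[i];
}
```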

Interesting, that is exactly what eyal is trying.

While I cannot find the post now, I’m positive tmurray said that this would not work…

Yes, that’s exactly what I’m doing… nwilt, any idea what’s wrong there?

In any case, if the performance is worse, then I guess it’s out of the question…

Pity, it could have saved me a lot of work. Now I have to break the huge data into smaller

datasets instead of accessing it with the texture.

eyal