Hello! Probably my question is stupid, but I can't bind a texture to any linear memory (1D allocated with cudaMalloc or 2D allocated with cudaMallocPitch).
Only binding to a CUDA array works correctly.
The examples from the 4.0 documentation for linear memory don't work for me either. My OS is Ubuntu 10.10, 64-bit.
The code is:
#include <cuda_runtime.h>
#include <cutil_inline.h>
float *devPtr;
size_t size = 64 * sizeof(float);
CUDA_SAFE_CALL(cudaMalloc((void **) &devPtr, size));
texture<float, cudaTextureType1D, cudaReadModeElementType> texRef;
cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
cudaError_t err=cudaDeviceSynchronize();
printf("Error before bind: error code=%d (%s)\n", err, cudaGetErrorString(err));
err = cudaBindTexture(NULL, &texRef, devPtr, &channelDesc, size);
printf("Error after bind: error code=%d (%s)\n", err, cudaGetErrorString(err));
Result is:
Error before bind: error code=0 (no error)
Error after bind: error code=18 (invalid texture reference)
If I call cudaBindTexture with the same parameters as in the documentation (just texRef, not &texRef), then the compiler gives an error:
no instance of overloaded function "cudaBindTexture" matches the argument list
argument types are: (long, texture<float, 1, cudaReadModeElementType>, float *, cudaChannelFormatDesc *, size_t)
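For reference, here is a complete version of the documentation-style call I was trying to follow, with the texture reference at file scope and the high-level cudaBindTexture overload (this is my reconstruction, so it may not match the manual word for word):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Texture references must be declared at file scope.
texture<float, cudaTextureType1D, cudaReadModeElementType> texRef;

int main()
{
    float *devPtr = NULL;
    size_t size = 64 * sizeof(float);
    cudaMalloc((void **)&devPtr, size);

    // High-level API: texRef is passed directly (not by pointer),
    // and the channel descriptor is taken from the reference itself,
    // so no cudaChannelFormatDesc argument is needed.
    cudaError_t err = cudaBindTexture(NULL, texRef, devPtr, size);
    printf("bind: %d (%s)\n", err, cudaGetErrorString(err));

    cudaFree(devPtr);
    return 0;
}
```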
What is wrong with my code? Any hint is appreciated.
Thanks.