texture memory binding gives invalid argument error

I am trying to bind a 1D texture to a device memory location, but I am getting the
following error.

texture<float, 1> tex_scratch;
float *d_scratch;
cudaMalloc(&d_scratch, sizeof(float) * m); // m can be any positive value > 1

When I use cudaBindTexture(NULL, tex_scratch, d_scratch), it works fine.

But if I try to bind the texture at some offset n into the d_scratch device memory
(cudaBindTexture(NULL, tex_scratch, d_scratch + n), where 1 < n < m), then
it fails with the error "invalid argument".

Does texture memory require the device memory address to meet some alignment boundary
in order to bind to it properly?
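For what it's worth, the device's texture base-address alignment requirement can be queried at runtime from cudaDeviceProp. A minimal sketch (assuming device 0; addresses that don't meet this alignment can still be bound, but only if you pass a non-NULL offset pointer to cudaBindTexture()):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    // textureAlignment is the alignment (in bytes) that a texture's
    // base address must satisfy to bind with zero offset.
    printf("texture alignment: %zu bytes\n", prop.textureAlignment);
    return 0;
}
```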

Here is some code I experimented with, and it worked fine:
int *devIn = NULL;
int elements = 20;
cudaMalloc((void **)&devIn, sizeof(int) * elements * 2);
cudaMemcpy(devIn, hostIn, sizeof(int) * elements * 2, cudaMemcpyHostToDevice);
size_t offset0;
cudaBindTexture(&offset0, texRef1, devIn, sizeof(int) * elements);
size_t offset1;
cudaBindTexture(&offset1, texRef2, devIn + elements, sizeof(int) * elements);
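If the returned offset is non-zero, it has to be folded into the fetch index; the offset is reported in bytes, so divide by the element size. A minimal sketch of the kernel side, assuming the bindings above (the names devOut, blocks, and threads are illustrative):

```cuda
texture<int, 1, cudaReadModeElementType> texRef2;

__global__ void readKernel(int *out, size_t offsetElems, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // tex1Dfetch() indexes from the texture's aligned base address,
        // so the byte offset returned by cudaBindTexture() must be
        // converted to elements and added to the index.
        out[i] = tex1Dfetch(texRef2, (int)offsetElems + i);
    }
}

// Host side: convert the byte offset to an element count before launching.
// readKernel<<<blocks, threads>>>(devOut, offset1 / sizeof(int), elements);
```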

Thanks, the problem was that I was passing NULL in the offset argument of cudaBindTexture().
Your other post at http://forums.nvidia.com/index.php?showtopic=95208 was also very helpful for
using tex1Dfetch() properly (cudaBindTexture() was returning a finite positive value for the offset).