Help! Problem with new SDKs and drivers

Hi,

I've bumped into a serious issue with my CUDA program. I started development about a year ago on SDK 2.2 and driver 185.x, and finally got the program working as it should.
Since the SDK and driver I built it with are now a bit old, I tried running the same software on a newer driver (256.x) before releasing it. It no longer runs and fails with "invalid argument". I thought the SDK I used might simply be too old, so I tried 2.3 through 3.0 as well, but those only work partially.

Could anyone shed some light on what might have gone wrong?

Many thanks!

Brian

Found the cause of the problem: it's the texture binding. Here is the code:

texture<uint4, 1, cudaReadModeElementType> bwt_occ_array;

	// Allocate device memory for the BWT string
	CUDA_SAFE_CALL(cudaMalloc((void**)bwt, bwt_src->bwt_size*sizeof(unsigned int)));

	// Copy the BWT string from host to device
	CUDA_SAFE_CALL(cudaMemcpy(*bwt, bwt_src->bwt, bwt_src->bwt_size*sizeof(unsigned int), cudaMemcpyHostToDevice));

	// Bind the device buffer *bwt to the texture reference bwt_occ_array
	CUDA_SAFE_CALL(cudaBindTexture(0, bwt_occ_array, *bwt, bwt_src->bwt_size*sizeof(unsigned int)));

This works perfectly fine with the old 185.18.36 driver. With the newer drivers I cannot get it to work with a large BWT (1.3 GB), while smaller BWTs (up to about 300 MB) are fine.
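In case it helps anyone reproduce or debug this, below is a minimal, self-contained sketch of the same allocate/copy/bind sequence with explicit error reporting, so you can see exactly which call returns "invalid argument". The texture declaration and call order follow my snippet above; the checkCuda helper, the bind_bwt wrapper and its parameter names are made up for the example and are not part of my real code.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical error-checking helper (stands in for the SDK's CUDA_SAFE_CALL,
// which lives in cutil and is not part of the runtime itself).
static void checkCuda(cudaError_t err, const char *what)
{
    if (err != cudaSuccess) {
        fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
        exit(EXIT_FAILURE);
    }
}

// Texture reference as in the snippet above (fetches uint4 elements).
texture<uint4, 1, cudaReadModeElementType> bwt_occ_array;

// bwt_size is the number of unsigned ints in the BWT string, as above.
void bind_bwt(unsigned int **bwt, const unsigned int *host_bwt, size_t bwt_size)
{
    size_t bytes = bwt_size * sizeof(unsigned int);
    printf("binding %zu bytes (%zu uint4 texels)\n", bytes, bytes / sizeof(uint4));

    checkCuda(cudaMalloc((void **)bwt, bytes), "cudaMalloc");
    checkCuda(cudaMemcpy(*bwt, host_bwt, bytes, cudaMemcpyHostToDevice), "cudaMemcpy");
    checkCuda(cudaBindTexture(0, bwt_occ_array, *bwt, bytes), "cudaBindTexture");
}

On my machine it is the cudaBindTexture call that fails for the 1.3 GB case.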

Hi Brian and hi all!

It seems that I have run into the same issue. I started developing my CUDA application on the CUDA driver & toolkit 2.0 or 2.1, and it works perfectly on CUDA up to version 2.3, so I am fairly confident there is no bug in the application itself.
After upgrading the CUDA driver & toolkit to 3.0/3.1/3.2, however, it returns "invalid argument" errors. My observation is that these errors occur when the program uses a lot of global memory (texture usage does not seem to be the cause in my case). Of course I don't exceed the amount of memory available; as I said, the program itself is correct.

Has anybody come across the same or a similar problem when upgrading to CUDA 3.0 or newer? Maybe someone from NVIDIA could explain the cause of the problem, or at least point out possible reasons?

Best regards,
Michał

The answer appears to be quite simple: the size of a single texture can't exceed 2^27 elements.

In my application one array can be pretty large, but CUDA 2.x allowed binding such huge textures; errors occurred only when the data was actually fetched (e.g. via tex1Dfetch), which I knew about. CUDA 3.x, on the other hand, seems to reject even the cudaBindTexture call when there are too many elements.
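To turn the bare "invalid argument" into an explicit diagnostic, one option is to check the element count against that documented 2^27 limit for 1D textures bound to linear memory before calling cudaBindTexture. A rough sketch, assuming Brian's uint4 texture reference; the constant name, the bind_bwt_checked wrapper and its parameters are only illustrative:

#include <cstdio>
#include <cuda_runtime.h>

// Documented limit for a 1D texture bound to linear memory: 2^27 elements (texels).
static const size_t MAX_TEX1D_LINEAR_ELEMENTS = (size_t)1 << 27;

texture<uint4, 1, cudaReadModeElementType> bwt_occ_array;

// Refuses to bind (and leaves the texture reference untouched) if the buffer
// holds more texels than a 1D linear texture can address.
cudaError_t bind_bwt_checked(const unsigned int *dev_bwt, size_t bytes)
{
    size_t texels = bytes / sizeof(uint4);   // bwt_occ_array fetches uint4 elements
    if (texels > MAX_TEX1D_LINEAR_ELEMENTS) {
        fprintf(stderr, "BWT too large for one texture: %zu texels (limit %zu)\n",
                texels, MAX_TEX1D_LINEAR_ELEMENTS);
        return cudaErrorInvalidValue;
    }
    return cudaBindTexture(0, bwt_occ_array, dev_bwt, bytes);
}

If the data really is larger than the limit, it has to be split across several textures (or fetched directly from global memory) rather than bound as one.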

Michał
