Exceeding number of threads/block in a kernel

Folks,
I am new to CUDA, so please forgive me if my question is silly.

I am running a basic program and evaluating the execution time of a kernel through events and cudaEventElapsedTime.
This is what I do:


cudaEventRecord(start, 0);
square_array<<<1, block_size>>>(A, N);   // one block of block_size threads
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);              // wait for the kernel to finish before reading the timer
cudaEventElapsedTime(&time, start, stop);

I have an NVIDIA GeForce GT 120, which supports up to 512 threads per block (block_size).
I want to know whether the program gives any warning or error code, other than simply not executing the kernel, when I set block_size to values like 1024, 2048, or 4096.

Thanks for your help

No, it doesn't. But cudaGetLastError() would tell you. Or use cudaGetErrorString(cudaGetLastError()) to get human-readable output.
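
A minimal sketch of where the check goes (the kernel body and sizes below are placeholders, not your original program): on a device limited to 512 threads per block, launching with block_size = 1024 typically makes cudaGetLastError() return cudaErrorInvalidConfiguration ("invalid configuration argument").

#include <cstdio>
#include <cuda_runtime.h>

__global__ void square_array(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = a[i] * a[i];
}

int main(void)
{
    const int N = 512;
    const int block_size = 1024;          // deliberately larger than a 512-thread limit
    float *A;
    cudaMalloc(&A, N * sizeof(float));

    square_array<<<1, block_size>>>(A, N);

    cudaError_t err = cudaGetLastError(); // reports launch-configuration errors
    if (err != cudaSuccess)
        printf("Kernel launch failed: %s\n", cudaGetErrorString(err));

    err = cudaDeviceSynchronize();        // reports errors that occur during execution
    if (err != cudaSuccess)
        printf("Kernel execution failed: %s\n", cudaGetErrorString(err));

    cudaFree(A);
    return 0;
}

Checking cudaGetLastError() immediately after the launch catches configuration problems like this one, while the error returned by the later synchronize call catches failures that happen while the kernel is running.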