Where do I check the number of blocks and threads that are available on a Tesla C2050?

I am taking the online Accelerated Computing with CUDA course on the NVIDIA courses site.
I am using my own Tesla C2050/2075 card.

In the basic exercises, I am supposed to launch blocks and threads that run a __global__ function.

Where can I find the maximum number of blocks, and the maximum number of threads per block, that I can use when launching kernels on this card?

I found this

but I did not find the information there.


Check Robert Crovella's answers in this thread:

Refer to Table 14 in the CUDA C Programming Guide of any CUDA toolkit that supports that GPU, which is any release up through CUDA 8.0.

The documentation for CUDA toolkits back to 8.0 is available online on the legacy toolkits page:
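You can also query the limits at runtime with the CUDA runtime API instead of looking them up in the documentation (this is essentially what the deviceQuery sample that ships with the toolkit does). Below is a minimal sketch using `cudaGetDeviceProperties`; it assumes device 0 is the Tesla card and that you compile with `nvcc`. For a C2050 (compute capability 2.0) you should see 1024 threads per block and 65535 blocks per grid dimension.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query the properties of device 0 (assumed to be the Tesla card)
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("Device: %s (compute capability %d.%d)\n",
           prop.name, prop.major, prop.minor);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Max block dimensions:  %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("Max grid dimensions:   %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}
```

Note that `maxThreadsPerBlock` is the limit on the total threads in a block, so a block of 32 x 32 x 1 already uses all 1024 of them on that card.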