Where do I check the maximum number of blocks and threads available on a Tesla C2050?

I am doing the online Accelerated Computing CUDA course on the NVIDIA courses site.
I am using my own Tesla C2050/2075 card.

When writing the basic code, I am supposed to launch blocks and threads that run a __global__ function.

Where can I find the maximum number of blocks, and the maximum number of threads per block, that I can launch on this card?

I found this datasheet:
https://www.nvidia.com/docs/IO/43395/NV_DS_Tesla_C2050_C2070_jul10_lores.pdf
but it does not contain that information.

thanks

Check Robert Crovella’s answers in this thread:
[url]https://devtalk.nvidia.com/default/topic/978550/cuda-programming-and-performance/maximum-number-of-threads-on-thread-block/[/url]

Refer to table 14 in the programming guide for any CUDA toolkit that supports that GPU, which is up through CUDA 8.0.

The documentation for CUDA toolkits back to 8.0 is available online on the legacy toolkits page:

[url]https://developer.nvidia.com/cuda-toolkit-archive[/url]
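
If you prefer to query the limits programmatically, the CUDA runtime exposes them through cudaGetDeviceProperties() (the deviceQuery sample that ships with the toolkit prints the same information). Below is a minimal sketch, assuming device 0 is the C2050; compile with nvcc:

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);   // query device 0
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("Device: %s (compute capability %d.%d)\n", prop.name, prop.major, prop.minor);
    std::printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    std::printf("Max block dims:        %d x %d x %d\n",
                prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    std::printf("Max grid dims:         %d x %d x %d\n",
                prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}
[/code]

On a compute capability 2.0 device such as the C2050 this should report 1024 threads per block and grid dimensions of 65535 in each direction, matching the programming guide table.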