From driver_types.h, we have the following for cudaDeviceProp:
int maxTexture1D; ///< Maximum 1D texture size
int maxTexture2D[2]; ///< Maximum 2D texture dimensions
int maxTexture3D[3]; ///< Maximum 3D texture dimensions
int maxTexture2DArray[3]; ///< Maximum 2D texture array dimensions
Outside of these comments, these fields do not appear to be documented anywhere else (at least a search doesn’t turn them up in the programming guide; they do appear in the reference guide, where the Doxygen-generated lines from these comments end up).
Given its similarity to maxTexture3D, one assumes that maxTexture2D gives the maximum dimensions of a 2D texture bound to a cudaArray. This checks out: the value is [65536 32768] on my compute 1.1 machine, which matches the value in the programming guide.
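For what it’s worth, these fields can be read at runtime with cudaGetDeviceProperties, so anyone can check the limits on their own card. A minimal sketch (device 0 assumed):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query device 0; adjust the index for multi-GPU systems.
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("maxTexture1D:      %d\n", prop.maxTexture1D);
    printf("maxTexture2D:      [%d %d]\n",
           prop.maxTexture2D[0], prop.maxTexture2D[1]);
    printf("maxTexture3D:      [%d %d %d]\n",
           prop.maxTexture3D[0], prop.maxTexture3D[1], prop.maxTexture3D[2]);
    printf("maxTexture2DArray: [%d %d %d]\n",
           prop.maxTexture2DArray[0], prop.maxTexture2DArray[1],
           prop.maxTexture2DArray[2]);
    return 0;
}
```

Compile with nvcc and run; the printed values should line up with the tables in the programming guide for your compute capability.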
What is curious (and confusing) is that there is also a maxTexture2DArray, and it has three indices! On my compute 1.1 card it is [8192 8192 512]. At first I confused it with 2D textures bound to a cudaArray (in which case maxTexture2D would have to be what you can bind with cudaBindTexture2D), but the values are too small, and 2D textures do not have three indices.
CUDA by Example sheds a little light on it: “The maximum dimensions supported for 2D texture arrays.” Hmmm… CUDA doesn’t have texture arrays yet; maybe this is a hint that they are on their way?