I found that when launching a kernel that accesses a texture, the limit on the block size changes if the texture size changes.
Output from compiling the kernel:
    ptxas info : Compiling entry function 'myKernel' for 'sm_21'
    ptxas info : Function properties for myKernel
        0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
    ptxas info : Used 51 registers, 144+0 bytes smem, 168 bytes cmem, 768 bytes cmem, 8 bytes cmem
When working on a 128x128x128 texture, the kernel launches fine with a block size of (32, 20, 1), which I chose based on register usage. However, with a larger texture (256 or 512 in each dimension), I have to reduce the block size to (32, 18, 1) or I get an "out of resources" launch error (reported via pycuda).
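For reference, the register arithmetic behind the (32, 20, 1) figure can be sketched in plain Python. This is a simplification (the hardware actually allocates registers with per-warp granularity, which can only lower the limit further); the 51 registers per thread and the 32768-register-per-block limit are taken from the ptxas output and device info in this post:

```python
# Back-of-the-envelope estimate of the register-limited block size.
# Figures come from the ptxas output ("Used 51 registers") and the
# deviceQuery listing (32768 registers per block, warp size 32).
REGS_PER_BLOCK = 32768
REGS_PER_THREAD = 51
WARP_SIZE = 32

# Largest whole-warp thread count whose total register demand fits in a block.
max_threads = (REGS_PER_BLOCK // REGS_PER_THREAD) // WARP_SIZE * WARP_SIZE
print(max_threads)  # 640, i.e. a (32, 20, 1) block
```

By this estimate the texture size plays no role at all, which is why the failure at (32, 20, 1) with larger textures is surprising.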
What are the factors I should consider when determining block size?
This is on Ubuntu 11.04, CUDA 4.0.17.
My device info:
    Device 0: "GeForce GTX 560 Ti"
      CUDA Driver Version / Runtime Version          4.0 / 4.0
      CUDA Capability Major/Minor version number:    2.1
      Total amount of global memory:                 2047 MBytes (2146631680 bytes)
      ( 8) Multiprocessors x (48) CUDA Cores/MP:     384 CUDA Cores
      GPU Clock Speed:                               1.64 GHz
      Memory Clock rate:                             2004.00 Mhz
      Memory Bus Width:                              256-bit
      L2 Cache Size:                                 524288 bytes
      Max Texture Dimension Size (x,y,z)             1D=(65536), 2D=(65536,65535), 3D=(2048,2048,2048)
      Max Layered Texture Size (dim) x layers        1D=(16384) x 2048, 2D=(16384,16384) x 2048
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       49152 bytes
      Total number of registers available per block: 32768
      Warp size:                                     32
      Maximum number of threads per block:           1024
      Maximum sizes of each dimension of a block:    1024 x 1024 x 64
      Maximum sizes of each dimension of a grid:     65535 x 65535 x 65535
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             512 bytes
      Concurrent copy and execution:                 Yes with 1 copy engine(s)
      Run time limit on kernels:                     No
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Concurrent kernel execution:                   Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support enabled:                No
      Device is using TCC driver mode:               No
      Device supports Unified Addressing (UVA):      Yes
      Device PCI Bus ID / PCI location ID:           1 / 0
      Compute Mode:
        < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >