Dear All,
I am new to GPU computing and, to get started, I am studying the "Programming Massively Parallel Processors" book. I was going smoothly, but now I am stuck on the concept of thread scheduling. As far as I have studied, an SM has 8 SPs; my device is a GeForce 310. After checking its deviceQuery output, I found that it has 2 multiprocessors x 8 CUDA cores. I read in some other posts that CUDA cores are SPs (I am not sure about this, please reply on this point as well). Then I read that the warp size is 32 threads. What do they mean by a warp?
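For reference, here is how I read these numbers programmatically instead of through the deviceQuery sample. This is just a minimal sketch using the CUDA runtime API (error checking omitted; I am assuming device 0 is the GeForce 310):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query device 0

    // These fields correspond to the numbers in my question:
    printf("Multiprocessors (SMs): %d\n", prop.multiProcessorCount);
    printf("Warp size:             %d\n", prop.warpSize);
    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Max threads per SM:    %d\n", prop.maxThreadsPerMultiProcessor);
    return 0;
}
```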
In the book they give the example of the GT200, which can have up to 1024 threads per SM. In my device's case, 512 threads can be accommodated in a single block. Does that mean that in one SM of my device a total of 512 (threads) x 8 (blocks) = 4096 threads can be accommodated, out of which 32 threads (one warp) will execute a single instruction?
Please respond to these questions so that I can proceed further.
Looking forward to your replies.
Thanks