Maximum number of instructions per kernel

Hi,
first of all, sorry for my bad English.
I’ve just started learning about CUDA, and I read (on this page: Programming Guide :: CUDA Toolkit Documentation) that the “Maximum number of instructions per kernel” is 2 million or 512 million, depending on compute capability.
Could anyone tell me what this means?

For example, if I have over 512 million data elements on the GPU and I want each thread to do something with all of that data (for example, compute sum += data[i]*threadIdx.x; in a loop, as in the sketch below), will there be an error because there would be over 512 million instructions?
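Here is a minimal sketch of the kind of kernel I mean (the names data, out, and n are just examples; n could be larger than 512 million):

```cuda
// Every thread loops over the whole array and accumulates a sum.
__global__ void sumAll(const float *data, float *out, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += data[i] * threadIdx.x;   // the line from my question
    // One result per thread, e.g. indexed by the global thread id.
    out[blockIdx.x * blockDim.x + threadIdx.x] = sum;
}
```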

“Instructions per kernel” refers to the machine instructions in the compiled kernel, so it is roughly like lines of code. It does not matter how many threads execute an instruction in parallel, or how many times a loop repeats it; it is still one instruction. This limit mainly pertains to the size of the compiled code that can be passed to the GPU. It does not have anything to do with how much work the code does when it executes.
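To make that concrete, here is a sketch (the names are just illustrative) of a kernel whose loop can run hundreds of millions of iterations but which still compiles to only a handful of machine instructions, and it is that handful which counts toward the limit:

```cuda
// The loop body compiles to a few machine instructions (load, multiply,
// store, compare, branch). Those same instructions are simply executed
// n times, so the kernel's static instruction count stays tiny no matter
// how large n is.
__global__ void scale(float *data, size_t n, float factor)
{
    for (size_t i = 0; i < n; ++i)
        data[i] *= factor;
}
```

If you are curious, you can dump the compiled machine code (for example with cuobjdump and its SASS output) and count the instructions yourself; that static count is what the documented limit applies to, not the number of loop iterations or threads.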

So you are unlikely to run into this limit in “normal” or beginner programming.

Thanks a lot!