Here is an easy question ;-) I just want to make sure I understand the clock() function correctly.
If I want to know how much time the GPU spent processing some data, I surround the code with two calls to clock(), take the difference, and get the number of cycles used. Right?
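To be concrete, this is the pattern I have in mind (just a sketch; the kernel name and the placeholder "work" are made up):

```cuda
#include <cuda_runtime.h>

// Sketch of the timing pattern: read the SM cycle counter before and
// after the code of interest, store the difference per thread.
__global__ void timedKernel(float *data, clock_t *timings, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;

    clock_t start = clock();              // read the multiprocessor's cycle counter

    if (idx < n)                          // the code being timed (placeholder work)
        data[idx] = data[idx] * 2.0f + 1.0f;

    clock_t stop = clock();
    if (idx < n)
        timings[idx] = stop - start;      // elapsed cycles on this multiprocessor
}
```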
This number of cycles represents the number of cycles a multiprocessor went through between the start and the end of those lines of code. So it covers the cycles actually dedicated to this code, plus any scheduling overhead, plus any other threads (warps) that were scheduled on the multiprocessor interleaved with this thread. Right?
As these are GPU cycles, and the 8800GTX hot clock frequency is 1.35 GHz (i.e. 1350 cycles per microsecond), it means my code takes clock_difference/1350 microseconds to execute, right?
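Side note: rather than hard-coding 1350, I figure the clock rate can be queried on the host; a small sketch, where the 2700-cycle value is just a made-up example:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // prop.clockRate is in kHz; on an 8800 GTX it should report ~1,350,000.
    // cycles / (kHz / 1000) = cycles / MHz = microseconds
    long long cycles = 2700;  // example value: a measured clock() difference
    double micros = (double)cycles / (prop.clockRate / 1000.0);
    printf("%lld cycles at %d kHz = %f us\n", cycles, prop.clockRate, micros);
    return 0;
}
```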
Isn't there a function that reports the number of cycles dedicated to this thread alone?
Thanks for the confirmation and help ;-)