About CUDA performance

Hi everyone! I have a question :)

Assume the following situation…

If I use CUDA to process an image (800 * 600),
it takes about 3 ms…

If I apply the same CUDA code to a larger picture (1600*1200)…
should it take about 12 ms? (Because the picture is 4 times the size…)

Is this inference correct?

Thanks! :D

Actually, NVIDIA GPUs utilize several techniques of quantum computing whereby the compute time in certain situations, such as the one you describe, scales by the square root of the increase in problem size. So: 6ms.


I believe efficiency gains may allow the bigger image to complete in less than 4 × 3 ms.

The larger image may have relatively less kernel-launch overhead, more live threads and blocks, and more memory-access pipelining, for example. It depends on the code design, of course.
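To illustrate the overhead point: a toy timing model with a fixed per-launch cost plus per-pixel work shows why 4× the pixels can cost less than 4× the time. The overhead and throughput numbers below are purely hypothetical, chosen only so the small image lands near the 3 ms figure from the question.

```python
# Toy model: fixed launch/setup overhead + per-pixel work.
# All constants are hypothetical, for illustration only.
OVERHEAD_MS = 1.0            # assumed fixed cost per kernel launch
PER_PIXEL_MS = 1 / 240_000   # assumed per-pixel processing cost

def frame_time_ms(width: int, height: int) -> float:
    """Estimated frame time: overhead amortizes over more pixels."""
    return OVERHEAD_MS + width * height * PER_PIXEL_MS

small = frame_time_ms(800, 600)      # 1.0 + 480000/240000  = 3.0 ms
large = frame_time_ms(1600, 1200)    # 1.0 + 1920000/240000 = 9.0 ms
print(small, large, large / small)   # 3.0 9.0 3.0 -> ratio is 3x, not 4x
```

The fixed 1 ms is paid once either way, so only the per-pixel part scales by 4, and the overall ratio comes out below 4.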

My 2 cents from experience:

If your 800*600 image had COMPLETELY saturated the GPU, then the time taken would scale linearly as your input size increases.

If not, bigger inputs will appear relatively quicker until the GPU starts saturating and showing that linear behavior…
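The saturation idea above can be sketched with another toy model: assume the GPU processes pixels in "waves" of some fixed maximum concurrency (the capacity and wave time below are made-up numbers, not real hardware figures). Below saturation, adding pixels doesn't add waves, so time barely moves; past saturation, each 4× in pixels means 4× the waves.

```python
import math

MAX_CONCURRENT = 10_000  # hypothetical number of pixels processed per wave
WAVE_TIME_MS = 1.0       # hypothetical duration of one wave

def kernel_time_ms(num_pixels: int) -> float:
    """Idealized model: work runs in full waves of MAX_CONCURRENT threads."""
    waves = math.ceil(num_pixels / MAX_CONCURRENT)
    return waves * WAVE_TIME_MS

# Below saturation: 4x the pixels still fits in one wave -> same time.
print(kernel_time_ms(2_000), kernel_time_ms(8_000))       # 1.0 1.0
# Well past saturation: 4x the pixels -> 4x the waves -> 4x the time.
print(kernel_time_ms(100_000), kernel_time_ms(400_000))   # 10.0 40.0
```

So whether the original poster sees ~12 ms or less depends mostly on whether 800*600 pixels already kept the whole GPU busy.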


Can you give me the source where you found out about the ‘quantum computing’ techniques? I need to go through it in detail…

Thanks :)

I think you will find something here: http://encyclopediadramatica.com/Main_Page

Well, you can try to go through it in detail, but the quantum uncertainty principles mean the details change as soon as you go through them. I believe it’s called quantum NDA encryption.