Actually, NVIDIA GPUs use several quantum computing techniques whereby, in certain situations such as the one you describe, compute time scales with the square root of the increase in problem size. So: 6ms.
I believe efficiency gains may allow the bigger image to complete in less than 4*3 ms.
The larger image may have relatively less overhead from kernel launches, more live threads and blocks, and more memory-access pipelining, for example. It depends on the code design, of course.
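If you want to see how your own kernel actually scales, here is a minimal sketch (my own illustration, not from this thread): it times the same trivial per-pixel kernel on a 1024x1024 image and on a 2048x2048 image (4x the pixels) with CUDA events and prints the measured ratio against the naive 4x expectation. The `scale` kernel, image sizes, and block size are just placeholder assumptions.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Trivial per-pixel workload used only to have something to time.
__global__ void scale(float* img, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] *= k;
}

// Time one kernel launch (after a warm-up) on a width x height image.
static float timeKernel(int width, int height) {
    int n = width * height;
    float* d_img = nullptr;
    cudaMalloc(&d_img, n * sizeof(float));
    cudaMemset(d_img, 0, n * sizeof(float));

    dim3 block(256);
    dim3 grid((n + block.x - 1) / block.x);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Warm-up launch so one-time startup costs don't skew the comparison.
    scale<<<grid, block>>>(d_img, n, 1.01f);
    cudaDeviceSynchronize();

    cudaEventRecord(start);
    scale<<<grid, block>>>(d_img, n, 1.01f);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_img);
    return ms;
}

int main() {
    float small = timeKernel(1024, 1024);  // baseline image
    float large = timeKernel(2048, 2048);  // 4x the pixels
    printf("small: %.3f ms, large: %.3f ms, ratio: %.2fx (naive expectation: 4x)\n",
           small, large, large / small);
    return 0;
}
```

With a kernel this small the ratio often comes in under 4x, because launch overhead and partially filled waves are amortized better at the larger size; a compute- or bandwidth-saturated kernel will sit much closer to 4x.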
Well, you can try to go through it in detail, but the quantum uncertainty principle means the details change as soon as you go through them. I believe it's called quantum NDA encryption.