This is a question about whether this type of code would benefit from CUDA/GPU computing.
I am working on a project for 3D reconstruction: [url=“http://rp181.fortscribe.com/?cat=108”]http://rp181.fortscribe.com/?cat=108[/url]
The majority of the time taken to analyze the picture is spent finding the Sum of Absolute Differences (SAD) of different sized blocks. This is simply:
int totalDiff = 0;
loop x amount of times
    loop y amount of times
        int rdiff = absolute(r2 - r1)
        int gdiff = absolute(g2 - g1)
        int bdiff = absolute(b2 - b1)
        totalDiff = totalDiff + rdiff + gdiff + bdiff
    end loop
end loop
(this isn't real code; the actual implementation is in Java)
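For reference, here is a minimal runnable sketch of the pseudocode above in Java. It assumes the blocks arrive as packed-int ARGB pixel arrays (as returned by BufferedImage.getRGB); the class and method names are just placeholders, not the project's actual code.

```java
// Hypothetical sketch of the SAD loop, assuming packed-int ARGB pixels.
public class SadDemo {
    // Sum of absolute per-channel differences over two equal-length blocks.
    static int sumAbsoluteDifferences(int[] block1, int[] block2) {
        int totalDiff = 0;
        for (int i = 0; i < block1.length; i++) {
            int p1 = block1[i], p2 = block2[i];
            // Extract the 8-bit R, G, B channels and accumulate |diff|.
            int rdiff = Math.abs(((p2 >> 16) & 0xFF) - ((p1 >> 16) & 0xFF));
            int gdiff = Math.abs(((p2 >> 8) & 0xFF) - ((p1 >> 8) & 0xFF));
            int bdiff = Math.abs((p2 & 0xFF) - (p1 & 0xFF));
            totalDiff += rdiff + gdiff + bdiff;
        }
        return totalDiff;
    }

    public static void main(String[] args) {
        // Pixel 0 differs by (1, 2, 3); pixel 1 by (255, 255, 255).
        int[] a = {0x00102030, 0x00FFFFFF};
        int[] b = {0x00112233, 0x00000000};
        System.out.println(sumAbsoluteDifferences(a, b)); // prints 771
    }
}
```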
This is called numerous times per image, roughly (imageWidth/resolution) * (imageHeight/resolution) times, where x and y are the resolution.
I've never done CUDA before. Is this an appropriate use, and would it give a sizable speed increase? I only have a 9500 GT (1 GB, DDR2); would this beat a dual-core 2 GHz CPU? What happens when VRAM runs out, does it swap with normal RAM?
Thanks, rp181