Image Processing: change image size

Hi,

I am doing some image processing, and therefore I have to change the size of an image from time to time. Does anybody know if CUDA offers a quick way to do this? Please keep in mind that the scale factor is a float value, not an integer. For example: if you have an image of 512x256 pixels and a scale factor of 2.83, the new image has 181x91 pixels.

Texture lookups in CUDA provide a simple way to do image transforms including scaling, although you only get linear filtering (which may not be ideal for large reductions in scale). Take a look at the simpleTexture sample in the SDK.
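To make this concrete, here is a minimal sketch in the style of the simpleTexture sample (CUDA 2.x texture-reference API). The kernel name and parameter names are illustrative, not from the SDK; with the texture's filter mode set to `cudaFilterModeLinear`, the hardware interpolates the four nearest texels for each fetch.

```cuda
// Global texture reference (read via the texture cache, read-only).
texture<float, 2, cudaReadModeElementType> tex;

// Each thread computes one output pixel. `scale` is the reduction
// factor (e.g. 2.83): output pixel (x, y) samples the source image
// at the center of its footprint, and the hardware's linear filter
// blends the 4 surrounding source pixels.
__global__ void scaleKernel(float *out, int outW, int outH, float scale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= outW || y >= outH) return;

    float u = (x + 0.5f) * scale;   // unnormalized source coordinates
    float v = (y + 0.5f) * scale;
    out[y * outW + x] = tex2D(tex, u, v);
}
```

A launch configuration such as `dim3 block(16, 16); dim3 grid((outW + 15) / 16, (outH + 15) / 16);` covers the output image; the texture must be bound to a `cudaArray` holding the source image first, as the simpleTexture sample shows.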

Could you be a bit more specific, please? I have the following questions:

  1. Can you hand a texture reference to a kernel as a function parameter?
  2. What should a piece of code look like that changes the resolution and takes advantage of CUDA's built-in possibilities?

Your best bet would probably be to convolve your image with an appropriately sized Gaussian kernel, bind it to a texture, and then subsample it. Look up Gaussian Pyramids for more information on this process in general. The gist of the idea is that by blurring the image with the Gaussian kernel, you are spreading the information out. For example, by blurring the image with a Gaussian with sigma = 2, you should then be able to simply take every other pixel and have a scale reduction of 2 without actually losing any more information than is strictly necessary. This, however, could be completely wrong.

As Simon Green said, simply subsampling the image will not work (depending on your application) as much information will be lost (even more than you would expect for a reduction in scale).

From what I’ve seen in the sample projects, textures are created as globals and wouldn’t need to be passed as a function parameter. They are also read only and like Simon Green said, provide linear filtering between pixel values and several other useful features. I recommend reading up on them in the programming guide.
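To illustrate the "texture as a global" point, here is a host-side binding sketch using the CUDA 2.x runtime API (as the simpleTexture sample does). The function name `bindImage` is made up for illustration, and error checking is omitted:

```cuda
// Global texture reference: declared at file scope, not passed as a
// kernel parameter. Kernels in the same compilation unit read it
// directly with tex2D().
texture<float, 2, cudaReadModeElementType> tex;

void bindImage(const float *h_img, int w, int h)
{
    // Allocate a cudaArray and copy the source image into it.
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray *d_array;
    cudaMallocArray(&d_array, &desc, w, h);
    cudaMemcpyToArray(d_array, 0, 0, h_img, w * h * sizeof(float),
                      cudaMemcpyHostToDevice);

    // Configure the texture: hardware bilinear filtering between
    // the 4 nearest texels, clamped addressing at the borders,
    // unnormalized (pixel) coordinates.
    tex.filterMode     = cudaFilterModeLinear;
    tex.addressMode[0] = cudaAddressModeClamp;
    tex.addressMode[1] = cudaAddressModeClamp;
    tex.normalized     = false;

    cudaBindTextureToArray(tex, d_array, desc);
}
```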

Thanks for your answer, although it is not exactly what I am looking for. To be more precise: I understand how linear and nearest-point sampling work. I would like to implement something that does linear filtering, but does not only use the nearest 4 pixels.

Example: let's say you have a scale factor of 3.4. Each new pixel should then use 3.4 x 3.4 pixels of the original image to calculate the new pixel. That is pretty much what figure D3 in Appendix E of the Programming Guide 2.3 shows, but I have no idea how to write the code exactly.
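One straightforward way to write that yourself, independent of the texture hardware, is a kernel where each output pixel averages the whole footprint it covers in the source image. This is a hypothetical sketch (the kernel name is mine, and the fractional 3.4 x 3.4 footprint is approximated by the enclosing integer box of pixels rather than weighted partial pixels):

```cuda
// Box-average downscale: each output pixel (x, y) covers the source
// region [x*scale, (x+1)*scale) x [y*scale, (y+1)*scale) and takes
// the unweighted mean of the source pixels that region touches.
__global__ void boxDownscale(const float *in, int inW, int inH,
                             float *out, int outW, int outH, float scale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= outW || y >= outH) return;

    // Enclosing integer box of this output pixel's footprint,
    // clamped to the source image bounds.
    int x0 = (int)(x * scale);
    int y0 = (int)(y * scale);
    int x1 = min((int)ceilf((x + 1) * scale), inW);
    int y1 = min((int)ceilf((y + 1) * scale), inH);

    float sum = 0.0f;
    for (int sy = y0; sy < y1; ++sy)
        for (int sx = x0; sx < x1; ++sx)
            sum += in[sy * inW + sx];

    out[y * outW + x] = sum / (float)((x1 - x0) * (y1 - y0));
}
```

For scale 3.4 this averages a 4 x 4 block per output pixel, which is slightly larger than the exact 3.4 x 3.4 footprint; weighting the boundary pixels by their fractional coverage would be more accurate but also more code.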

That’s why I recommended filtering the image with an appropriate Gaussian first. That should spread the image data out so that your linear interpolation downsizing will get you a good result.