Your best bet would probably be to convolve your image with an appropriately sized Gaussian kernel, bind the result to a texture, and then subsample it. Look up Gaussian pyramids for more information on this process in general. The gist of the idea is that blurring the image with the Gaussian kernel spreads the information out, removing the high frequencies that would otherwise alias. For example, after blurring the image with a Gaussian of sigma = 2, you should be able to simply take every other pixel and get a scale reduction of 2 without losing any more information than is strictly necessary. This, however, could be completely wrong.
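To make the idea concrete, here's a rough sketch of blur-then-subsample in NumPy (the CUDA/texture part is omitted; the function names `gaussian_kernel1d` and `pyramid_down` are just my own for illustration, and sigma = 2 is the value guessed at above, not a tuned choice):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian; applied twice (rows, then columns) since it's separable."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def pyramid_down(img, sigma=2.0):
    """Blur with a separable Gaussian, then keep every other pixel."""
    radius = int(3 * sigma)  # kernel support out to +/- 3 sigma
    k = gaussian_kernel1d(sigma, radius)
    # Convolve each row, then each column ('same' preserves the image size).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::2, ::2]  # subsample by a factor of 2

img = np.random.rand(64, 64)
small = pyramid_down(img)
print(small.shape)  # (32, 32)
```

On the GPU you'd do the same two separable passes as kernels, reading the blurred result through a texture so the final every-other-pixel fetch gets cached reads for free.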
As Simon Green said, simply subsampling the image will not work (depending on your application), as much information will be lost to aliasing (even more than you would expect for a reduction in scale).
From what I’ve seen in the sample projects, textures are declared as globals, so they don’t need to be passed as function parameters. They are also read-only and, as Simon Green said, provide linear filtering between pixel values along with several other useful features. I recommend reading up on them in the programming guide.