Image processing: what to do with the boundary pixels of an image?

Hello,

I’m working on image processing with CUDA and I have a question about pixel processing. What is usually done with the boundary pixels of an image when applying an m×m convolution filter?

I’ve found with a 3×3 convolution kernel that ignoring the 1px boundary is easier to implement (even more so when the code is enhanced with shared memory): you don’t need to check whether a given pixel has its whole neighbourhood available (e.g. the pixel at coordinate (0,0) has no left, upper-left, or upper neighbours). But removing the 1px boundary of the original image can produce partial results. What do you usually do? Of course, it depends on the type of problem (e.g. adding two images doesn’t have this issue :P).

I’d like to process all the pixels in the image, but when using the shared-memory optimization, overlapping the load/store of shared memory only seems straightforward (at least to me) if you ignore the 1px boundary. Do you usually use this approach?
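To make the shared-memory case concrete, here is a sketch of the usual tile-plus-apron pattern that still processes every pixel, combined with clamp-to-edge loads. All names and the 16×16 block size are my own assumptions, not taken from any particular codebase; each block loads a (TILE+2)² region so border threads find their whole neighbourhood in shared memory:

```cuda
#include <cuda_runtime.h>

#define TILE   16   /* output tile per block (assumed blockDim = TILE x TILE) */
#define RADIUS 1    /* 3x3 filter -> 1px apron */

/* Clamp-to-edge 3x3 mean filter using a shared-memory tile plus apron.
   Every output pixel is produced, border pixels included. */
__global__ void blur3x3_shared(const unsigned char *in, unsigned char *out,
                               int w, int h) {
    __shared__ unsigned char tile[TILE + 2 * RADIUS][TILE + 2 * RADIUS];

    int gx = blockIdx.x * TILE + threadIdx.x;  /* global output coords */
    int gy = blockIdx.y * TILE + threadIdx.y;

    /* Cooperative load of the tile + apron. Threads near the tile edge
       load a second element; out-of-image reads are clamped to the edge. */
    for (int ty = threadIdx.y; ty < TILE + 2 * RADIUS; ty += TILE)
        for (int tx = threadIdx.x; tx < TILE + 2 * RADIUS; tx += TILE) {
            int ix = min(max((int)(blockIdx.x * TILE) + tx - RADIUS, 0), w - 1);
            int iy = min(max((int)(blockIdx.y * TILE) + ty - RADIUS, 0), h - 1);
            tile[ty][tx] = in[iy * w + ix];
        }
    __syncthreads();

    if (gx < w && gy < h) {
        int sum = 0;
        for (int dy = -RADIUS; dy <= RADIUS; ++dy)
            for (int dx = -RADIUS; dx <= RADIUS; ++dx)
                sum += tile[threadIdx.y + RADIUS + dy][threadIdx.x + RADIUS + dx];
        out[gy * w + gx] = (unsigned char)(sum / 9);
    }
}
```

The extra cost over the skip-the-border version is only the apron loads and the clamping, so you keep full-image output without a separate border pass.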

Thanks in advance and best regards!