Increased resolution of SegNet class mask to improve data isolation

Hello,

I’m trying to use the SegNet class mask to isolate specific parts of an image for further analysis by ignoring regions where certain classes are detected (e.g., only analyzing regions where walls/floors are not detected). To do so, I plan to use the class mask to track the detected classes, mark regions belonging to ‘blacklisted’ classes, and tell the second round of processing to ignore those regions.

My concern is this: by default, using [grid_width, grid_height = net.GetGridSize()], I get a class mask that’s 13x16. My thought was to upscale that mask to my image size (224x224) and do a pixel-by-pixel check to eliminate pixels containing blacklisted classes. However, I read in another thread that this upscale doesn’t actually change the analysis and is… well, just an upscale. Without much knowledge of how that process works, I’m hoping to find out whether I’ll lose a significant amount of accuracy by upscaling the mask. Any guidance on this problem is much appreciated.
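For reference, this is roughly how I’m grabbing the class-ID grid (a sketch assuming recent jetson-inference Python bindings; the model name and image path are placeholders):

```python
# Rough sketch, assuming recent jetson-inference Python bindings; the model
# name and image path below are placeholders.
import numpy as np
import jetson.inference
import jetson.utils

net = jetson.inference.segNet("fcn-alexnet-cityscapes-hd")  # placeholder model
img = jetson.utils.loadImage("input.jpg")                   # placeholder image
net.Process(img)

# Request the raw class IDs at grid resolution into a single-channel buffer.
grid_width, grid_height = net.GetGridSize()
class_grid = jetson.utils.cudaAllocMapped(width=grid_width,
                                          height=grid_height,
                                          format="gray8")
net.Mask(class_grid, grid_width, grid_height)
jetson.utils.cudaDeviceSynchronize()

# View as a (height, width) NumPy array of class IDs for the blacklist check.
class_ids = np.asarray(jetson.utils.cudaToNumpy(class_grid)).reshape(grid_height, grid_width)
```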

Thanks!

Hi @sarobando, the grid size is a function of the input size that the segmentation model was trained with (during pre-processing, the input images automatically get resized to match the model’s expected input size).

If you want to check class IDs, then upscaling the grid doesn’t inherently add (or lose) any information. Upscaling with bilinear filtering does soften the boundaries between classes, but that is mostly used for the colorized overlay and visualization. You can upsample the class grid to your original image’s size so that the mask is aligned with it.
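Here’s a rough sketch of that in plain NumPy (the blacklisted wall/floor class IDs are placeholders — substitute the IDs from your model’s class list):

```python
# Rough sketch in plain NumPy: a nearest-neighbor upscale keeps the class IDs
# discrete (no blending), then a boolean mask marks the pixels to keep.
import numpy as np

def upscale_class_grid(class_ids, out_height, out_width):
    """Nearest-neighbor upscale of a (grid_h, grid_w) array of class IDs."""
    grid_h, grid_w = class_ids.shape
    rows = (np.arange(out_height) * grid_h) // out_height
    cols = (np.arange(out_width) * grid_w) // out_width
    return class_ids[rows[:, None], cols[None, :]]

# Stand-in 16x13 grid and 224x224 image; replace with the real class grid
# and frame. The blacklisted IDs (wall=1, floor=2 here) are placeholders.
class_ids = np.random.randint(0, 21, size=(16, 13), dtype=np.uint8)
image = np.full((224, 224, 3), 255, dtype=np.uint8)
blacklist = [1, 2]

full_mask = upscale_class_grid(class_ids, 224, 224)
allowed = ~np.isin(full_mask, blacklist)   # True where the 2nd pass should run

image[~allowed] = 0                        # black out blacklisted regions
```

With nearest-neighbor there are no blended IDs at class boundaries, so each upscaled pixel maps directly back to one grid cell.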

Excellent. Thanks for the additional info!
