Segnet Blobs

Not sure if I am using the right terminology here…
Is there a way of getting a list of the individual blobs from the Cityscapes segmentation output?
For instance, a list of the blobs that are humans?

You would need to perform some blobbing or connected-component clustering on the segmentation mask as a post-processing stage. The raw output of the segmentation network doesn’t do the blobbing itself.

To reduce the processing needed, you could perform the blobbing / connected-component labeling on the raw segmentation grid, which is typically at a lower resolution than the input image (see segNet::GetGridWidth() and segNet::GetGridHeight()).
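For reference, here is a minimal sketch of that post-processing step. It assumes you already have the class-ID grid as a 2D numpy array (getting it from net.Mask() is covered further down in the thread), that SciPy is installed, and that you know the numeric ID of the “person” class for your model; the find_blobs name is just for illustration.

```python
import numpy as np
from scipy import ndimage

def find_blobs(class_mask_np, class_id, min_pixels=5):
    """Return bounding-box slices (in grid coordinates) for each blob of class_id."""
    binary = (class_mask_np == class_id)        # per-class binary mask
    labeled, num_blobs = ndimage.label(binary)  # connected-component labeling
    blobs = []
    for label, blob_slice in enumerate(ndimage.find_objects(labeled), start=1):
        pixels = np.count_nonzero(labeled[blob_slice] == label)
        if pixels >= min_pixels:                # drop tiny speckle regions
            blobs.append(blob_slice)
    return blobs
```

Each returned slice is in grid coordinates, so scale by input_width / grid_width (and the equivalent for height) if you want the blob locations in the original image.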

Hi Dusty,

Thanks for that.
I am not sure how to get the output from the net.Process() call in segnet.py, or how to traverse it. I am assuming it is an array (buffers.overlay) with an index for each of the classification types.
Do you have an example of how to do this?

Thanks,
Bruce

Hi @brucergirdlestone, see the code that implements the --stats processing in the segnet.py example (this gets the uint8 class ID grid array).

Namely, if you pass a gray8 image to net.Mask(), it will give you the raw classification grid back, where each pixel is the class ID. You can then iterate over that image like an array (i.e. mask[y,x]) or as a numpy array (see here for more info).

When working with the raw classification grid, it’s recommended to use its original size (net.GetGridSize()) rather than the size of your input image, to reduce processing. The original size of the classification grid is what the network actually outputs; for the colorized overlay, it gets upsampled/filtered back to the size of the input image.
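To tie the two answers together, here is a rough sketch of getting that raw class-ID grid in Python, modelled on the --stats code path in segnet.py. The segNet(), Process(), GetGridSize(), Mask(), cudaAllocMapped() and cudaToNumpy() calls are the jetson-inference / jetson-utils API used there; the model name, image path, and the class-name lookup for “person” are assumptions for illustration.

```python
import jetson.inference
import jetson.utils

net = jetson.inference.segNet("fcn-resnet18-cityscapes")
img = jetson.utils.loadImage("input.jpg")

net.Process(img)

# Allocate a gray8 buffer at the network's native grid resolution so that
# Mask() writes raw class IDs instead of a colorized overlay.
grid_width, grid_height = net.GetGridSize()
class_mask = jetson.utils.cudaAllocMapped(width=grid_width, height=grid_height, format="gray8")
net.Mask(class_mask, grid_width, grid_height)

# View the CUDA buffer as a numpy array (shared memory, no copy) and
# reshape to a plain 2D (height, width) array of class IDs.
class_mask_np = jetson.utils.cudaToNumpy(class_mask).reshape(grid_height, grid_width)

# Find the numeric ID of the "person" class from the model's label set
# (assumption: the label text contains "person" -- check your model's classes file).
person_class_id = None
for c in range(net.GetNumClasses()):
    if "person" in net.GetClassDesc(c).lower():
        person_class_id = c

# class_mask_np and person_class_id can now be fed to the connected-component
# sketch earlier in the thread to list the individual person blobs.
```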

Awesome! Thanks.