I want to perform semantic segmentation on images projected via projection mapping. I've been experimenting with various images, and semantic segmentation succeeds for some of them but fails for others.
It works well with smiley images, but it seems to fail with anything other than smileys.
It seems the second material is projected onto the whole plane, which is why the whole plane is semantically annotated (blue). If you remove the semantics from the cube, for example, it will disappear.
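For reference, the semantic label lives on the prim itself, so you can add or remove it per object. A minimal sketch of attaching a label, assuming Omniverse's `pxr` Semantics schema (the `/World/Cube` path and the "cube" label are hypothetical):

```python
import omni.usd
from pxr import Semantics

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath("/World/Cube")  # hypothetical prim path

# apply the Semantics schema and set a class label on this prim
sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
sem.GetSemanticTypeAttr().Set("class")
sem.GetSemanticDataAttr().Set("cube")
```

Removing that label from a prim removes it from the segmentation output.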
Is there a way to restrict the semantic segmentation to just the projection-mapped area, rather than having it cover the entire plane?
I’m considering whether this example could be used to train an appearance-inspection model for scratches on cars.
By the way, I was able to successfully perform semantic segmentation with the bird icon in the following link.
Is it important for the image used in projection mapping to be 512x512 pixels?
I believe the issue you’re running into is that the alpha (transparency) is taken from the diffuse/color image, and that some of the images you’re using don’t have an alpha channel. The PNG of the bird looks like it does, though. I’ll make this requirement a bit more visible in the docs.
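If one of your images lacks an alpha channel, one workaround is to key out the background yourself before using it for projection. A rough sketch with Pillow (the file names and the near-white background threshold are assumptions, adjust for your images):

```python
from PIL import Image

# hypothetical input: an RGB decal whose background is near-white
img = Image.open("decal.png").convert("RGBA")

pixels = [
    # make near-white pixels fully transparent, keep everything else opaque
    (r, g, b, 0) if r > 250 and g > 250 and b > 250 else (r, g, b, 255)
    for (r, g, b, a) in img.getdata()
]
img.putdata(pixels)
img.save("decal_alpha.png")
```

With the background transparent, only the visible part of the decal should end up in the projection (and thus in the segmentation).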
Question: what’s the preferred method? Packing the alpha into the color image is about the closest thing to a “standard” for image transparency, but an optional input for a separate opacity map might be nice to have (though packing the channel into the colormap is more efficient).
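For comparison, packing a separate opacity map into the color image’s alpha channel is straightforward on the asset side with Pillow (file names are assumptions; both images need to be the same size):

```python
from PIL import Image

color = Image.open("color.png").convert("RGB")    # hypothetical color map
opacity = Image.open("opacity.png").convert("L")  # hypothetical grayscale opacity map

# pack the opacity map into the alpha channel of the color image
rgba = color.convert("RGBA")
rgba.putalpha(opacity)
rgba.save("color_with_alpha.png")
```

So even if only the packed format is supported, users with a separate opacity map can convert in a few lines; the tradeoff is convenience versus keeping the material inputs minimal.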