I have created a simulation for my objects and used an annotator to perform semantic or instance annotation. It works fine, but my objects overlap, and when I generate the semantic or instance annotation images, the overlap is visible in them. I need to fine-tune the “Segment Anything” model so that occluded objects are eliminated: specifically, if an object is occluded by 10 to 20%, it should be kept, but if it is occluded by more than that, it should be removed.
Is there any post-processing step after generating the semantic or instance annotation images to keep unoccluded objects and remove occluded ones?
The expected output I want is:
An object sitting on top of other objects should be kept, while the objects underneath should be removed. However, if the object on top covers only about 10 to 20% of the object underneath, both objects should be kept.
To achieve this, though, you will have to implement your own post-processing routine: most likely matching every instance against every other one using their segmentation masks and counting the intersecting pixels, as you describe, to filter out the objects you do not want.
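A minimal sketch of such a routine, assuming you have each object's full (unoccluded) binary mask in addition to the composited instance segmentation image; the names `full_masks`, `visible_seg`, and `filter_occluded_instances` are hypothetical, not Replicator APIs:

```python
import numpy as np

def filter_occluded_instances(full_masks, visible_seg, max_occlusion=0.2):
    """Keep instances whose occluded fraction is at most max_occlusion.

    full_masks : dict mapping instance id -> boolean array with the object's
                 complete (unoccluded) mask, same shape as visible_seg.
    visible_seg: instance segmentation image where each pixel holds the id
                 of the top-most (visible) instance.
    """
    kept = []
    for inst_id, full_mask in full_masks.items():
        full_area = full_mask.sum()
        if full_area == 0:
            continue
        # Pixels of this instance that remain on top in the composited image.
        visible_area = np.logical_and(full_mask, visible_seg == inst_id).sum()
        occlusion = 1.0 - visible_area / full_area
        if occlusion <= max_occlusion:
            kept.append(inst_id)
    return kept

# Usage: drop heavily occluded instances from the segmentation image
# (0 is assumed to be the background id here).
# kept_ids = filter_occluded_instances(full_masks, visible_seg, max_occlusion=0.2)
# filtered_seg = np.where(np.isin(visible_seg, kept_ids), visible_seg, 0)
```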
You can also take a look at the PoseWriter (/isaacsim.replicator/python/scripts/writers/pose_writer.py); it has a visibility_threshold attribute, which might help you with a similar implementation for your case as well.
Thank you for that!
I actually need to do the pre-processing to eliminate the occluded objects first. Then, I can use the annotator for instance segmentation. The result from the instance segmentation will only show the segmentation of unoccluded objects.
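A rough sketch of that pre-processing step, assuming the heavily occluded prims have already been identified (for example with a mask-based check like the one above): toggle the USD visibility of those prims before triggering the capture, so the instance segmentation annotator never sees them. The helper name `hide_occluded_prims` and the prim paths below are hypothetical.

```python
import omni.usd
from pxr import UsdGeom

def hide_occluded_prims(occluded_prim_paths):
    """Make heavily occluded prims invisible so they no longer show up
    in the instance segmentation images generated afterwards."""
    stage = omni.usd.get_context().get_stage()
    for path in occluded_prim_paths:
        prim = stage.GetPrimAtPath(path)
        if not prim.IsValid():
            continue
        # Making the prim invisible removes it from the rendered frame,
        # and therefore from the annotator output as well.
        UsdGeom.Imageable(prim).MakeInvisible()

# Example with hypothetical prim paths: hide the occluded objects first,
# then trigger your capture/writer to generate the annotations.
# hide_occluded_prims(["/World/Objects/box_03", "/World/Objects/box_07"])
```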