How to filter unoccluded objects after semantic annotation?

Hello everyone!

I have created a simulation for my objects and used an annotator to perform semantic or instance annotation. It works fine, but my objects overlap, and the overlap is visible in the generated semantic and instance annotation images. I need to fine-tune the “Segment Anything” model, so occluded objects have to be eliminated from the annotations. Specifically, if an object is occluded by 10 to 20%, it should be kept, but if it is occluded more than that, it should be removed.

Is there any post-processing step after generating the semantic or instance annotation images to keep unoccluded objects and remove occluded ones?

Here is the output from Isaac Sim:

The expected output I want is:
The object above other objects should be kept, while the objects underneath should be removed. However, if the object above covers only about 10 to 20% of the object underneath, both objects should be kept.

Thank you for your help!

If you add the 2D bounding box annotator, there is an attribute called occlusion ratio that you can filter by.
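For reference, a minimal sketch of what that filter could look like in a Replicator script, assuming a camera render product is already set up, the scene prims carry semantic labels, and the `bounding_box_2d_tight` annotator exposes the occlusion ratio as an `occlusionRatio` field; the 0.2 threshold mirrors the "10 to 20%" rule from the question:

```python
import omni.replicator.core as rep

# Assumed scene setup: camera position and resolution are illustrative.
camera = rep.create.camera(position=(0, 0, 5))
render_product = rep.create.render_product(camera, (1024, 1024))

bbox_annot = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_tight")
bbox_annot.attach(render_product)

rep.orchestrator.step()  # trigger one capture so the annotator has data

bbox_data = bbox_annot.get_data()
MAX_OCCLUSION = 0.2

# The 2D bbox output is a structured array; occlusionRatio is the fraction
# of the object hidden by other geometry (1.0 = fully occluded).
visible_ids = [
    int(box["semanticId"])
    for box in bbox_data["data"]
    if box["occlusionRatio"] <= MAX_OCCLUSION
]
print("Objects to keep:", visible_ids)
```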

Hi there,

Yes, here is some info on the 2D and 3D bounding box annotators, both of which have the occlusion attribute:

Cheers,
Andrei


To achieve this, though, you will have to implement your own post-processing routine: most likely matching every instance against the others using their segmentation masks and counting intersecting pixels, as you describe, to sift out the desired objects.
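A minimal sketch of such a routine, assuming you can obtain each object's full (unoccluded) mask, for example by rendering each object in isolation; `full_masks` is a hypothetical input, and the instance segmentation image is the usual per-pixel instance-id array:

```python
import numpy as np

def filter_occluded_instances(instance_img, full_masks, max_occlusion=0.2):
    """Remove heavily occluded instances from an instance-segmentation image.

    instance_img : (H, W) array of instance ids (0 = background) from the
                   instance segmentation annotator.
    full_masks   : dict {instance_id: (H, W) bool array} holding each object's
                   unoccluded mask (hypothetical input, e.g. captured by
                   rendering each object alone).
    """
    filtered = instance_img.copy()
    for inst_id, full_mask in full_masks.items():
        full_area = full_mask.sum()
        if full_area == 0:
            continue
        visible_area = np.logical_and(instance_img == inst_id, full_mask).sum()
        occlusion = 1.0 - visible_area / full_area
        if occlusion > max_occlusion:
            filtered[filtered == inst_id] = 0  # drop this instance
    return filtered
```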


You can also take a look at the PoseWriter (/isaacsim.replicator/python/scripts/writers/pose_writer.py). It has a visibility_threshold attribute, which might help you with a similar implementation for your case as well.
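A hedged sketch of how that writer could be used; the registry name "PoseWriter" and the visibility_threshold parameter are assumptions based on pose_writer.py, so check the script for the exact signature:

```python
import omni.replicator.core as rep

# Assumed scene setup; camera and resolution are illustrative.
camera = rep.create.camera(position=(0, 0, 5))
render_product = rep.create.render_product(camera, (1024, 1024))

# Assumption: the PoseWriter is registered under this name and accepts
# visibility_threshold (fraction of the object that must be visible).
writer = rep.WriterRegistry.get("PoseWriter")
writer.initialize(
    output_dir="_out_pose",
    visibility_threshold=0.8,  # keep objects that are at least 80% visible
)
writer.attach([render_product])

for _ in range(10):
    rep.orchestrator.step()  # capture a few frames
```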


Thank you for that!
I actually need to do the pre-processing first to eliminate the occluded objects. Then I can use the annotator for instance segmentation, and the result will only show the segmentation of unoccluded objects.
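A rough sketch of that pre-processing flow, assuming the occlusion info comes from the 2D bounding box annotator and that you know how to map its semantic IDs back to prims in your scene; the /World/Objects/<label> paths and the idToLabels lookup below are hypothetical and depend on your scene and Replicator version:

```python
import omni.replicator.core as rep
import omni.usd
from pxr import UsdGeom

# Hide heavily occluded prims first, then capture the instance segmentation
# so only sufficiently visible objects remain in the output.
stage = omni.usd.get_context().get_stage()
camera = rep.create.camera(position=(0, 0, 5))
render_product = rep.create.render_product(camera, (1024, 1024))

bbox_annot = rep.AnnotatorRegistry.get_annotator("bounding_box_2d_tight")
instance_annot = rep.AnnotatorRegistry.get_annotator("instance_segmentation")
bbox_annot.attach(render_product)
instance_annot.attach(render_product)

rep.orchestrator.step()
bbox_data = bbox_annot.get_data()
# The exact structure of idToLabels varies by version; inspect bbox_data["info"].
id_to_labels = bbox_data["info"]["idToLabels"]

MAX_OCCLUSION = 0.2
for box in bbox_data["data"]:
    if box["occlusionRatio"] > MAX_OCCLUSION:
        label = id_to_labels.get(int(box["semanticId"]))
        # Hypothetical convention: each labeled object lives under /World/Objects/<label>.
        prim = stage.GetPrimAtPath(f"/World/Objects/{label}")
        if prim and prim.IsValid():
            UsdGeom.Imageable(prim).MakeInvisible()

rep.orchestrator.step()  # re-capture with the occluded objects hidden
instance_data = instance_annot.get_data()
```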