Is there any way I can obtain the ground-truth instance segmentation mask of each individual object (including amodal and occluded masks) in a camera's rendered scene in Isaac Sim?
Currently, the API only provides sensors such as instance segmentation, depth, RGB, etc., but there is no sensor that can extract the ground-truth amodal and occluded masks of USD objects in a rendered scene.
How can I create my own custom sensor for this use case of generating amodal and occluded instance segmentation masks for each object in each rendered scene?
Is it possible to hide objects (reduce opacity to 0) for each scene and obtain the instance segmentation mask for each object in the scene using Replicator Composer?
I am currently looking into the possibility of accessing amodal and occlusion mask data from the occlusion calculation and will get back to you as soon as I have an answer. If that is not currently possible, a workaround would be, as you mentioned, to hide objects in the scene, read out the instance segmentation, and then compare it with the original fully visible scene.
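The comparison step of that workaround can be sketched with NumPy. Assuming you have two hypothetical instance-segmentation renders per object (one of the full scene and one with every other object hidden, so the target object is unoccluded), the occluded mask is simply the amodal silhouette minus the visible pixels; the function and IDs below are illustrative, not part of the Isaac Sim API:

```python
import numpy as np

def amodal_and_occluded(full_instance_seg, solo_instance_seg, obj_id):
    """Derive amodal and occluded masks for one object.

    full_instance_seg: instance-segmentation image of the full scene
                       (every object visible).
    solo_instance_seg: instance-segmentation image rendered with all
                       OTHER objects hidden, so obj_id is unoccluded.
    """
    visible = full_instance_seg == obj_id   # pixels seen in the real scene
    amodal = solo_instance_seg == obj_id    # full silhouette, no occluders
    occluded = amodal & ~visible            # pixels hidden by other objects
    return amodal, occluded

# Toy 1x5 "images": object 3 occludes part of object 2 in the full scene.
full = np.array([[0, 2, 3, 3, 0]])  # full scene: one pixel of 2 is covered
solo = np.array([[0, 2, 2, 0, 0]])  # only object 2 visible: full silhouette
amodal, occluded = amodal_and_occluded(full, solo, obj_id=2)
print(amodal.astype(int))    # [[0 1 1 0 0]]
print(occluded.astype(int))  # [[0 0 1 0 0]]
```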
UPDATE:
It is currently not possible to get amodal and occlusion segmentation masks; however, providing them as default annotators is being actively investigated.
I would like to ask how I can iteratively hide objects in each scene so that I can iteratively read and save the instance segmentation mask for each object in the same scene using Replicator Composer.
Can this be done by editing the Replicator Composer YAML config or the Python API extension source code directly?
UPDATE:
It is currently not possible to get amodal and occlusion segmentation masks; however, providing them as default annotators is being actively investigated.
As a current workaround you can change the visibility of prims using the Python Isaac Sim or USD API: