I am currently working on a project involving the generation of synthetic data for a pick-and-place robot ([2208.03963] MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis). In this context, I drop various objects into a bin and capture RGB, depth, and instance segmentation data from multiple camera positions for different scenes. So far, this process has worked well using Replicator.
However, I ran into difficulties while trying to implement an additional feature. For each camera position, I want to create scenarios in which only one of the objects from the original scene, together with the bin, is visible, and to capture the corresponding instance segmentation data for each of these scenarios. Currently, I am using rep.trigger.on_frame() to change the camera position. While this approach works for moving the camera, I am unsure how to execute the additional steps described above for each camera position in this setup.
forumquestion.py (375 Bytes)
Is there a way to pause Replicator between two triggers, adjust the visibility of objects, and capture the corresponding instance segmentation data for the desired scenarios? The goal is to obtain the visible and invisible (amodal) mask of each object in the bin. The code snippet above shows how I imagined it could work, but it didn't behave as expected.
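For reference, here is a rough sketch of how I imagine the control flow, stepping Replicator manually with rep.orchestrator.step() instead of relying on rep.trigger.on_frame(). The names camera_positions, object_paths, and bin_path are placeholders, and the driver function requires Isaac Sim / Omniverse to actually run; only the plain-Python schedule helper is environment-independent. I am not certain this is the intended API usage, so treat it as a sketch, not a working solution:

```python
def visibility_schedule(object_paths):
    """For each capture, yield a mapping prim path -> visible, with exactly
    one object visible at a time (the bin is assumed to stay visible)."""
    for keep in object_paths:
        yield {path: (path == keep) for path in object_paths}


def capture_per_object(camera, object_paths, camera_positions, bin_path):
    # Hypothetical driver: requires Isaac Sim; assumes a writer is already
    # attached to a render product for `camera`.
    import omni.replicator.core as rep

    for pos in camera_positions:
        with camera:
            rep.modify.pose(position=pos, look_at=bin_path)
        rep.orchestrator.step()  # capture the full scene at this position
        for vis in visibility_schedule(object_paths):
            for path, visible in vis.items():
                with rep.get.prims(path_pattern=path):
                    rep.modify.visibility(visible)
            rep.orchestrator.step()  # capture with one object + bin visible
        # restore all objects before moving to the next camera position
        for path in object_paths:
            with rep.get.prims(path_pattern=path):
                rep.modify.visibility(True)
```

The idea is that each rep.orchestrator.step() call would produce one set of outputs (RGB, depth, instance segmentation), so the per-object segmentation masks could be paired with the full-scene capture at the same camera position.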
Thank you very much.