At a high level, what I would like to do is train a model to detect that a chunk of material is missing from an object.
In Omniverse terms, I believe this fundamentally comes down to whether a prim can be invisible to the RGB renderer while visible to the InstanceSegmentation renderer. I am using Omniverse Replicator in Isaac Sim, and I have been digging around in the code to better understand how the renderer works, but I can’t seem to find a solution to this. I wish I could select which prims were visible to the RGB renderer and which were visible to the InstanceSegmentation renderer, or pause between the two renders, but if I understand the code correctly, the two renders happen simultaneously. I also noticed that if opacity is turned down to zero, the prim is invisible to both renderers.
I made a picture of roughly what I’m looking for (the bottom image is a depth image rather than RGB):
@danielle.sisserman that is a good idea, and in fact I tried it, but if you set the opacity on a prim or material all the way to zero it is invisible to both the RGB and the segmentation, unless there has been an update since I last tried it.
If this does not work (maybe because your asset is fully transparent), you can do a 2-step capture, toggling the “transparent” asset’s visibility in between. This way the first capture will get you the instance segmentation, and the second the RGB data. For this you can use the get_data function on the annotators:
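Here is a rough sketch of what that two-step capture could look like (the camera setup, resolution, and the `/World/MissingChunk` prim path are placeholder assumptions on my side, not from an existing tutorial):

```python
import asyncio
import omni.replicator.core as rep

# Placeholder scene setup: camera and render product; "/World/MissingChunk"
# stands in for the transparent / missing-material asset.
camera = rep.create.camera(position=(0, 0, 5), look_at=(0, 0, 0))
render_product = rep.create.render_product(camera, (1024, 1024))

rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
seg_annot = rep.AnnotatorRegistry.get_annotator("instance_segmentation")
rgb_annot.attach(render_product)
seg_annot.attach(render_product)

chunk = rep.get.prims(path_pattern="/World/MissingChunk")

async def capture_pair():
    # Step 1: asset visible -> render, then read the instance segmentation
    with chunk:
        rep.modify.visibility(True)
    await rep.orchestrator.step_async(rt_subframes=8)
    seg_data = seg_annot.get_data()

    # Step 2: asset hidden -> render again, then read the RGB
    with chunk:
        rep.modify.visibility(False)
    await rep.orchestrator.step_async(rt_subframes=8)
    rgb_data = rgb_annot.get_data()
    return seg_data, rgb_data

asyncio.ensure_future(capture_pair())
```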
Hi @ahaidu, I really appreciate the documentation you sent over and I found it helpful, but I actually want to do the opposite. In that example we have something that is invisible to the segmentation but visible to RGB; I want something that is visible to the segmentation but not visible to RGB, so as to highlight the absence of material.
Hi @ahaidu, I tried the example you gave me, “Get synthetic data at custom events in simulations”. It was very helpful, but in this example, when the renderer is called in rep.orchestrator.step, it runs all annotators; I cannot specify running only the RGB or only the segmentation. I modified the example script to make the cube invisible after sem_annot.get_data() but before rgb_annot.get_data(), but these functions are merely getters and do not actually control rendering, so the cube was visible in both the RGB and segmentation outputs. One of the major reasons I want to separate these renders is that the RGB is very expensive with path tracing, and I don’t want to call it twice as often as required.
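To show what I mean, here is a simplified sketch of what I tried (cube here is the Replicator handle to the cube prim from the example):

```python
# Simplified version of my modified script: toggle the cube's visibility
# between the two get_data() calls.
rep.orchestrator.step(rt_subframes=8)   # single step -> all attached annotators render here

sem_data = sem_annot.get_data()         # segmentation from that one render

with cube:
    rep.modify.visibility(False)        # has no effect on the frame already rendered

rgb_data = rgb_annot.get_data()         # returns the same frame, cube still visible
```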
Do you have any other ideas? I really appreciate your help @ahaidu!
Thank you @ahaidu for going above and beyond to help me out here! This is pretty much exactly what I was asking for! Last thing, I promise: could you please explain whether the RGB and seg renders both happen in each “await rep.orchestrator.step_async(rt_subframes=8)” step, and we are only choosing whether to write the seg or the RGB? If so, could I fully turn off the RGB, or at least tell it to do minimal rendering? For example, if I am using path tracing with many samples, could I tell it to use 1 sample per pixel for the first step’s render and 64 samples for the second step’s render?
AFAIK, yes, both annotators are active and processing the data; however, only one ends up being copied from the GPU to CPU memory and written to disk.
Regarding path tracing, you can dynamically change the number of samples, or dynamically switch between RTX real-time and path tracing, and use path tracing only before collecting the RGB:
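For example, something along these lines (the sample count is just illustrative, and seg_annot / rgb_annot are the annotators attached earlier):

```python
import asyncio
import omni.replicator.core as rep

async def capture_pair():
    # Cheap render for the segmentation step: real-time RTX is enough here
    rep.settings.set_render_rtx_realtime()
    await rep.orchestrator.step_async(rt_subframes=8)
    seg_data = seg_annot.get_data()

    # Switch to path tracing only for the RGB step
    rep.settings.set_render_pathtraced(samples_per_pixel=64)
    await rep.orchestrator.step_async(rt_subframes=8)
    rgb_data = rgb_annot.get_data()
    return seg_data, rgb_data

asyncio.ensure_future(capture_pair())
```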