How to render the absence of a prim/mesh or part of a prim/mesh?

Hi everybody,

High level what I would like to do is train a model to detect that a chunk of material is missing from an object.

In Omniverse terms, I believe this fundamentally comes down to whether a prim can be invisible to the RGB renderer while remaining visible to the InstanceSegmentation renderer. I am using Omniverse Replicator in Isaac Sim, and I have been digging around in the code to better understand how the renderer works, but I can't seem to find a solution. Ideally I could select which prims are visible to the RGB renderer and which are visible to the InstanceSegmentation renderer, or pause between the two renders, but if I understand the code correctly, the two renders happen simultaneously. I also noticed that if opacity is turned down to zero, the prim becomes invisible to both renderers.

I made a picture of roughly what I’m looking for (the bottom image is a depth image rather than RGB):

Any help would be greatly appreciated!

@schneeweiss Just a thought: why not give the object a transparent material?

It will still show in the segmentation since it's labeled.

@danielle.sisserman That is a good idea, and in fact I tried it, but if you set opacity on a prim or material all the way to zero, it becomes invisible to both the RGB and the segmentation output, unless there has been an update since I last tried it.

But I appreciate your input!
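For reference, a minimal sketch of that zero-opacity experiment; the shader path and the OmniPBR input names (enable_opacity, opacity_constant) are assumptions and may differ in your scene or material version:

import omni.usd
from pxr import Sdf, UsdShade

stage = omni.usd.get_context().get_stage()
# Hypothetical path to the OmniPBR shader bound to the prim
shader = UsdShade.Shader(stage.GetPrimAtPath("/World/Looks/OmniPBR/Shader"))
shader.CreateInput("enable_opacity", Sdf.ValueTypeNames.Bool).Set(True)
shader.CreateInput("opacity_constant", Sdf.ValueTypeNames.Float).Set(0.0)
# With opacity at zero the prim disappears from both the RGB and the
# segmentation output, as described above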

Hi there,

see if this transparent materials usage in Replicator helps; basically, by setting cast shadows to on, the asset should appear in the instance segmentation:

If this does not work (maybe because your asset is fully transparent), you can do a 2-step capture by toggling the “transparent” asset invisible in-between. This way the first capture will get you the instance segmentation, and the second the RGB data. For this you can use the get_data function on the annotators.
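A rough sketch of the idea, assuming prim is the transparent asset and rgb_annot/seg_annot are annotators already attached to a render product (run inside an async function, as in the full example further down):

from pxr import UsdGeom

imageable = UsdGeom.Imageable(prim)
await rep.orchestrator.step_async()   # step 1: asset visible
seg_data = seg_annot.get_data()       # keep the instance segmentation
imageable.MakeInvisible()             # toggle the asset in-between
await rep.orchestrator.step_async()   # step 2: asset hidden
rgb_data = rgb_annot.get_data()       # keep the rgb
imageable.MakeVisible()               # restore for the next iteration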

Let me know how it goes.

Best,
Andrei

Hi @ahaidu, I really appreciate the documentation you sent over, and I found it helpful, but I actually want the opposite. In that example we have something that is invisible to the segmentation but visible in RGB; I want something that is visible to the segmentation but not visible in RGB, so as to highlight the absence of material.

The second half of your response with the 2-step capture looks more promising; I will run some tests and let you know what I find!

Hi @ahaidu, I tried the example you gave me, “Get synthetic data at custom events in simulations”. It was very helpful, but in that example, when the renderer is called in rep.orchestrator.step, it runs all annotators; I cannot specify running only the RGB or only the segmentation. I modified the example script to make the cube invisible after sem_annot.get_data() but before rgb_annot.get_data(), but these functions are merely getters and do not actually control rendering, so the cube was visible in both the RGB and the segmentation. One of the major reasons I want to separate these renders is that the RGB is very expensive with path tracing, and I don't want to call it twice as often as required.

Do you have any other ideas? I really appreciate your help @ahaidu!

Here is an example using writers and annotators to access the specific data at the specific states:

import asyncio
import os
import omni.usd
from omni.isaac.core.utils.semantics import add_update_semantics
import omni.replicator.core as rep
from pxr import UsdGeom

stage = omni.usd.get_context().get_stage()

sphere = stage.DefinePrim("/World/sphere", "Sphere")
UsdGeom.Xformable(sphere).AddTranslateOp().Set((1, 1, 1))
add_update_semantics(sphere, "sphere")

cube = stage.DefinePrim("/World/cube", "Cube")
UsdGeom.Xformable(cube).AddTranslateOp().Set((0.2, 0.2, 0.0))
add_update_semantics(cube, "cube")

rp = rep.create.render_product("/OmniverseKit_Persp", (512, 512))
rgb_annot = rep.annotators.get("rgb")
rgb_annot.attach(rp)
seg_annot = rep.annotators.get("instance_segmentation", init_params={"colorize": True})
seg_annot.attach(rp)

out_dir = os.path.join(os.getcwd(), "_out_visibility")
print(f"Writing data to {out_dir}")
rgb_writer = rep.writers.get(name="BasicWriter")
rgb_writer.initialize(output_dir=out_dir, rgb=True)
rgb_writer.attach(rp, trigger=None)

seg_writer = rep.writers.get(name="BasicWriter")
seg_writer.initialize(output_dir=out_dir, instance_segmentation=True, colorize_instance_segmentation=True)
seg_writer.attach(rp, trigger=None)

async def run_example_async():
    await rep.orchestrator.preview_async()

    print(f"Both prims are visible, saving rgb..")
    seg_writer.schedule_write()
    await rep.orchestrator.step_async(rt_subframes=8)
    rgb_data = rgb_annot.get_data()
    print(f"rgb_data shape: {rgb_data.shape}")
    ss_data = seg_annot.get_data()
    print(f"ss_data info: {ss_data['info']}")

    print(f"One prim is visible, saving semantic segmentation..")    
    imageable_sphere = UsdGeom.Imageable(sphere)
    imageable_sphere.MakeInvisible()

    rgb_writer.schedule_write()
    await rep.orchestrator.step_async(rt_subframes=8)
    rgb_data = rgb_annot.get_data()
    print(f"rgb_data shape: {rgb_data.shape}")
    ss_data = seg_annot.get_data()
    print(f"ss_data info: {ss_data['info']}")

    print(f"Both prims are visible again.")
    imageable_sphere.MakeVisible()


asyncio.ensure_future(run_example_async())

Thank you @ahaidu for going above and beyond to help me out here! This is pretty much exactly what I was asking for! One last thing, I promise: could you explain whether both the RGB and seg renders happen in each “await rep.orchestrator.step_async(rt_subframes=8)” step, and we are only choosing whether to write the seg or the RGB? And if so, could I fully turn off the RGB, or at least tell it to do minimal rendering? For example, if I am using path tracing with many samples, could I tell it to use 1 sample per pixel for the first step's render and 64 samples for the second?

AFAIK, yes, both annotators are active and processing the data; however, only one ends up being copied to CPU memory (until then it lives on the GPU) and written to disk.

Regarding path tracing, you can dynamically change the number of samples, or dynamically switch between RTX real-time and path tracing, and use path tracing only before collecting the RGB:

using the Replicator helper functions:

rep.settings.set_render_rtx_realtime()
rep.settings.set_render_pathtraced()

or manually changing the settings:

import carb.settings

carb.settings.get_settings().set("/rtx/rendermode", "PathTracing")
carb.settings.get_settings().set("/rtx/pathtracing/spp", 64)
carb.settings.get_settings().set("/rtx/pathtracing/totalSpp", 64)
carb.settings.get_settings().set("/rtx/pathtracing/clampSpp", 64)
carb.settings.get_settings().set("/rtx/pathtracing/optixDenoiser/enabled", 0)


carb.settings.get_settings().set("/rtx/rendermode", "RayTracedLighting")
# 0: Disabled, 1: TAA, 2: FXAA, 3: DLSS, 4: RTXAA
carb.settings.get_settings().set("/rtx/post/aa/op", 2)
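For example, a rough sketch of how this could slot into the two-step capture above (inside the async function, reusing rgb_annot/seg_annot from the earlier example); the exact spp values are placeholders:

import carb.settings

settings = carb.settings.get_settings()

# Cheap real-time frame: only the segmentation from this step is kept
settings.set("/rtx/rendermode", "RayTracedLighting")
await rep.orchestrator.step_async()
ss_data = seg_annot.get_data()

# Expensive path-traced frame: only the rgb from this step is kept
settings.set("/rtx/rendermode", "PathTracing")
settings.set("/rtx/pathtracing/spp", 64)
settings.set("/rtx/pathtracing/totalSpp", 64)
await rep.orchestrator.step_async()
rgb_data = rgb_annot.get_data()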

Thank you @ahaidu, this has been very enlightening!
