Custom far-depth Annotator?

I’m in the process of helping a client evaluate whether Omniverse will work for them as a synthetic data generation platform. We have a pipeline built in Unity now, but are hoping to transition away from it to something more domain-specific. We’ve looked at other options like custom DCC-centric solutions (Blender / Houdini), but it looks like Replicator can meet most of our needs and has a lot of promising features. I’m having trouble finding information on how to implement custom annotators, though. Specifically, we need the ability to generate far-depth annotations rather than near-depth, which is the typical default for camera distance / Z-depth based outputs.

I think any of the following options would work, but need to confirm if it’s possible:

  • In the camera-distance output, disable double-sided geometry and render only back-faces
  • Using MDL, raytrace to the far hit of each object rather than the near hit and save the result to an AOV
  • Have the Z-depth / camera distance shader prioritize the farthest hit rather than the closest hit
  • If the point cloud annotator can output all surfaces along a viewing direction, we can cull front faces by the dot product of the surface normal and the camera ray direction (see the sketch after this list)
  • Maybe using the Raycast graph node? It looks like it doesn’t accept array inputs, though.
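
To make the fourth option concrete, here is a minimal NumPy sketch of the culling step. The input arrays and their layout are my assumptions about what a point cloud annotator might return, not an actual Replicator API:

```python
import numpy as np

def cull_to_back_faces(points, normals, cam_pos):
    """Keep only back-facing samples: points whose outward surface normal
    points away from the camera, i.e. dot(normal, view_dir) > 0.

    points  : (N, 3) world-space surface samples along camera rays (assumed)
    normals : (N, 3) unit outward normals at those samples (assumed)
    cam_pos : (3,)   world-space camera position
    """
    view_dirs = points - cam_pos                        # camera -> point
    view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
    facing = np.einsum("ij,ij->i", normals, view_dirs)  # per-point dot product
    return points[facing > 0.0]
```

The far-depth map would then be the per-pixel maximum distance among the surviving samples.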

Regarding the potential to trace back-depth with MDL, there is a reference to a “nearest_hit” function in the following blog post:

Is that available from within Omniverse?

Any information about this would really help us decide whether to adopt, thanks!


After investigating the MDL option further, it looks like the nearest_hit function I referred to is just a toy intersection function implemented in that example code, handling only simple primitives; it is not part of MDL itself.

So that leaves the other options. I’m still hoping to hear from the devs on whether any of them is achievable in Omniverse.

@j_prkr I’ve asked the MDL team about the MDL suggestions. Here’s the response:

“Potential tricks with different transparency settings for front and back faces were disallowed by the MDL compiler in the past, because that is not a physically plausible material and breaks light transport algorithms which require symmetry”

It sounds like the MDL back/front faces approach isn’t feasible. I’ll put the other ideas in front of some other teams here and see what comes back.

Thanks for confirming MDL isn’t a feasible approach.

One other way to do this when I have control of ray directions is to project my NDC ray origin coordinates forward far enough to be completely behind my target object, then trace the rays back towards the camera. This wouldn’t break any physical laws in the shaders, but I assume it isn’t possible because there is no flexibility with the camera rays. Is that correct?
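
For reference, the geometry I mean looks like this (a minimal NumPy sketch; t_far is an assumed bound on the scene extent, and nothing here is an actual Omniverse hook):

```python
import numpy as np

def reverse_rays(origins, directions, t_far):
    """Push each ray origin forward past the scene and flip the direction,
    so the reversed ray travels back toward the camera. The *nearest* hit
    of the reversed ray is the *farthest* hit of the original ray.

    origins    : (N, 3) camera-ray origins
    directions : (N, 3) unit camera-ray directions
    t_far      : scalar distance assumed to be beyond all geometry
    """
    return origins + directions * t_far, -directions

# Far depth along the original ray, given the reversed ray's nearest hit:
#   far_depth = t_far - t_hit_reversed
```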

Any other alternatives in Omniverse?

Let’s assume I use the Python Open3D library to create my own back-depth annotator. What is the recommended way of implementing this?
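
For concreteness, here is a rough sketch of what I have in mind with Open3D’s RaycastingScene, reusing the reversed-ray trick from my earlier post. The mesh, camera parameters, and T_FAR are placeholders; a real annotator would pull the geometry and camera from the USD stage:

```python
import numpy as np
import open3d as o3d

# Placeholder mesh and camera; real values would come from the stage.
mesh = o3d.t.geometry.TriangleMesh.from_legacy(
    o3d.geometry.TriangleMesh.create_box())
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(mesh)

# Primary pinhole-camera rays: each row is origin xyz + direction xyz.
rays = scene.create_rays_pinhole(
    fov_deg=60.0, center=[0.5, 0.5, 0.5], eye=[2.0, 2.0, 2.0],
    up=[0.0, 1.0, 0.0], width_px=320, height_px=240)

rays = rays.numpy().reshape(-1, 6)
origins, dirs = rays[:, :3], rays[:, 3:]
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # t_hit in world units

# Reverse the rays: start beyond the scene and trace back to the camera,
# so the nearest hit of each reversed ray is the farthest original hit.
T_FAR = 100.0  # assumed to exceed the scene extent
rev = np.hstack([origins + dirs * T_FAR, -dirs]).astype(np.float32)
ans = scene.cast_rays(o3d.core.Tensor(rev))

# Misses come back as inf; everything else converts to far depth.
t_hit = ans["t_hit"].numpy()
far_depth = np.where(np.isfinite(t_hit), T_FAR - t_hit, np.inf)
far_depth = far_depth.reshape(240, 320)  # farthest-surface distance map
```

Would wiring something like this up as a custom annotator be the recommended route, or is there a better integration point?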