I’m in the process of helping a client evaluate whether Omniverse will work for them as a synthetic data generation platform. We have a pipeline built in Unity now, but are hoping to transition away from it to something more domain-specific. We’ve looked at other options like custom DCC-centric solutions (Blender / Houdini), but it looks like Replicator can meet most of our needs and has a lot of promising features. I’m having trouble finding information on how to implement custom annotators, though. Specifically, we need the ability to generate far-depth annotations (distance to the farthest surface along each ray) rather than near-depth, which is the typical default for camera-distance / Z-depth outputs.
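To make the requirement concrete, here is a minimal NumPy sketch (plain geometry, not Replicator API code) contrasting the two values for a single ray against a sphere; a standard depth annotator reports the first intersection, while we need the second:

```python
import numpy as np

def ray_sphere_hits(origin, direction, center, radius):
    """Return (t_near, t_far) ray parameters for a ray/sphere intersection.

    Assumes `direction` is unit-length and the ray actually hits the sphere.
    """
    oc = origin - center
    b = np.dot(direction, oc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    sqrt_disc = np.sqrt(disc)
    return -b - sqrt_disc, -b + sqrt_disc

# Camera at the origin looking down +Z at a unit sphere centered 5 units away.
origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])
t_near, t_far = ray_sphere_hits(origin, direction,
                                np.array([0.0, 0.0, 5.0]), 1.0)

print(t_near)  # 4.0 -> what a default near-depth annotator reports
print(t_far)   # 6.0 -> the far-depth value we need per pixel
```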
I think any of the following options would work, but I need to confirm whether any of them is actually possible:
- In the camera-distance output, disable double-sided geometry and render only back-faces
- Using MDL, trace rays to the far surface of each object rather than the near surface and save the result to an AOV
- Have the Z-depth / camera-distance shader prioritize the farthest hit rather than the closest hit
- If the point cloud annotator can output all surfaces along a viewing direction, we can cull by the dot product of the surface normal and the camera ray direction
- Maybe using the Raycast graph node? It looks like it doesn’t accept array inputs, though.
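For the point-cloud option above, the culling step itself is straightforward once we have per-point positions and normals; a sketch, assuming the annotator exposes both as (N, 3) arrays (`back_face_points` is a hypothetical helper of ours, not a Replicator function):

```python
import numpy as np

def back_face_points(points, normals, camera_pos):
    """Keep only points whose surface faces away from the camera.

    A surface is back-facing when its normal points in roughly the same
    direction as the camera ray, i.e. dot(normal, view_dir) > 0.
    `points` and `normals` are (N, 3) arrays; normals need not be unit-length.
    """
    view_dirs = points - camera_pos            # camera -> point ray directions
    facing = np.einsum("ij,ij->i", normals, view_dirs)
    return points[facing > 0.0]

# Two hits along one viewing ray: the entry surface (normal toward the
# camera) and the exit surface (normal pointing away from the camera).
camera = np.array([0.0, 0.0, 0.0])
pts = np.array([[0.0, 0.0, 4.0],   # front face of an object
                [0.0, 0.0, 6.0]])  # back face of the same object
nrm = np.array([[0.0, 0.0, -1.0],  # faces the camera
                [0.0, 0.0,  1.0]]) # faces away -> far-depth surface

kept = back_face_points(pts, nrm, camera)
print(kept)  # [[0. 0. 6.]] -> only the far surface survives the cull
```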
Regarding the option of tracing back-face depth with MDL, there is a reference to a “nearest_hit” function in the following blog post:
Is that available from within Omniverse?
Any information about this would really help us decide whether to adopt Omniverse. Thanks!