How to Access Ground Truth Environment Geometry in IsaacLab
Hi NVIDIA Team,
I’m currently working on a deep learning project using IsaacLab, and I want to generate training data for a scene reconstruction model. The idea is to use sensor data (RGB, depth, LiDAR) from a robot (e.g., ANYmal-C) as input, and train a model to reconstruct the surrounding terrain or environment in 3D.
✅ My Setup:
- IsaacLab with `TerrainImporterCfg` to procedurally generate terrains (e.g., rough terrain).
- A robot equipped with multiple cameras or raycasters.
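For concreteness, my terrain setup is paraphrased below. It's adapted from the rough-terrain examples shipped with IsaacLab; the module paths and the `ROUGH_TERRAINS_CFG` preset are from my install and may differ across IsaacLab versions:

```python
# Paraphrased from my config, based on IsaacLab's rough-terrain examples.
# The import paths and ROUGH_TERRAINS_CFG preset are assumptions from my
# install and may be named differently in other IsaacLab versions.
from isaaclab.terrains import TerrainImporterCfg
from isaaclab.terrains.config.rough import ROUGH_TERRAINS_CFG

terrain = TerrainImporterCfg(
    prim_path="/World/ground",        # where I expect the terrain prim to appear
    terrain_type="generator",         # procedurally generated terrain
    terrain_generator=ROUGH_TERRAINS_CFG,
    collision_group=-1,
    debug_vis=False,
)
```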
- I want to pair sensor data with ground-truth 3D geometry from the environment.
❓ My Key Question:
What is the proper or recommended way to directly access the terrain/environment geometry (i.e., full 3D point cloud or mesh) for ground truth labeling?
📌 Specifically:
- If terrain is spawned via `TerrainImporterCfg`, is it expected to be a `Mesh` prim under a known path like `/World/ground`?
- Are there tools/utilities in IsaacLab or Isaac Sim to extract the mesh vertices from the terrain (in world coordinates)?
- If the terrain or other objects are not `Mesh` types, is there a recommended way to convert or sample them for GT data?
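To make the last point concrete: once I have terrain vertices and faces in world coordinates (however I end up getting them), I was planning to sample a GT point cloud roughly like this (pure-NumPy sketch; the function name is mine):

```python
import numpy as np

def sample_surface(vertices, faces, n, rng=None):
    """Uniformly sample n points on a triangle mesh.

    Triangles are chosen with probability proportional to their area, then a
    point is drawn inside each via barycentric coordinates.
    """
    rng = np.random.default_rng() if rng is None else rng
    tris = vertices[faces]                       # (F, 3, 3) triangle corners
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0                           # fold samples back into the triangle
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return tris[idx, 0] + u[:, None] * e1[idx] + v[:, None] * e2[idx]
```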
👀 What I’ve Tried:
- Traversing the USD stage to find `Mesh` prims.
- Using `UsdGeom.Mesh(...).GetPointsAttr().Get()` and transforming to world coordinates.
- Found that many scene objects are `Xform` or `Cone` types, which are not directly sampleable.
- Sometimes terrain prims don't appear at all in the stage (e.g., `/World/ground` is missing).
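The transform step I mentioned is roughly the helper below; the `omni.usd` / `UsdGeom.XformCache` usage in the comment is how I understand the Isaac Sim API, so please correct me if there's a better way:

```python
import numpy as np

def usd_points_to_world(points, local_to_world):
    """Transform Nx3 USD mesh points by a 4x4 local-to-world matrix.

    USD's Gf.Matrix4d uses the row-vector convention, i.e. p_world = p_local @ M,
    so the translation lives in the last row, not the last column.
    """
    pts = np.asarray(points, dtype=np.float64)
    homo = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    return (homo @ np.asarray(local_to_world, dtype=np.float64))[:, :3]

# Inside the sim I gather the inputs like this (not runnable standalone):
#   import omni.usd
#   from pxr import Usd, UsdGeom
#   stage = omni.usd.get_context().get_stage()
#   cache = UsdGeom.XformCache(Usd.TimeCode.Default())
#   for prim in stage.Traverse():
#       if prim.IsA(UsdGeom.Mesh):
#           pts = UsdGeom.Mesh(prim).GetPointsAttr().Get()
#           m = cache.GetLocalToWorldTransform(prim)
#           world = usd_points_to_world(pts, m)
```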
🎯 My Goal:
For every simulation frame, generate a pair:
- Input: Sensor data from the robot (e.g., RGB, depth).
- Ground Truth: Accurate 3D point cloud of the environment near the robot.
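The pairing loop I have in mind is basically: precompute one global GT cloud, then per frame crop it around the robot base. A pure-NumPy sketch of the crop step (helper name is mine):

```python
import numpy as np

def local_gt_crop(gt_points, robot_pos, radius):
    """Keep GT points within `radius` of the robot base (horizontal XY distance)."""
    gt_points = np.asarray(gt_points, dtype=np.float64)
    base_xy = np.asarray(robot_pos, dtype=np.float64)[:2]
    dist = np.linalg.norm(gt_points[:, :2] - base_xy, axis=1)
    return gt_points[dist <= radius]
```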
What’s the best practice for this in IsaacLab?
Thank you so much in advance — any pointers or sample code would be appreciated!