I am trying to train a model in Isaac Lab that is an oracle (i.e., somehow gets the ground truth from the simulator rather than perceptive inputs). In particular, I am trying to figure out how, given a mesh or object, to pass the entire point cloud (or some other equivalent “complete representation”) as an observation. Any ideas of how to do this? I cannot find anything that does this in the example code. Thanks!
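To make the ask concrete, here is roughly what I would like to end up with in a manager-based env config. The `oracle_point_cloud` function is exactly the piece I don't know how to write, and the import paths are just my guess from the docs (they seem to differ between Isaac Lab versions), so treat this as a sketch of the intent rather than working code:

```python
import torch

# Import paths are a guess; they vary between Isaac Lab versions
# (isaaclab vs. omni.isaac.lab).
from isaaclab.envs import ManagerBasedRLEnv
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.utils import configclass


def oracle_point_cloud(env: ManagerBasedRLEnv) -> torch.Tensor:
    """Return the complete, ground-truth point cloud of the object for every
    sub-environment. This is exactly the part I am asking about."""
    raise NotImplementedError


@configclass
class PolicyObsCfg(ObsGroup):
    # Full ground-truth point cloud of the object, flattened into the observation.
    object_point_cloud = ObsTerm(func=oracle_point_cloud)
```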
Thanks for the quick reply! I realize I didn’t quite phrase my question that clearly.
It seems the tools available in Isaac Lab and Replicator (some of which you cited) produce data from the perspective of a particular sensor (e.g., the point cloud of an object as seen by a camera). What I am trying to do instead is obtain the complete point cloud (or some other equivalent representation of the entire object). Is there any way to do this natively? As I see it, the options are:
Place cameras in different positions, sample point clouds from each, and do sensor fusion (not great).
Use some external library to compute the point cloud from the object file (also won't work well; a rough sketch of what I mean is below).
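To be explicit about option 2, something like the following is what I had in mind (trimesh here, but Open3D would be similar; the asset path and point count are placeholders):

```python
import trimesh

# Load the object's mesh from disk and sample a fixed-size point cloud from its
# surface, in the mesh's local (canonical) frame.
mesh = trimesh.load("/path/to/object.obj", force="mesh")

num_points = 1024
canonical_points, _ = trimesh.sample.sample_surface(mesh, num_points)  # (N, 3) array
```

On its own this only gives the cloud in the asset's local frame, with no notion of where the object actually is in the sim, which is part of why I don't think it is sufficient.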
So to summarize:
Am I missing any tooling or something in the documentation that does what I am describing?
Is there any way to query the sim itself? Surely it has some "under the hood" notion of where different points are in space; this would skip the need for point clouds and is exactly the kind of "equivalent representation" I mean. A rough sketch of what I imagine is below.
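In case it helps clarify question 2, here is the kind of thing I am imagining: take a canonical cloud sampled offline (e.g., with trimesh as above), query the object's ground-truth pose from the sim, and transform the cloud into the world frame inside an observation function. The module paths, the asset name "object", and the exact attribute names are assumptions based on what I've seen in the docs and may not match the current Isaac Lab API:

```python
import torch

# Assumed import paths; they differ between Isaac Lab versions
# (isaaclab vs. omni.isaac.lab).
from isaaclab.envs import ManagerBasedRLEnv
from isaaclab.utils.math import quat_apply

# (N, 3) cloud sampled offline from the mesh (e.g., with trimesh as sketched
# earlier), saved to disk and loaded once. The path is a placeholder.
canonical_points = torch.load("/path/to/canonical_points.pt")


def oracle_point_cloud(env: ManagerBasedRLEnv, asset_name: str = "object") -> torch.Tensor:
    """Transform the canonical cloud into the world frame using the object's
    ground-truth pose queried from the simulator."""
    obj = env.scene[asset_name]               # assumed to be a rigid-object asset
    pos_w = obj.data.root_pos_w               # (num_envs, 3)
    quat_w = obj.data.root_quat_w             # (num_envs, 4), w-first

    num_envs = pos_w.shape[0]
    num_points = canonical_points.shape[0]
    points = canonical_points.to(env.device).unsqueeze(0).expand(num_envs, -1, -1)

    # Rotate each point by the object's orientation, then translate by its position.
    quats = quat_w.unsqueeze(1).expand(-1, num_points, -1)
    points_w = quat_apply(quats, points) + pos_w.unsqueeze(1)

    # Flatten to (num_envs, N * 3) so it can be concatenated with other observations.
    return points_w.reshape(num_envs, -1)
```

If something along these lines is the intended route, I assume it could then be registered as an observation term (e.g., ObsTerm(func=oracle_point_cloud)) in the ObservationsCfg, but I would still prefer a native mechanism if one exists.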
For future reference and to ensure efficient support and collaboration, please post your topic on the Isaac Lab GitHub repository, following the instructions in Isaac Lab's Contributing Guidelines for discussions, issue reports, feature requests, and contributions to the project.
We appreciate your understanding and look forward to assisting you.