Modelling depth cameras in Isaac Sim


Is there any tutorial on how to simulate a depth camera / Kinect (specifically RGB-D cameras) in Isaac Sim? So far I have found only information about regular cameras, lidars, and radars.

Please try this document: 5. Replicator Composer — Omniverse Robotics documentation.

Thank you for the answer. I managed to get the depth data into both ROS and replicator.

I have a follow-up question: How realistic is the generated depth data? Is there an option to add noise models? Do they take into account properties such as surface material, or is the depth generated in a ground-truth mode, without simulating any physical effects?
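If the simulator does return ground-truth depth, one common workaround is to post-process the depth image yourself before feeding it to downstream code. Below is a minimal NumPy sketch that adds a distance-dependent Gaussian error plus random pixel dropout, loosely mimicking Kinect-style structured-light behaviour. The function name, the quadratic error growth, and all parameter values are illustrative assumptions, not calibrated sensor constants or Isaac Sim API:

```python
import numpy as np

def add_depth_noise(depth, sigma_base=0.002, sigma_scale=0.0025,
                    dropout_prob=0.01, rng=None):
    """Add synthetic noise to a ground-truth depth image (metres).

    Assumptions (not calibrated values):
    - std dev grows quadratically with distance, roughly following
      reported Kinect-style structured-light error curves;
    - a small fraction of pixels drop out and are returned as 0,
      mimicking invalid depth readings.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Distance-dependent standard deviation per pixel
    sigma = sigma_base + sigma_scale * depth ** 2
    noisy = depth + rng.normal(0.0, 1.0, depth.shape) * sigma
    # Randomly invalidate pixels to simulate depth dropout
    dropout = rng.random(depth.shape) < dropout_prob
    noisy[dropout] = 0.0
    return noisy
```

This kind of post-processing will not capture material-dependent effects (e.g. failures on glossy or transparent surfaces), but it is often enough to stop algorithms from overfitting to perfectly clean simulated depth.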

Also, it seems that data generation is still tied to the viewports, as discussed in Reducing GPU memory usage for multi-camera uses. Do you know if there is a timeline for changing this? For instance, we often do not use all the cameras at the same time, so it is wasteful to generate data for all of them continuously.