Question about rendering speed for depth camera sensor

Hi, I’m trying to use isaacgym to train a vision-conditioned policy that requires rendering a depth image for each environment. I’m finding that the rendering has a significant impact on throughput, and therefore on sample efficiency. Below are some relevant numbers from tests on a 1080 card with 2048 envs, where the camera is looking at a flat plane:
no depth rendering: 22100 steps/second
with depth rendering at 50Hz in sim: 1380 steps/second

I’m wondering if this is expected, and whether there are plans to make the rendering part faster?

Thank you!

Hi @stacormed,

Yes, I’m afraid that this is expected. We do hope to improve this in future versions of Isaac Sim, with Gym capabilities integrated, but it’s unlikely that we’ll be able to improve it in the standalone Isaac Gym Preview.

Currently, any rendering requires physics readback from the GPU to the CPU, which slows things down significantly.
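Since that readback cost is paid on every render, one common mitigation (not a fix) is to render the cameras only every Nth physics step rather than every step, as your 50 Hz camera rate already suggests. A minimal sketch of that decimation logic, in plain Python with hypothetical names and rates (the actual Isaac Gym simulate/render calls are only indicated in comments):

```python
# Illustrative sketch, not Isaac Gym API: amortize the per-render
# GPU -> CPU readback by rendering cameras less often than physics steps.

def render_decimation(physics_hz: float, camera_hz: float) -> int:
    """Number of physics steps to run between camera renders."""
    return max(1, round(physics_hz / camera_hz))

def simulate(num_steps: int, physics_hz: float = 200.0, camera_hz: float = 50.0) -> int:
    """Run a decimated loop and return how many renders it performed."""
    every = render_decimation(physics_hz, camera_hz)
    renders = 0
    for step in range(num_steps):
        # physics step would go here (e.g. gym.simulate / gym.fetch_results)
        if step % every == 0:
            # camera render + depth image readback would go here
            renders += 1
    return renders
```

For example, with 200 Hz physics and a 50 Hz camera this renders every 4th step, so a 1000-step rollout pays the readback cost 250 times instead of 1000. It doesn’t change the per-render cost, but it bounds how often you pay it.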

More info on the eventual cure for this problem here: Physics Core — Omniverse Extensions documentation

Take care,
-Gav