Hi, I’m trying to use Isaac Gym to train a vision-conditioned policy that requires rendering a depth image for each environment. I find that depth rendering has a significant impact on training throughput. Below are some relevant numbers from tests on a 1080 card with 2048 envs, where the camera is looking at a flat plane:
no depth rendering: 22100 steps/second
with depth rendering at 50Hz in sim: 1380 steps/second
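To put the two figures in perspective, here is a quick back-of-the-envelope calculation in plain Python, using only the numbers above (treating each figure as aggregate steps/second):

```python
# Throughput figures from the tests above (2048 envs, 1080 card).
steps_no_render = 22100  # steps/second without depth rendering
steps_render = 1380      # steps/second with depth rendering at 50 Hz in sim

# Overall slowdown factor introduced by depth rendering.
slowdown = steps_no_render / steps_render
print(f"slowdown: {slowdown:.1f}x")

# Extra wall-clock time rendering adds per aggregate step, in milliseconds.
overhead_ms = (1 / steps_render - 1 / steps_no_render) * 1e3
print(f"rendering overhead per step: {overhead_ms:.3f} ms")
```

So rendering makes each step roughly 16x slower, adding on the order of 0.7 ms of wall-clock time per aggregate step.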
I’m wondering if this is expected, and whether there are plans to make the rendering part faster?