I am currently training an RL agent using input from a camera. At each training step the camera is moved. Before reading the camera output, I call world.step(). However, I have noticed that the camera output does not always match the expected camera position. I believe this is due to slow rendering, but I assumed that world.step() would block until rendering finished.
I then switched to rep.orchestrator.step(), which does seem to block until rendering is finished: I verified this by moving the camera over the same path multiple times and comparing the resulting images.
There are two issues with rep.orchestrator.step():
As training continues, iterations per second drop from 6 to 3.
At around 2,500 steps, the camera output stops updating (it is stuck on a single frame) even though the camera position keeps changing; this behaviour persisted past 10k steps before I killed the run.
Are there any suggestions? And why does world.step() not block for one rendering_dt?
Here is the camera setup:
camera = Camera(
    prim_path='/World/camera',  # no interpolation needed, so no f-string
    resolution=(res_width, res_height),
    orientation=camera_orientation,
)
camera.initialize()

# at each RL step
camera.set_world_pose(new_position)
rep.orchestrator.step()  # or world.step(), or both
rgb = camera.get_current_frame()["rgba"][:, :, 0:3]  # keep the frame, don't discard it
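A cheap way to confirm the stuck-frame symptom is to compare consecutive RGB buffers after each step. Here is a minimal sketch in pure NumPy; the Isaac Sim calls appear only as comments, since the exact loop depends on your setup:

```python
import numpy as np

def frame_changed(prev_frame, new_frame, atol=0):
    """Return True if the new RGB buffer differs from the previous one.

    If this stays False across many steps while the camera is moving,
    the renderer is returning stale frames.
    """
    if prev_frame is None:
        return True
    return not np.allclose(prev_frame, new_frame, atol=atol)

# Hypothetical usage inside the RL loop (camera/world names as in the post):
# camera.set_world_pose(new_position)
# rep.orchestrator.step()
# rgb = camera.get_current_frame()["rgba"][:, :, 0:3]
# if not frame_changed(prev_rgb, rgb):
#     print("warning: renderer returned a stale frame")
# prev_rgb = rgb.copy()
```

Logging how often this fires makes it easy to see exactly when (e.g. around step 2,500) the renderer stops producing fresh frames.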
Hi @replicator.user.123 - It seems like you’re encountering a couple of issues related to rendering and camera updates in your reinforcement learning (RL) simulation. Here are a few suggestions that might help:
1. Ensure the camera is updated before stepping the simulation: make sure you update the camera's position before calling world.step() or rep.orchestrator.step(). If the position is updated after stepping, the rendered image will not reflect the new pose.
2. Check whether the simulation is outpacing the rendering: if the simulation runs faster than the renderer, the rendered image may lag behind the current simulation state. Try slowing the simulation down or explicitly synchronizing it with rendering.
3. Use a blocking call to get the camera image: instead of reading the image immediately after stepping, use a call that waits until the rendered image is ready. This might be a function like camera.get_current_frame(blocking=True).
4. Check for resource leaks or bottlenecks: the falling iterations per second and the camera freezing on one frame could indicate a resource leak. Look for a part of your code that consumes more resources, or gets slower, over time.
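To make the slowdown in point 4 measurable rather than anecdotal, a small rolling iterations-per-second monitor can be dropped into the training loop. This is a generic sketch, not Isaac Sim API:

```python
import time
from collections import deque

class StepTimer:
    """Rolling iterations-per-second monitor for spotting gradual slowdowns."""

    def __init__(self, window=100):
        self.deltas = deque(maxlen=window)  # most recent step durations
        self.last = None

    def tick(self):
        """Call once per RL step; records the time since the previous tick."""
        now = time.perf_counter()
        if self.last is not None:
            self.deltas.append(now - self.last)
        self.last = now

    def ips(self):
        """Average iterations per second over the window (0.0 if no data)."""
        if not self.deltas:
            return 0.0
        return len(self.deltas) / sum(self.deltas)
```

Printing timer.ips() every few hundred steps shows whether the 6 to 3 drop is gradual (suggesting accumulation, e.g. leaked render products) or sudden (suggesting a state change around a specific step count).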
I have been using rep.orchestrator.step() after moving the camera, which works correctly but is much slower than world.step(). I believe world.step() does not block properly for rendering, which is why I cannot use it.
Where can I find resources to synchronize the simulation with the rendering?
How would I know when the rendered image is ready? I have checked the source code for rep.orchestrator.step(); it does some frame waiting in the background. Would that serve the same purpose?
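One way to answer "is the image ready?" without relying on internal frame-waiting is to actively re-step the renderer until the returned buffer actually changes. A minimal sketch with injected callables; in Isaac Sim these might wrap camera.get_current_frame()["rgba"] and a render step, but those mappings are assumptions, not a confirmed API:

```python
import numpy as np

def wait_for_new_frame(get_frame, render_step, prev_frame, max_substeps=16):
    """Step the renderer until get_frame() differs from prev_frame.

    get_frame  -- callable returning the current RGB buffer (placeholder)
    render_step -- callable advancing rendering by one step (placeholder)
    Returns the fresh frame, or None if max_substeps were exhausted
    (i.e. the renderer never produced a new image).
    """
    for _ in range(max_substeps):
        frame = get_frame()
        if prev_frame is None or not np.array_equal(frame, prev_frame):
            return frame
        render_step()  # give the renderer another chance to finish
    return None
```

The max_substeps guard matters: when the stuck-frame state from the original post occurs, this returns None instead of spinning forever, so the training loop can log and recover.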
This is exactly the problem I’ve been having for a long time now and that I’ve seen many issues about.
For RTX, rendering time depends on scene complexity, and (though I don't know about the orchestrator) for path tracing it also depends on how many times stage/world steps are called. If the scene is complex and the viewport "fps" outpaces the rendering time, you are stuck. A solution I found is this one: https://github.com/eliabntt/GRADE-RR/blob/main/simulator/utils/simulation_utils.py#L145 which takes both situations into account and manually performs the multiple rendering steps required.
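The core idea of the linked helper, stripped to its skeleton, is to decouple the physics step from the render step and run several render substeps per physics step so the frame can converge (RTX accumulation / path-tracing samples). A hedged sketch with injected callables; in Isaac Sim these might map to world.step(render=False) and world.render(), but check those names against your version's API:

```python
def step_with_rendering(step_physics, step_render, render_substeps):
    """Advance physics once, then advance the renderer enough times for
    the frame to converge.

    step_physics   -- callable advancing the simulation (placeholder)
    step_render    -- callable advancing rendering only (placeholder)
    render_substeps -- how many render steps each frame needs; depends on
                       scene complexity and renderer settings, which is
                       what the GRADE-RR helper computes dynamically.
    """
    step_physics()
    for _ in range(render_substeps):
        step_render()
```

This keeps the RL timeline advancing exactly one physics step per call while the renderer gets as many substeps as it needs, which is the same effect the pause_timeline=False workaround below achieves for the orchestrator.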
Any update in Isaac 4.0 or 2023 with respect to this issue? replicator.orchestrator.step() does not work for us because it seems to step through the RL algorithm as well. However, replicator.orchestrator.step(pause_timeline=False) does work! That parameter prevents the simulation timeline from being triggered.
The camera_params annotator outputs an incorrect matrix: the rotation seems to be off when I deproject pixels to points and transform them to the world frame. @dennis.lynch @rthaker
Any idea what could be happening?
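A frequent cause of "rotation seems off" in deprojection is a coordinate-convention mismatch between the annotator's camera frame and the one the math assumes (e.g. OpenGL-style +Y up / -Z forward versus OpenCV-style y down / +Z forward), or a row- versus column-major transpose. As a sanity check, the deprojection itself can be verified in isolation with a synthetic camera. A sketch assuming OpenCV-style intrinsics K and a 4x4 camera-to-world transform (not the annotator's native output; convert first if your convention differs):

```python
import numpy as np

def deproject_pixel(u, v, depth, K, cam_to_world):
    """Deproject pixel (u, v) with metric depth into a world-space point.

    Assumes OpenCV conventions: x right, y down, +Z forward, and depth
    measured along the camera's Z axis. If points come out rotated,
    convert the annotator's pose convention (e.g. flip the Y and Z axes)
    before building cam_to_world.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> normalized ray
    p_cam = ray * depth                             # point in camera frame
    p_world = cam_to_world @ np.append(p_cam, 1.0)  # homogeneous transform
    return p_world[:3]
```

Round-tripping a known point through this with an identity pose (principal-point pixel at depth d must land at (0, 0, d)) quickly tells you whether the bug is in the math or in the matrix coming from the annotator.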