Creating camera sensors for doing RL [revived thread]

Reviving an earlier thread with the same title.

The answer there basically says it will be supported eventually. However, our projects still need to get done now.

So, in the meantime, is it possible to use something like SyntheticDataHelper().get_groundtruth(['rgb', 'depth'], viewport) to get the same effect of reading visual data from cameras, or should we just hold our breath for the next Isaac Sim release?

@kellyg, what would you advise?


So far, I’ve been able to get camera feed data following this Replicator example, using a custom Writer: Frequently Used Python Snippets — Omniverse Robotics documentation

Calling rep.orchestrator.step() seems to trigger rendering for all RenderProducts.

However, speed drops almost linearly as the number of cameras increases, and each camera at 600x400 resolution takes about 0.5 to 1 GB of GPU memory (far more than the raw image data requires). Each RenderProduct probably carries significant overhead.
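A back-of-envelope check (plain Python, not Isaac Sim code; the 0.5 GB figure is the low end of the usage reported above) shows how far the observed memory is from the raw image buffers, supporting the overhead hypothesis:

```python
# Raw buffer sizes for one 600x400 camera producing RGBA and 32-bit depth.
WIDTH, HEIGHT = 600, 400

rgba_bytes = WIDTH * HEIGHT * 4    # 8-bit RGBA image
depth_bytes = WIDTH * HEIGHT * 4   # 32-bit float depth image
per_camera_mb = (rgba_bytes + depth_bytes) / 1024**2

print(f"raw buffers per camera: {per_camera_mb:.1f} MB")  # ~1.8 MB

# Compare against the ~0.5 GB per camera observed in practice.
overhead_ratio = 512 / per_camera_mb
print(f"observed usage is roughly {overhead_ratio:.0f}x the raw buffers")
```

Even allowing for a few intermediate render targets, the observed usage is orders of magnitude above the output buffers themselves.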

Also, I’m not sure how the images and the physics simulation are synchronized.
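The synchronization concern can be illustrated with a toy loop (plain Python; the timestep and render cadence are made-up values, and nothing here is an Isaac Sim API): if image capture fires once every K physics steps, the latest image can lag the current physics state by up to K-1 steps.

```python
# Toy model: physics advances every step, rendering/capture fires every
# RENDER_EVERY steps, so observations can be stale by up to
# (RENDER_EVERY - 1) * dt.
dt = 1.0 / 200.0   # assumed physics timestep
RENDER_EVERY = 4   # assumed render cadence

sim_time = 0.0
last_image_time = None
max_lag = 0.0

for step in range(1, 101):
    sim_time += dt                   # advance physics
    if step % RENDER_EVERY == 0:
        last_image_time = sim_time   # capture an image of the current state
    if last_image_time is not None:
        max_lag = max(max_lag, sim_time - last_image_time)

print(f"worst-case image lag: {max_lag / dt:.0f} physics steps")  # 3
```

For vision-based RL this matters: if the policy consumes the most recent image alongside the current proprioceptive state, the two can describe different simulation times unless the stepping is explicitly coordinated.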

Looking forward to the new tensor API to get images from cameras in parallel for RL.

Is there an ETA for the new release?


Hi Le,

We have been working on improving the performance of the camera APIs in Replicator. You may already see some performance gains in the latest Isaac Sim 2022.2.0 release. We are hoping to have an RL example with vision early this year.

Looking forward to the new release @kellyg. According to our profiling of a quadruped, MuJoCo needs about 1/20 of the time for physics simulation plus rendering. It would be great if Isaac Sim could reach similar or better performance!

(Ah, I didn’t realize 2022.2.0 is the new release. I thought it was going to be 2023.1 or something…)

@kellyg Is there any existing code example for adding a camera feed into an RL example?

I ran into this error when trying to adapt my working 2022.1 code, which adds the Unitree A1 vision robot to the cart_pole RL example. I fixed a number of other extension imports, but couldn’t get past this one:

File "/home/lezhao/.local/share/ov/pkg/isaac_sim-2022.2.0/exts/omni.isaac.quadruped/omni/isaac/quadruped/robots/", line 161, in __init__
viewport_api = get_viewport_from_window_name(viewport_name)
File "/home/lezhao/.local/share/ov/pkg/isaac_sim-2022.2.0/kit/exts/omni.kit.viewport.utility/omni/kit/viewport/utility/", line 69, in get_viewport_from_window_name
vp_iface = vp_legacy.get_viewport_interface()
File "/home/lezhao/.local/share/ov/pkg/isaac_sim-2022.2.0/kit/extscore/omni.kit.window.viewport/omni/kit/viewport_legacy/scripts/", line 15, in get_viewport_interface
get_viewport_interface.viewport = acquire_viewport_interface()
RuntimeError: Failed to acquire interface: omni::kit::IViewport (pluginName: nullptr)

Any idea?


Maybe it’s because I’m using the camera in headless mode (Python script environment)?
This wasn’t a problem with the earlier Isaac Sim 2022.1.1 release.
I can’t seem to get past this problem as long as I need to use a render product to capture the rendered image.

We do not currently have an example that uses camera sensors with RL; it’s something we are planning for early this year. I suspect the error you are seeing is related to an update to the Viewport APIs in the latest Isaac Sim release. Some of the code snippets in this tutorial may help with working with Replicator and render products: 3. Offline Dataset Generation — Omniverse Robotics documentation.

Thanks for the reply @kellyg !

I compared my code with the example using camera_prim. It looks OK. I followed the older multi-camera tutorial with a custom Writer tracking the render products: Frequently Used Python Snippets — Omniverse Robotics documentation

It might also be because I’m using vglrun to invoke the Python script remotely. (I’m seeing complaints about extension "GLX" missing on display ":5". Again, this wasn’t a problem with the earlier version, so I’m not 100% sure.)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.