How to increase OmniGraph publish rate

I run my robot with two fisheye lens cameras and set the render resolution to 1920x1080, but the publish rate is less than 10 Hz.
I would like to ask:

  1. Are there any specific measures in Isaac Sim 2022.1 to reduce GPU memory usage? The new version seems to take up more GPU memory than before. I used OmniGraph to build two cameras and publish RGB images; on a 3070 Ti machine it crashes due to insufficient memory. I tried reducing the FPS limit and modifying the Min Simulation Frame Rate, but the effect is not obvious and it still crashes.

  2. Is there any way to improve the publish rate of the odom/camera publishers built with OmniGraph? The camera viewport seems to be affected by GPU memory pressure, which reduces the image publish rate (even on a 3080 Ti it does not exceed 10 Hz, and the same happens in the official navigation demo with two camera viewports open); other publish rates, such as odom and tf, drop along with the camera's. Is there any way to improve the publish rate of publishers created from OmniGraph, or to decouple the other publishers from the CameraHelper's publishing so they can use different frequencies?

The above problems may also be caused by incorrect usage on my part; any help is greatly appreciated!


@hmazhar

From what I understand, the publish rate (especially of the camera) is completely tied to the render speed, and there is no way (at least I was not able to find one) to force it to the physics step or otherwise decouple the render and ROS rates.

Have you tested what the max render speed is? When I increase it, time in Isaac Sim slows down.

We are working on adding better tutorials on how to control the rates at which ROS cameras can be published; these will be available soon.

The main issue is that rendering happens asynchronously: a frame can finish rendering after a step has occurred, and sometimes two frames finish rendering at once. Because of this, the ROS image publisher is tied to a frame being ready from the renderer, which limits how precisely the ROS image publishing rate can be controlled.

In general there are two methods:

  • Step physics without rendering, and only render when you want to publish a frame (see the sketch after this list).
  • The post-process render pipeline where the ROS publisher is created has a SimulationGate node; setting step=N on this node publishes only every Nth frame.
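
Here is a minimal sketch of the first method, assuming the omni.isaac.core World API (World.step(render=...) and World.render()):

from omni.isaac.core import World

world = World()
world.reset()

publish_every_n_steps = 6  # e.g. 60 Hz physics -> ~10 Hz publishing

for i in range(600):
    # advance physics only; no frame is rendered, so the ROS camera
    # publishers stay idle
    world.step(render=False)
    if i % publish_every_n_steps == 0:
        # render a single frame; the post-process graph runs and the
        # ROS image publishers fire once
        world.render()

This decouples the physics rate from the publish rate, at the cost of rendering fewer frames overall.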

For example, if you have a scene, you can run the following Python code to change the step so that data is published every other frame.

import omni.graph.core as og
import omni.kit.viewport_legacy
import omni.syntheticdata

step_size = 2
viewport_interface = omni.kit.viewport_legacy.get_viewport_interface()
viewport = viewport_interface.get_viewport_window()

if viewport is not None:
    import omni.syntheticdata._syntheticdata as sd

    # Render variable name backing the RGB sensor
    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)

    # Path to the IsaacSimulationGate node in front of the RGB image publisher
    rgb_camera_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv + "IsaacSimulationGate", viewport.get_render_product_path()
    )

    # Path to the gate in front of the camera_info publisher
    camera_info_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        "PostProcessDispatch" + "IsaacSimulationGate", viewport.get_render_product_path()
    )

    # step=N lets data through (and publishes) only every Nth frame
    og.Controller.attribute(rgb_camera_gate_path + ".inputs:step").set(step_size)
    og.Controller.attribute(camera_info_gate_path + ".inputs:step").set(step_size)
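
If you also publish depth, the same pattern should apply to its gate node; a hedged sketch, assuming sd.SensorType.DistanceToImagePlane is the sensor type backing the depth publisher:

    depth_rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.DistanceToImagePlane.name
    )
    depth_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        depth_rv + "IsaacSimulationGate", viewport.get_render_product_path()
    )
    og.Controller.attribute(depth_gate_path + ".inputs:step").set(step_size)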

If more control is needed, we can look into ways to “pause” the post-process graph and only run it on specific frames. I need to discuss with the OmniGraph team whether this is possible.

We are also working on ways to group OmniGraph nodes together so the user can define the post-process pipeline themselves, instead of relying on how it is autogenerated currently. The current design was driven primarily by how complex things can get when publishing instance/semantic information. This is something we are looking into for the next major kit-sdk update.

Hi @Hammad_M. The problem I’m seeing here is that this is applicable only with path tracing, where I know I need to render N frames before I get all the samples per pixel (spp).

For complex scenes and RTX, a single frame might just not be enough, since .render() advances the scene at the rate set by render_dt. If rendering takes longer (say, for a complex scene or very demanding settings), there is not enough time for the complete render to be published, only a partial one.

The main issue, at least from what I’ve seen, is that rendering seems tied to render_dt rather than to the spp or to the actual time it takes to render a single frame completely (in RTX terms, the actual FPS).
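
(For reference, render_dt here is the rendering step size configured on the simulation context; a minimal sketch, assuming the omni.isaac.core.SimulationContext constructor accepts physics_dt and rendering_dt:)

from omni.isaac.core import SimulationContext

# physics at 60 Hz, rendering at 30 Hz; each render() advances the
# simulated clock by rendering_dt, regardless of how long the GPU
# actually takes to finish the frame
sim = SimulationContext(physics_dt=1.0 / 60.0, rendering_dt=1.0 / 30.0)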

A pausing mechanism would be appreciated, IMO. With that, we would at least be able to use something similar to the rosbridge tick component, and you could even do something silly like:

# pseudocode: tick() stands for the hypothetical pause/tick mechanism
for _ in range(1000):
    world.render()

tick()

and just one image would be published.