How to speed up the ROS 2 image publishing rate from Isaac Sim?

I have a TurtleBot in Isaac Sim, and I have imported a RealSense 455 camera from the drop-down menu at the top. It has three camera prims within it.

I also added a custom camera prim with default configuration to see how that would work.

I publish the camera stream from all the cameras - here is a snippet of the graph for one camera:

It is from the examples: I get the camera feed from an Isaac Render Product and then publish it through the Camera Helper node.
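
Roughly, the graph looks like the sketch below. This is only an approximation of what the Action Graph editor builds; the node type names follow the ROS 2 bridge tutorials, and the graph path and topic name are placeholders for my scene.

    import omni.graph.core as og

    # Sketch of the publishing graph: tick -> render product -> ROS 2 camera helper.
    # Graph path and topic name are placeholders for my scene.
    og.Controller.edit(
        {"graph_path": "/World/CameraGraph", "evaluator_name": "execution"},
        {
            og.Controller.Keys.CREATE_NODES: [
                ("OnTick", "omni.graph.action.OnPlaybackTick"),
                ("RenderProduct", "omni.isaac.core_nodes.IsaacCreateRenderProduct"),
                ("CameraHelper", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ],
            og.Controller.Keys.CONNECT: [
                ("OnTick.outputs:tick", "RenderProduct.inputs:execIn"),
                ("RenderProduct.outputs:execOut", "CameraHelper.inputs:execIn"),
                ("RenderProduct.outputs:renderProductPath", "CameraHelper.inputs:renderProductPath"),
            ],
            og.Controller.Keys.SET_VALUES: [
                ("CameraHelper.inputs:topicName", "rgb"),
                ("CameraHelper.inputs:type", "rgb"),
            ],
        },
    )
    # RenderProduct.inputs:cameraPrim is pointed at the camera prim separately
    # (in the graph editor, or with the set_targets utility).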

However, my output rate for the images is terribly slow - 2-3 Hz at best - and I do not know what I can change to make it better.

I check by running ros2 topic hz on my local machine.

I am running this on a system with 64 GB RAM, a 14th-gen Intel i9, and an NVIDIA 4090 16 GB GPU, and I see CPU utilization hovering around 40%, so it is not throttling yet.

The rate does not change much whether I publish one camera stream or three - if it were just a processing bottleneck, we should see a difference there, right?

Finally - when I subscribe to the topic from my device and try to save the image or do anything with it, Isaac Sim starts to slow down and glitch until I turn the simulation off.

Is there an easy way for me to access the raw camera stream from Omniverse in my own code? It does not need to be through ROS. I want the frame data for my algorithms, which are in Python/C++.

Thanks


I don't have a solution, but I am also seeing performance degradation of Isaac Sim when publishing cameras through ROS and wanted to confirm the problem.

I was attempting to use the framesSkipCount option to decrease the amount of data published, but it does not help significantly. Even when I raise it to a very high number like 240, which should publish only once every 4 seconds, Isaac Sim is still very slow.
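
For reference, we set it programmatically along these lines. The graph path here is a placeholder, and the exact attribute name on the Camera Helper node is an assumption that may differ between Isaac Sim versions:

    import omni.graph.core as og

    # Placeholder path: point this at the ROS2 Camera Helper node in your Action Graph.
    # The attribute name is an assumption and may vary between versions.
    attr = og.Controller.attribute("/World/CameraGraph/CameraHelper.inputs:frameSkipCount")
    og.Controller.set(attr, 240)  # skip 240 rendered frames between publishes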

Hi @mattmazzola, @robotics-qc

Thank you for the posts.
Could you please share the scene and launch file for your case?
I checked the frame rate of this nvblox sample, and the topic hz is not that low:
https://nvidia-isaac-ros.github.io/concepts/scene_reconstruction/nvblox/tutorials/tutorial_isaac_sim.html

admin@toddt-Precision-3680:/workspaces/isaac_ros-dev$ ros2 topic hz /front_stereo_camera/left/image_raw
average rate: 17.726
        min: 0.004s max: 0.116s std dev: 0.02079s window: 20
average rate: 17.715
        min: 0.004s max: 0.116s std dev: 0.02298s window: 38

Thanks,
-Todd

Could you please share the scene and launch file for your case?

I don't think I can share the exact files, but perhaps I can create a reduced set that reproduces the error.
I don't have an exact mapping of what you mean by scene and launch files. I am using code similar to the "Loaded Scene" extension, which uses extension.py, ui_builder.py, and scenario.py.

Perhaps by launch you mean launch.json, and by scene you mean scenario.py.

I will look into the sample you linked. Thanks!

I did not find a solution and would still like to know if other people can confirm the same performance degradation, but I will add other details that people may find useful.

Publishing to the topic does not cause the issue.
It is only when you set up a subscriber, such as echoing the topic or recording to a ROS bag, that the performance of Isaac Sim degrades to an unusable level: less than 1 frame per second.
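
For what it's worth, a minimal subscriber like the sketch below is enough to reproduce it on our side. The topic name is a placeholder for whatever the Camera Helper node publishes:

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image


    class ImageSink(Node):
        def __init__(self):
            super().__init__("image_sink")
            # Topic name is a placeholder; any Isaac Sim camera topic shows the slowdown.
            self.create_subscription(Image, "/rgb", self.on_image, 10)

        def on_image(self, msg: Image):
            pass  # even an empty callback is enough to degrade the simulation


    rclpy.init()
    rclpy.spin(ImageSink())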

We confirmed the performance degradation is associated with the number of topics being recorded.
It is severely limited with only one topic recorded and gets marginally worse with more.

Are there alternative methods to optimize the existing ROS output pipeline?

  • We have already attempted the framesToSkip option without success
  • We have confirmed the issue exists across different extensions
  • Are qosProfile and queueSize helpful here? (see the QoS sketch after this list)
    • I think qosProfile is unset, which ignores the queue by default, so explicitly setting the queue size to 0 should have no effect
  • Can the RenderProduct or ROS publisher output an encoded image file instead of raw pixel values to decrease the data size?
    • While reducing data size, this could put even more computational load on Isaac Sim
  • Dramatically reducing the output resolution of the RenderProduct
    • Sacrifices training data quality
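
On the QoS question, one experiment would be subscribing with the best-effort sensor-data profile instead of the default reliable one, in case reliable delivery is back-pressuring the publisher. A minimal sketch, reusing the placeholder topic from the earlier snippet:

    import rclpy
    from rclpy.qos import qos_profile_sensor_data
    from sensor_msgs.msg import Image

    # qos_profile_sensor_data uses best-effort reliability and a shallow history,
    # so a slow subscriber drops frames instead of stalling the pipeline.
    rclpy.init()
    node = rclpy.create_node("best_effort_sink")
    node.create_subscription(Image, "/rgb", lambda msg: None, qos_profile_sensor_data)
    rclpy.spin(node)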

Mitigation Solution: Manually capturing camera frames to local files and post-processing to merge them back into a ROS bag

Given we couldn't find a solution for ROS, we implemented an alternative that gets us around ~20 fps. This is still slow, but it's usable data.

Capture the frame using something like:

    from omni.isaac.sensor import Camera

    def capture_single_camera_frame_to_dict(self, camera: Camera):
        # get_current_frame() returns the annotator outputs for the latest
        # rendered frame; clone=False avoids copying the underlying buffers.
        current_frame = camera.get_current_frame(clone=False)
        return dict(
            rgb=current_frame["rgba"][:, :, :3],  # alpha is always 255, so we can drop it
            depth=current_frame["distance_to_camera"],
        )

Save the data to a binary file using pickle for performance.
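
Roughly like the sketch below, where frame_dict is the dict returned by capture_single_camera_frame_to_dict above; the file layout is just illustrative:

    import pickle
    from pathlib import Path

    def save_frame(frame_dict: dict, out_dir: Path, frame_index: int) -> None:
        # pickle keeps the numpy arrays in raw binary form, which is much faster
        # than encoding each frame to PNG/JPEG on the capture path.
        path = out_dir / f"frame_{frame_index:06d}.pkl"
        with path.open("wb") as f:
            pickle.dump(frame_dict, f, protocol=pickle.HIGHEST_PROTOCOL)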

I didn't try the sample above, but I think my co-worker may have more metrics to share here.
I think even with your rate of 17 Hz above, our solution may still be faster.

If this is the expected performance of Isaac Sim cameras with ROS, I think this warrants a disclaimer in the product and documentation so developers do not waste time setting up the ROS output pipeline only to realize it will not meet their requirements.

Sorry - I had given up on this thread and just saw there has been some activity.

So I played around a little more, and maybe it is some rendering mode issue - not sure.

Regarding files - I just load the USD from a .usd file where I have set up the robot and its action graphs; I do not use a Python launch.

Interestingly, when I use the Isaac Sim MoveIt tutorial demo and launch an example there that has a RealSense camera in it, I get a higher FPS - maybe 20-25 FPS - but the same camera does not seem to work well in my custom scene. I am not sure which settings are different.

I see that in the tutorial code they set some rendering mode, and I have not yet verified whether that makes a difference. I am new to Isaac Sim and have not used it through Python-based launching yet.
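
From skimming the tutorial code, the Python-based launch looks roughly like the sketch below. The renderer value is my assumption from the docs, and I have not verified yet whether it is what makes the difference:

    from omni.isaac.kit import SimulationApp

    # Start Isaac Sim with an explicit renderer before importing other omni modules.
    # "RayTracedLighting" is the commonly documented value; treat it as an assumption.
    simulation_app = SimulationApp({"renderer": "RayTracedLighting", "headless": False})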

But I would like to know what our sensor output expectation should be on a good PC in a simple scene - it should not be this poor if I set up a camera following the tutorial (and if it is, they should let us know so we do not waste time on it).

If you hear anything more about this, please let me know!