Hello, I’m simulating a small environment (2x robot arms) with 3 RGB cameras (pinhole, 640x400, 30fps).
- I get about 55fps in Sim without any cameras running.
- When I create the 3 Camera objects, Sim runs at about 20fps.
- When I connect the cameras to publish on ROS2 topics via ActionGraph, Sim runs at about 13fps.
- And when running ‘headless: native’, I still get about 13fps.
My target for Sim is 30fps.
Is there a different or modified workflow with which I can get better performance on the same hardware? And if better hardware is the answer, what setup would likely achieve my target? Thanks!
For reference, the setup above is with RTX 3080 (10 GB) and Ryzen 7 3700X with 64 GB on Ubuntu 24.04. Isaac Sim is version 2023.1.0 running in Docker.
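For reference, the cameras are created roughly like this (a minimal sketch, assuming the omni.isaac.sensor Camera class; prim paths and placements are illustrative, not my exact code):

import numpy as np
from omni.isaac.sensor import Camera

# Minimal sketch of the setup described above: 3 pinhole RGB cameras at 640x400, 30 fps.
# Prim paths and positions are illustrative.
cameras = []
for i in range(3):
    cam = Camera(
        prim_path=f"/World/cam_{i}",               # illustrative prim path
        resolution=(640, 400),                     # width x height from the post
        frequency=30,                              # target capture rate in frames per second
        position=np.array([0.0, float(i), 1.0]),   # illustrative placement
    )
    cam.initialize()                               # sets up the render product for this camera
    cameras.append(cam)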
Your first step is to upgrade to the latest Isaac Sim. If you’re planning to work with Jetson later on, I would recommend using Ubuntu 22.04 to match Jetpack 6.
You’re also within the hardware requirements but a little on the low end. 55fps without cameras running is already a little slow. Check out the recommendations here:
https://docs.omniverse.nvidia.com/isaacsim/latest/installation/requirements.html
You will get much higher performance from a 4090 or even an RTX 6000 Ada.
Hello, thank you for the suggestions. I’ve upgraded to Isaac Sim 4.1.0 and I’m getting about 13.5fps. I’ll have to see about a hardware upgrade, I guess the virtual cameras are pretty resource hungry for RT cores.
I upgraded to a 4080 super and performance is about the same (~13 fps). I did notice that the GPU is running at about 100W / 320W if that’s useful, with the Isaac Sim process using 2.5 of 8 pCPU cores.
What’s your frame rate without a sim going? Can you save your scene or a non-proprietary scene as flattened and attach it so that I can compare my frame rate to yours?
I ended up removing the ROS2CameraHelpers from the action graph, though I’m still using the Camera class. I use a different image transport mechanism now and that brought up the fps from ~15 to ~20 in my standalone app with sim running.
When I stop the sim run, I get ~29 fps in the HUD. When I save the scene as flattened .usd and open it on its own (no standalone app running), I get ~60 fps from the perspective camera. Looking at the ActionGraph shows that about 10 of the action graph nodes are not connected the way they are in the .usd file. So it seems like the ActionGraph doesn't get wired up to my standalone app when loading from .usd, which could be why the sim loop is so much faster.
I opened a sample warehouse scene and I was seeing a higher frame rate without simulation but a similar frame rate with simulation running (20fps).
I selected the physicsScene prim and selected “Enable GPU Dynamics” and my frame rate during sim has gone up to 60fps. Once you enable this setting you might need to play with other physics scene settings to get a nice, stable simulation, but if you want a >30fps simulation you’ll need to do it on the GPU.
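If you are scripting the scene rather than using the UI, the same setting can be enabled from Python. A minimal sketch, assuming omni.isaac.core and an existing physics scene on the stage:

from omni.isaac.core import SimulationContext

# Minimal sketch: enable GPU dynamics on the physics scene instead of ticking
# "Enable GPU Dynamics" in the Property panel.
simulation_context = SimulationContext()
physics_context = simulation_context.get_physics_context()
physics_context.enable_gpu_dynamics(True)     # run rigid-body dynamics on the GPU
physics_context.set_broadphase_type("GPU")    # GPU broadphase is usually enabled alongside it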
I’m trying to prepare a simple scenario to create a topic here, but since this is reporting a similar issue, here it goes:
- I have a fairly simple scenario with a robot that has no cameras → Really decent FPS
- As soon as I enable one camera using the ROS2CameraHelper (I've also tried doing it through code, but the results are similar), the FPS drops drastically and the RTF gets down to 0.4/0.5.
- Even worse, measured in simulated time, the camera gets published at no more than 20 Hz, even though the Context's rendering dt is set to 1/60. Any other ROS 2 topic that does not require rendering is published at the correct rate, but the cameras fall really far behind in a simulation where physics SHOULD wait for rendering.
I’m using an RTX 4090, so I don’t think it’s an issue with resources, and even if it was, the simulation should get slowed down the same in terms of physics and rendering.
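For completeness, the context is configured roughly like this (a minimal sketch, assuming a standalone omni.isaac.core SimulationContext; not my exact code):

from omni.isaac.core import SimulationContext

# Minimal sketch: physics and rendering dt both set to 1/60, as described above.
# Even so, the camera topic was observed at no more than ~20 Hz of simulated time.
simulation_context = SimulationContext(physics_dt=1.0 / 60.0, rendering_dt=1.0 / 60.0)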
Have you enabled GPU dynamics in the Physics Scene prim?
I gave that a try and it doesn't raise the typical FPS of the sim (still ~20fps with sim cameras running but not connected to the action graph). GPU dynamics does seem to help bring up the dips in performance when there is interaction in the scene, so thanks for that suggestion.
Can you share your scene or a similar scene I can use to reproduce the performance issues you’re seeing?
Running a similar scenario. I was using the Python API to get frames from the camera on demand.
However, I still get performance issues when the cameras are present even though I am not constantly consuming the rendered output. It seems that the RenderProduct, once instantiated, renders constantly even when there are no consumers.
Hence the massive performance drop. I'm using a 4090 as well.
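For reference, the on-demand pattern I mean looks roughly like this (a minimal sketch, assuming the omni.isaac.sensor Camera class; the prim path is illustrative):

from omni.isaac.sensor import Camera

# Minimal sketch of grabbing frames on demand: the RenderProduct behind the camera
# still renders every frame even if get_rgba() is only called occasionally.
camera = Camera(prim_path="/World/cam_0", resolution=(640, 400))  # illustrative path
camera.initialize()

# ... step the simulation ...

rgba = camera.get_rgba()  # pull the latest rendered frame only when it is actually needed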
The render product appears to be the bottleneck for me as well. I was stepping my sim with simulation_context.step(render=True) on every tick and getting rendered virtual camera images every sim tick (default 60 Hz). I only want virtual camera images at 30 fps of sim time, so I updated my sim loop to pass render=True on even frames and render=False on odd frames. This results in the same viewport fps as before (~20 fps) but with effective sim time measured at ~38 fps.
So this doesn't address the raw rendering performance, but it was helpful for getting physics to run closer to realtime, and it implies that controlling the camera framerate via the render step is required on top of setting the Camera object's fps.
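Here is a minimal sketch of that loop, assuming a standalone app with omni.isaac.core (step count and dt values are illustrative):

from omni.isaac.kit import SimulationApp

simulation_app = SimulationApp({"headless": True})

from omni.isaac.core import SimulationContext

# Physics at 60 Hz; render (and therefore the virtual cameras) only on even ticks,
# which gives camera images at roughly 30 fps of sim time.
simulation_context = SimulationContext(physics_dt=1.0 / 60.0, rendering_dt=1.0 / 60.0)
simulation_context.play()

for tick in range(600):                      # illustrative run length
    render_this_tick = (tick % 2 == 0)       # render=True on even frames, False on odd frames
    simulation_context.step(render=render_this_tick)

simulation_context.stop()
simulation_app.close()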
I’ll try to get something to post here this weekend, thanks!
Also, can you share your docker container and launch command?
isaacdebug.zip (4.2 MB)
Here’s a small app showing the drop in performance when using the sensors Camera class. Without using the Camera class, I get about 60 fps runtime. With the Camera class, I get about 25 fps. The scene is just a grid, a UR5e, and a gripper. This app is set to render for every tick.
The app can be run with make app from inside of the folder. There is a bool in the app.py script to turn off the use of the Camera class.
If you open the Profiler (F5), you can see what part of your app is taking time.
I ran your app and most of the time is spent in the RTX render tile function. There are 4 of them: 3 for your cameras, and 1 for the Perspective camera rendering in the app viewport (prim path is '/OmniverseKit_Persp').
Disabling this viewport rendering can get you back around 10 ms of GPU time per frame.
I thought running in headless mode would do that, but was disappointed to find out it doesn't.
I thought setting the viewport to use one of your 3 cameras would do that, but again it doesn't.
Guess it's going to render all cameras regardless of which one is actually being used.
I had to call omni.isaac.core.utils.viewports.destroy_all_viewports(), and then I got a boost from 22 to about 34-35 fps.
The downside is it gave me a bunch of RuntimeErrors complaining that there is no default viewport.
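For anyone else trying this, the workaround boils down to the call below (a minimal sketch; expect the "No default or provided Viewport" RuntimeError shown later in this thread once the default viewport is gone):

from omni.isaac.core.utils.viewports import destroy_all_viewports

# Tear down the default viewport(s) so the '/OmniverseKit_Persp' render tile is no
# longer rendered each frame. Workaround only: extensions that expect a default
# viewport will start raising RuntimeErrors afterwards.
destroy_all_viewports()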
Hey, thanks for looking into it! I was also surprised that headless mode didn’t have better performance, thanks for the explanation as to why. I’m now getting 37 fps (and a bunch of that RuntimeError) after destroying the viewports as you suggested!
I’ll give it a little time to see if any of the NVIDIA staff have more to add to this. If they don’t then I’ll mark your post as the solution.
How bad are those runtime errors? What is the impact?
I am getting this error over and over, but it seems as if it doesn't have any impact:
Traceback (most recent call last):
  File "/home/mmatak/miniconda3/envs/isaaclab/lib/python3.10/site-packages/isaacsim/extsPhysics/omni.physx.ui/omni/physxui/scripts/extension.py", line 81, in on_stage_update
    camera = ViewportCameraState()
  File "/home/mmatak/miniconda3/envs/isaaclab/lib/python3.10/site-packages/isaacsim/extscache/omni.kit.viewport.utility-1.0.17+10a4b5c0/omni/kit/viewport/utility/camera_state.py", line 48, in __init__
    raise RuntimeError("No default or provided Viewport")
RuntimeError: No default or provided Viewport