Destroying the viewports greatly improves performance but spams an error message

Isaac Sim Version

4.5.0

Operating System

Ubuntu 22.04

GPU Information

  • Model: RTX 4090
  • Driver Version: 570

Topic Description

Detailed Description

Viewports seem to be created even during headless runs. Destroying them gives a substantial performance boost, which is appreciated, but a cross-dependency apparently makes the viewport always required; otherwise the following error spams the console constantly (even though it does not affect the simulation, and performance stays higher than usual):

[Error] [omni.kit.app._impl] [py stderr]: RuntimeError: No default or provided Viewport
[Error] [omni.kit.app._impl] [py stderr]: Traceback (most recent call last):
[Error] [omni.kit.app._impl] [py stderr]:   File "/home/pontius/.local/share/ov/pkg/isaac-sim-4.5.0/extscache/omni.kit.viewport_widgets_manager-1.0.8+d02c707b/omni/kit/viewport_widgets_manager/manager.py", line 387, in _on_update
[Error] [omni.kit.app._impl] [py stderr]:     active_camera_path = viewport_window.viewport_api.camera_path
[Error] [omni.kit.app._impl] [py stderr]: AttributeError: 'NoneType' object has no attribute 'viewport_api'
[Error] [omni.kit.app._impl] [py stderr]: Traceback (most recent call last):
[Error] [omni.kit.app._impl] [py stderr]:   File "/home/pontius/.local/share/ov/pkg/isaac-sim-4.5.0/extsPhysics/omni.physx.ui/omni/physxui/scripts/extension.py", line 83, in on_stage_update
[Error] [omni.kit.app._impl] [py stderr]:     camera = ViewportCameraState()
[Error] [omni.kit.app._impl] [py stderr]:   File "/home/pontius/.local/share/ov/pkg/isaac-sim-4.5.0/extscache/omni.kit.viewport.utility-1.0.18+d02c707b/omni/kit/viewport/utility/camera_state.py", line 48, in __init__
[Error] [omni.kit.app._impl] [py stderr]:     raise RuntimeError("No default or provided Viewport")
[Error] [omni.kit.app._impl] [py stderr]: RuntimeError: No default or provided Viewport

Steps to Reproduce

  1. Using a Python standalone workflow, or the full UI launched via isaac-sim.sh with the Script Editor, run:
from isaacsim.core.utils.viewports import destroy_all_viewports
destroy_all_viewports()
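For a standalone run, the two lines above can be wrapped in a minimal script. This is a sketch, not verified against every 4.5.0 build; it assumes the standard Isaac Sim 4.5 `SimulationApp` entry point and a local install:

```python
# Minimal standalone repro sketch for Isaac Sim 4.5.
from isaacsim import SimulationApp

# Start headless; a default viewport is still created under the hood.
simulation_app = SimulationApp({"headless": True})

from isaacsim.core.utils.viewports import destroy_all_viewports

# Destroying the viewports is what triggers the RuntimeError spam above.
destroy_all_viewports()

# Step a number of frames so the error has a chance to appear in the log.
for _ in range(100):
    simulation_app.update()

simulation_app.close()
```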

Expected behavior:

Being able to destroy the viewports to get some extra performance should be doable, and crossed dependencies should be solved.

Hi @christianbarcelo!
When you start Isaac Sim, it will create default viewports. You can try destroy_all_viewports(None, False) to keep the main viewport.
Will this work for your purpose?

Alternatively, instead of destroying all viewports, you can disable viewport updates, which similarly saves performance when running in headless mode and doesn’t produce any error messages.

from omni.kit.viewport.utility import get_active_viewport

viewport = get_active_viewport()
if viewport:
    viewport.updates_enabled = False

That does help, but the main viewport still consumes valuable resources. The performance difference when running headless with and without that main viewport is big enough that (IMHO) fixing the dependency issue is worth considering.
Additionally, as a suggestion: instead of just a headless true/false flag in the SimulationApp config, maybe add a separate option to enable/disable viewports. That way you could have a full UI app, a headless app with a viewport (useful for a streaming client), and a fully headless app with no viewport at all.
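A sketch of what that suggestion could look like. Note the `"viewports"` key is purely hypothetical, illustrating the proposal; it does not exist in Isaac Sim 4.5. Only `"headless"` is a real config option:

```python
from isaacsim import SimulationApp

# Hypothetical illustration of the suggestion above: a separate switch
# for viewports, independent of the headless flag. The "viewports" key
# is NOT a real Isaac Sim 4.5 option; it is the proposed addition.
simulation_app = SimulationApp({
    "headless": True,    # real option: no UI at all
    "viewports": False,  # proposed: skip creating any viewport
})
```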

Hi @christianbarcelo In Isaac Sim 5.0 there will be an additional option in SimulationApp config to disable viewport updates when running headless. It does not destroy the viewports but saves a lot of compute by not rendering them when disabled.


Just curious, but performance for what exactly? Rendering out, reinforcement learning, SDG? You have a very powerful card. How much additional speed do you need? How much difference are we talking about? Can you provide metrics?

So your task goes from 40mins to 30mins for example. Is that significant to your workflow? Or is more like 200hrs of calculation down to 150hrs?

Let’s say it saves you 20% in rendering time / calculation time etc. Adding a second card would save you 50%.

Even with a very powerful card, the real-time factor soon drops once you have a few cameras and/or 3D lidars; Isaac Sim is famous for its high resource consumption.
Since I work as a consultant, I have participated in several different types of simulations, and I can say confidently that running headless WITHOUT viewports enabled has helped increase the RTF by about 40% (that number is from the last project where I included this change), even letting the sim run in real time. Of course, how much you gain depends on how heavy your simulation was to begin with.
In SDG projects, where RTF is not the concern but rather the rate at which data is generated, and where Path Tracing is the preferred rendering mode (again, because RTF does not matter when collecting data), I have even seen a ~60% performance boost.
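For reference, the RTF numbers above are just simulated time over wall-clock time. A trivial plain-Python helper (not Isaac Sim code) showing how the quoted speedup figures relate to RTF:

```python
def rtf(sim_time_s: float, wall_time_s: float) -> float:
    """Real-time factor: simulated seconds per wall-clock second.
    RTF >= 1.0 means the sim runs at or faster than real time."""
    return sim_time_s / wall_time_s


def speedup_pct(rtf_before: float, rtf_after: float) -> float:
    """Percentage RTF improvement between two runs."""
    return (rtf_after / rtf_before - 1.0) * 100.0


# Example matching the ~40% figure above: RTF going from 0.5 to 0.7.
print(round(speedup_pct(0.5, 0.7), 1))  # ~40.0
```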

I don’t participate too often in RL projects, so simulation for SITL/CI and SDG are the type of projects I can speak for the most.

Then there are few other things to consider:

  • If you’re working on an SDG project, then more performance implies:
    • Data generated faster (or simply more data), which is always appreciated
    • Cloud instance/s costs reduced (if they’re budget per-hour)
  • If the project is meant for SITL/CI:
    • The longer your robot takes to run a single “mission/task” because of a low RTF, the higher the cost of the instance when running CI. The longer it takes a developer to run the first basic tests locally before sending changes to the cloud testing suite, the longer it takes them to reach the stable solution they are looking for. And the longer simple tests in sim take before testing on the robot, the longer it takes for features to be deployed.
    • If you have a big team and, instead of giving everybody a computer with a 4090, you rely on on-demand Isaac Sim-ready cloud instances, the longer each developer’s tests take, the more instances you will need in parallel, and the higher the cost.
    • This gets even more challenging the more RTX sensors you have; bad news in a world where most robots now carry at least 2-3 cameras plus a 3D lidar.

Workarounds like these (hopefully not workarounds but visible features in the future) have a huge impact and are what helps me keep clients happy with Isaac Sim.

Thanks for your detailed reply. I totally agree and understand the idea yes. It is something I will have to bring up to the RTX engineers as an idea for “true” headless running.

The only thing I would say is that I would try to use RTX Realtime or even the new RTX Realtime 2.0 for rendering. It is very very close to path tracing with just a fraction of the time to render.

For SDG we prefer realism over speed, even though we want to be as fast as possible, so Path Tracing is in general the way to go. But for physics simulation (like connecting with the robots’ stack in real time for dev testing) we have to manage that trade-off and use RTX Realtime. I hadn’t heard about RTX 2.0 though; where can I read about it? Is it already in the field, an extension (like iRay, for example), or something that will come out with the next release?

You will find that RTX Realtime 1.0 is closer than you think to path tracing. Sometimes identical. You only need to use PT for things with complex refraction and reflection. Other than that, they are basically the same. In fact I prefer RTX Realtime output.

If you are using Isaac Sim 4.5 or the new 5.0, OUT TODAY, you can enable RTX Realtime 2.0 in the Preferences > Rendering Menu.

Same with USD Composer.

RTX Realtime 2.0 is FAR superior. It is the near quality of Path tracing with the speed of REALTIME.
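For scripted runs, the render mode can also be switched without the Preferences UI via carb settings. A sketch: `/rtx/rendermode` is the standard Omniverse RTX setting for choosing between the two modes discussed here, but any toggle specific to RTX Realtime 2.0 would live under its own Preferences key that I have not verified:

```python
import carb.settings

settings = carb.settings.get_settings()

# Switch between the two render modes discussed above.
settings.set("/rtx/rendermode", "RaytracedLighting")  # RTX Realtime
# settings.set("/rtx/rendermode", "PathTracing")      # Path Tracing
```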

I keep an eye on the Releases list and the NGC Catalog; looking forward to the 5.0 version that you said is OUT TODAY!

Sometimes identical. You only need to use PT for things with complex refraction and reflection.

Warehouses are one of the preferred working areas for most robots (not the only one, but one where robots are quite popular), and materials like shrink wrap are common there and especially tricky when it comes to refraction. That is one of the reasons SDG WITH PATH TRACING is generally preferred, and why things like disabling hydra textures between captures when using PT + physics, or destroying the viewports to get some extra performance, are a must.

Thanks for the pointer to RTX 2.0. It looks neat, closer to Path Tracing than RTX 1.0, but much, much faster.
In terms of performance, I just measured it and, at least for the simulation I’m working on right now, its rendering time is twice as long as RTX 1.0’s. Still much better than Path Tracing, but when performance prevails over quality, RTX 1.0 is still preferred.

Thanks again for the info!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.