Nvidia Omniverse doesn’t seem to make optimal use of the hardware in my system.
In more demanding projects, where the framerate sometimes drops to 4 FPS, I still only have a GPU utilization of about 20% and a CPU utilization of about 23%.
Interestingly, the hardware utilization changes when I run Omniverse in the background (GPU 10%, CPU 12%, and the FPS also drops by a factor of 2).
Are there any settings I can use to optimize hardware utilization? I have already enabled PhysX Flatcache.
Hello @axel.goedrich! I’ve brought this to the developer’s attention. I know that they are currently focused on optimization and stabilization, so I would expect some improvements in the very near future!
In Preferences, you can reduce the “Yield ‘ms’ while not in focus” value to limit the perf degradation when not in focus.
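If you prefer to change it from a script, here is a minimal sketch using carb.settings. The settings path below is an assumption; check Preferences or the settings browser for the real key in your build:

```python
import carb.settings

# Hypothetical settings path for the "Yield ms while not in focus" preference;
# look up the actual key in Preferences / the settings browser for your Kit version.
YIELD_SETTING_PATH = "/app/window/yieldMsWhileNotFocused"  # assumption, not verified

settings = carb.settings.get_settings()
print("Current yield while not in focus:", settings.get(YIELD_SETTING_PATH))

# Lower the yield so the app gives up less frame time when it is not the focused window.
settings.set(YIELD_SETTING_PATH, 0)
```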
To see the actual GPU utilization, rather than the utilization reported by the OS, you can go to Extensions/Utilities/Profiler and enable GPU Profiler. Now, in Window/Utilities/Statistics, GPU Utilization will be available in the drop-down list and enabled as long as GPU Profiler is still running. We’ll improve the user experience for this in a future release to make it more straightforward.
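If it’s easier, the extension can also be enabled from the Script Editor using the extension manager. The extension id below is an assumption, so check the Extensions window for the exact name in your build:

```python
import omni.kit.app

ext_manager = omni.kit.app.get_app().get_extension_manager()

# Extension id is an assumption; search for "profiler" in Window > Extensions
# to find the exact id of the GPU Profiler extension in your build.
GPU_PROFILER_EXT = "omni.kit.profiler.window"

ext_manager.set_extension_enabled_immediate(GPU_PROFILER_EXT, True)
print("enabled:", ext_manager.is_extension_enabled(GPU_PROFILER_EXT))
```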
Do you have an example scene, or more details on what the scene consists of? It might be CPU-bound.
We are currently focusing on contact-rich physics simulations for robotics, and I was experimenting with the physics step size. We are mainly using signed distance field (SDF) based collisions. The performance we achieve is very good, but I was wondering about the unused resources (according to the Windows Task Manager).
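In case it helps, this is roughly how I’ve been tweaking the step size and GPU flags on the Physics Scene from the Script Editor. The PhysxSchema attribute names below are what I believe they are called in current builds, so treat them as assumptions:

```python
from pxr import PhysxSchema
import omni.usd

stage = omni.usd.get_context().get_stage()
scene_prim = stage.GetPrimAtPath("/World/physicsScene")  # adjust to your scene path

# PhysxSceneAPI exposes the solver settings shown under "Advanced" in the UI.
physx_scene = PhysxSchema.PhysxSceneAPI.Apply(scene_prim)

# More steps per second (i.e. a smaller step size) helps contact-rich scenes
# like nuts and bolts, at the cost of more solver work per rendered frame.
physx_scene.CreateTimeStepsPerSecondAttr(240)

# Keep the heavy lifting on the GPU (assumed attribute names from PhysxSchema).
physx_scene.CreateEnableGPUDynamicsAttr(True)
physx_scene.CreateBroadphaseTypeAttr("GPU")
```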
For example, in the Franka Nut and Bolt demo scene from the PhysX previews, I don’t get above 25% GPU utilization and 20% CPU utilization (with 16 robots and 24 nuts). I get about 25 FPS with these settings, which is in my opinion really impressive. But I was just wondering if some tweaking of the settings could result in better utilization.
I also tried the Profiler and the Statistics window, but I somehow only get the utilization of GPU0 (which is the integrated Intel HD graphics). Is the actual GPU utilization so much different from the OS-reported utilization?
To see the CPU utilisation, it’s best to use the Tracy profiler: enable the Profiler Window extension and the Tracy Profiler extension. Then press F5, do some work, and press F5 again; this should bring up the Tracy profiler, where you can see the CPU utilisation.
As for the simulation, it should include the PhysX SDK simulation zones.
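If you drive the scene from Python, you can also add your own zones so they show up next to those in the Tracy capture. Here’s a minimal sketch, assuming the carb.profiler Python bindings are available in your Kit build (the mask value and zone name are arbitrary):

```python
import carb.profiler

def step_my_robots():
    # Arbitrary mask (first argument) and zone name; the zone will appear in the
    # Tracy capture alongside the built-in PhysX SDK simulation zones.
    carb.profiler.begin(1, "step_my_robots")
    try:
        pass  # ... your per-frame work here ...
    finally:
        carb.profiler.end(1)
```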
Also, if you are not interacting with the simulation every frame through Python code, it’s possible to run async simulation (on the Physics Scene, under Advanced, check the async simulation option). This way the GPU scheduling is better and it should give you a significant performance boost.
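If you prefer to set that from a script instead of the property window, something along these lines should work. The attribute name is my best guess for what the async checkbox maps to, so verify it in the raw USD properties of your Physics Scene:

```python
from pxr import Sdf
import omni.usd

stage = omni.usd.get_context().get_stage()
scene_prim = stage.GetPrimAtPath("/World/physicsScene")  # adjust to your scene path

# Assumed attribute behind the async checkbox on the Physics Scene (Advanced section);
# double-check the exact name and token value in the Property window's raw USD view.
attr = scene_prim.CreateAttribute("physxScene:updateType", Sdf.ValueTypeNames.Token)
attr.Set("Asynchronous")
```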