I am using an NVIDIA RTX 3080 to run Isaac Sim. With 50 simple robots (each with two wheels and a laser scanner), the frame time is 50 milliseconds. I measured the wall-clock time, and for 50 robots, 10 seconds of simulation time takes about 30 seconds of real time, i.e. a real-time factor (RTF) of roughly 0.33.
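For reference, this is roughly how I estimate the RTF: time a fixed number of simulation steps with a wall clock and divide the simulated time by the elapsed time. A minimal, simulator-agnostic sketch (the `step_fn` callback and `sim_dt` value are placeholders for your own stepping code):

```python
import time

def measure_rtf(step_fn, sim_dt, n_steps):
    """Estimate the real-time factor: simulated seconds per wall-clock second.

    step_fn -- callback that advances the simulation by one step (hypothetical)
    sim_dt  -- simulated time advanced per step, in seconds
    n_steps -- number of steps to average over
    """
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    wall = time.perf_counter() - start
    return (sim_dt * n_steps) / wall

# Dummy example: each "step" simulates 1 ms but burns roughly 2 ms of real
# time, so the measured RTF should come out somewhere below 1.0.
rtf = measure_rtf(lambda: time.sleep(0.002), sim_dt=0.001, n_steps=100)
print(round(rtf, 2))
```

An RTF below 1.0 means the simulation runs slower than real time, which matches the 0.33 I am seeing.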
I tried the same thing with the multi-robot extension and saw similar slowdowns.
Does anyone know what the constraint is? Looking at my CPU (10%), GPU (30%), and RAM (5%) usage, nothing is maxed out.
Pictures of GPU/CPU usage attached.
I just increased my swap memory to 10 GB with no luck, just saying.
Can you press F8 to open the profiler, enable CPU profiling, set the CPU depth to 10, press "pause updates" on the profiler while your simulation is running, and provide a screenshot?
Hi, I followed your instructions and it seemed to speed up the simulation slightly. I could not measure the RTF, but the FPS was at 24. The screenshot you asked for is attached:
Are you using GPU physics? I don’t really know why, but GPU physics is extremely slow compared to CPU physics.
Apologies for the late reply,
This is a larger issue with the Physics->USD synchronization that is being improved using Omniverse Fabric (formerly called flatcache). If you enable physx.flatcache, the large amount of time it takes to update render transforms will go down. Unfortunately, a lot of the APIs in omni.isaac.core and the ROS/ROS 2 bridges rely on USD directly to read parameters. In some cases (like only reading physics values during simulation) this won't be an issue.
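If you want to try this, one way is to pull the flatcache extension into your experience's .kit file as a dependency. This is a sketch only: the extension id `omni.physx.flatcache` is taken from the post above, and the exact .kit file to edit depends on your install and Isaac Sim version:

```toml
# In your experience .kit file, under the existing [dependencies] table:
[dependencies]
"omni.physx.flatcache" = {}
```

You can also toggle extensions interactively from the Extensions window in the UI, which is easier for a one-off test.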
We are working on enabling Fabric integration by default to reduce the USD synchronization cost during simulation, but it is a large set of changes, and likely only part of the full set of performance improvements will make it into the next release in December.
I am stuck on the same issue with multiple robots using ROS 2; the profiler shows that USD synchronization takes a lot of CPU.
Does the new version have any updates to speed up USD synchronization?