I am running this on a system with 64 GB RAM, a 14th-gen Intel i9, and an NVIDIA 4090 GPU with 16 GB of VRAM, and I see CPU utilization hovering around 40% - so it is not throttling yet.
The rate does not change much whether I publish one camera stream or three - if it were just a processing bottleneck, shouldn't we see a difference there?
Finally - when I subscribe to the topic from my device and try to save the image or do anything with it, Isaac Sim starts to slow down and glitch until I turn the simulation off.
Is there an easy way for me to access the raw camera stream from Omniverse in my own code? It need not be through ROS. I want the frame data for my algorithms, which are in Python/C++.
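For context, what I am after is something along these lines - a sketch only, assuming the omni.isaac.sensor Camera wrapper can be pointed at an existing camera prim (the prim path and resolution here are just examples):

    # Sketch: read frames straight from a camera prim, bypassing ROS entirely.
    # The prim path and resolution are illustrative, not from a real scene.
    from omni.isaac.sensor import Camera

    camera = Camera(prim_path="/World/camera", resolution=(1280, 720))
    camera.initialize()

    # After stepping the simulation, the latest frame is available as numpy data.
    rgba = camera.get_rgba()   # shape (H, W, 4), uint8
    rgb = rgba[:, :, :3]       # drop alpha and hand the array to my algorithms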
I don't have a solution, but I am also seeing performance degradation of Isaac Sim when publishing cameras through ROS and wanted to confirm the problem.
I was attempting to use the frameSkipCount option to decrease the amount of data published, but it does not help significantly. Even when I raise it to a very high number like 240, which would only publish every 4 seconds, Isaac Sim is still very slow.
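In case it helps anyone reproduce: this is roughly how I set it (a sketch; I am assuming the attribute is exposed as inputs:frameSkipCount on the ROS camera helper node, and the graph and node paths are just examples from my setup):

    # Sketch: raise the frame-skip value on a ROS camera helper node via OmniGraph.
    # The graph path, node name, and the inputs:frameSkipCount attribute name are
    # assumptions - check the actual node in your Action Graph.
    import omni.graph.core as og

    og.Controller.edit(
        "/World/ActionGraph",  # example graph path
        {
            og.Controller.Keys.SET_VALUES: [
                ("ros2_camera_helper.inputs:frameSkipCount", 240),
            ],
        },
    )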
Could you please share the scene and launch file for your case?
I don't think I can share the exact files, but perhaps I can create a reduced set that reproduces the error.
I don't have an exact mapping of what you mean by scene and launch files. I am using code similar to the "Loaded Scene" extension, which uses extension.py, ui_builder.py, and scenario.py.
Perhaps by launch you mean launch.json and by scene you mean scenario.py.
I did not find a solution and would still like to know if other people can confirm the same performance degradation, but I will add other details which people may find useful:
- Publishing to the topic does not cause the issue.
- It is only when you set up a subscriber, such as echoing the topic or recording to a ROS bag, that the performance of Isaac Sim degrades to an unusable level, less than 1 frame per second. Even a minimal subscriber like the sketch below is enough to trigger it.
- We confirmed the performance degradation is associated with the number of topics being recorded: it is severely limited with only 1 topic recorded and gets marginally worse with more.
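For reference, a bare-bones subscriber sketch of the kind we used (assuming ROS 2 / rclpy; the topic name is an example):

    # Sketch: a minimal rclpy subscriber. Merely having this running while
    # Isaac Sim publishes images was enough to degrade the sim in our setup.
    # The topic name is illustrative.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class RgbSubscriber(Node):
        def __init__(self):
            super().__init__("rgb_subscriber")
            # Discard messages immediately; merely receiving them is what matters.
            self.create_subscription(Image, "/rgb", lambda msg: None, 10)

    def main():
        rclpy.init()
        rclpy.spin(RgbSubscriber())

    if __name__ == "__main__":
        main()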
Alternative methods to optimize the existing ROS output pipeline?
- We have already attempted the frameSkipCount option without success.
- We have confirmed the issue exists across different extensions.
- Are the QoS profile and queue size helpful here? I think qosProfile is unset, which ignores the queue by default, so explicitly setting the queue size to 0 should have no effect.
- Can RenderProduct or the ROS publisher output an image file instead of raw pixel values to decrease the data size? While it reduces data size, this could put even more computational load on Isaac Sim.
- Dramatically reducing the output resolution of the RenderProduct is possible but sacrifices training data quality.
Mitigation solution: manually capturing camera frames to local files and post-processing to merge them back into a ROS bag.
Given we couldn't find a solution for ROS, we implemented an alternative that gets us around ~20 fps. This is still slow, but it's usable data.
Capture the frame using something like:

    from omni.isaac.sensor import Camera

    def capture_single_camera_frame_to_dict(self, camera: Camera):
        # Read the latest rendered frame without copying the underlying buffers
        current_frame = camera.get_current_frame(clone=False)
        return dict(
            rgb=current_frame["rgba"][:, :, :3],  # alpha is always 255 so we can ignore it
            depth=current_frame["distance_to_camera"],
        )
Save the data to a binary file using pickle for performance.
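Roughly like this (a sketch only - the file naming and layout are illustrative, not our exact code):

    # Sketch: dump each captured frame dict to its own pickle file.
    # The output directory and naming scheme are illustrative.
    import pickle
    from pathlib import Path

    def save_frame(frame: dict, out_dir: Path, index: int) -> None:
        out_dir.mkdir(parents=True, exist_ok=True)
        # The highest protocol keeps serialization overhead low for large arrays.
        with open(out_dir / f"frame_{index:06d}.pkl", "wb") as f:
            pickle.dump(frame, f, protocol=pickle.HIGHEST_PROTOCOL)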
I didn't try the sample above, but I think my co-worker may have more metrics to show here.
I think even with your rate of 17 Hz / fps above, our solution may still be faster.
If this is the expected performance of Isaac Sim cameras with ROS, I think this warrants a disclaimer in the product and documentation so developers do not waste time setting up the ROS output pipeline only to realize it will not meet their requirements.
Sorry - I had given up on this thread and just saw there has been some activity.
So I played around a little more, and maybe it is some rendering mode issue - I am not sure.
Regarding files - I just load the USD from a .usd file where I have set up the robot and the action graphs for it; I do not use a Python-based launch.
Interestingly, when I use the Isaac Sim MoveIt tutorials demo and launch an example there which has a RealSense camera in it, I get a higher FPS - maybe 20-25 fps - but the same camera does not seem to work well in my custom scene. I am not sure which settings are different.
I see that in the tutorial code they set some rendering mode, and I have not yet verified whether that makes a difference. I am new to Isaac Sim and have not used it through Python-based launching yet.
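For reference, this is the kind of setting I mean (a sketch, assuming the standalone SimulationApp launcher; "RayTracedLighting" is the renderer name I saw in tutorial-style code, and whether it explains the FPS difference is exactly what I have not verified):

    # Sketch: launching Isaac Sim standalone with an explicit renderer setting.
    from omni.isaac.kit import SimulationApp

    simulation_app = SimulationApp({
        "renderer": "RayTracedLighting",  # as opposed to the slower "PathTracing"
        "headless": False,
    })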
But I would like to know what our sensor output expectation should be on a good PC in a simple scene - it should not be this poor if I set up a camera following the tutorial (and if it is, they should let us know so we do not waste time on it).
If you hear anything more about this, please let me know!