Disable rendering of unused cameras (ROS 2)

Hi!
I’m trying to deploy a simulated robot with a large number of cameras whose images are published through ROS 2. That works just fine, but there are a couple of behaviours I was wondering whether I can get without having to re-implement the whole Camera Helper.

  • While this applies to every ROS 2 graph node, it’s worse for heavy sensors (like IMUs or cameras). These nodes are always publishing, without checking whether there’s any subscriber willing to receive the message. Can that be configured somehow?
  • Once the CameraHelper is created, the Viewport is ALWAYS rendered, even if the Tick never gets back to the CameraHelper again (and that’s because of the temporal graph that gets added). Can it be disabled somehow? Some sensors can probably be disabled by adding a Branch after the Tick (see the sketch below), but it’s a whole different thing when talking about the implementation of the cameras.
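
Something along these lines is what I mean for the simpler sensors. This is only a rough sketch: the publisher node ("PublishImu"), the graph path and the attribute names are placeholders/assumptions and the publisher would still need its usual context and data inputs wired up.

    import omni.graph.core as og

    keys = og.Controller.Keys

    # Build a tiny action graph where a Branch node gates the publisher's exec pin.
    og.Controller.edit(
        {"graph_path": "/ActionGraph", "evaluator_name": "execution"},
        {
            keys.CREATE_NODES: [
                ("Tick", "omni.graph.action.OnPlaybackTick"),
                ("Gate", "omni.graph.action.Branch"),
                ("PublishImu", "omni.isaac.ros2_bridge.ROS2PublishImu"),  # placeholder node
            ],
            keys.CONNECT: [
                ("Tick.outputs:tick", "Gate.inputs:execIn"),
                ("Gate.outputs:execTrue", "PublishImu.inputs:execIn"),
            ],
            keys.SET_VALUES: [
                ("Gate.inputs:condition", False),  # start with publishing disabled
            ],
        },
    )

    # Flip the condition at runtime to enable/disable publishing for that sensor.
    og.Controller.set(og.Controller.attribute("/ActionGraph/Gate.inputs:condition"), True)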

Thanks in advance

Hi @christianbarcelo - Someone from our team will review and answer your question

Hi @christianbarcelo, are you using the latest Isaac Sim release? The ROS 2 camera and lidar helper nodes can now use render products instead of viewports, so no new viewports are created and the data is still sent out over the ROS 2 bridge.
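
In case it helps, a minimal sketch of that setup via Python scripting. The camera prim path, graph path and topic are assumptions, and the node/attribute names are taken from memory of the standalone camera examples, so they may differ between releases:

    import omni.graph.core as og
    import omni.replicator.core as rep

    CAMERA_PRIM = "/World/Robot/front_camera"  # assumed camera prim path

    # Create a render product for the camera instead of opening a new viewport.
    render_product = rep.create.render_product(CAMERA_PRIM, (1280, 720))
    # Older releases return the path string directly instead of an object.
    render_product_path = getattr(render_product, "path", render_product)

    keys = og.Controller.Keys
    og.Controller.edit(
        {"graph_path": "/ROS_Camera", "evaluator_name": "execution"},
        {
            keys.CREATE_NODES: [
                ("OnTick", "omni.graph.action.OnPlaybackTick"),
                ("CameraHelper", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ],
            keys.CONNECT: [
                ("OnTick.outputs:tick", "CameraHelper.inputs:execIn"),
            ],
            keys.SET_VALUES: [
                ("CameraHelper.inputs:renderProductPath", render_product_path),
                ("CameraHelper.inputs:topicName", "front_camera/rgb"),
                ("CameraHelper.inputs:type", "rgb"),
                ("CameraHelper.inputs:frameId", "front_camera"),
            ],
        },
    )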

Hi @rchadha, thanks for the answer, and I’m sorry for the delay on my side.

What you mentioned about the viewport is totally true, and by doing it that way we get the added benefit of the VRAM being released whenever you pause/stop the simulation (which wasn’t happening with the viewports).
However, my first point differs a little from that.

  • Let’s suppose I have a robot with a lot of cameras, but not all of them are required at the same time. What I’d like then is:
    • If nobody is subscribed to the ROS 2 camera topic, the memory is not copied over from the camera’s buffer into the sensor_msgs::Image
    • Consequently, in the case where the camera is exclusively used through ROS 2, why are we expending resources on rendering the camera frame?

The second point might be a little tricky to implement, though the first one is not and could bring a good improvement.

Hi @christianbarcelo, yes, the render products should release the resources on pause/stop in the sim.

  1. ROS is designed in a way that ensures publishers publish and subscribers subscribe irrespective of where/how/if the information is being used, right? You can create the camera and connect it to the ROS camera helper node in response to certain triggers via Python scripts. You could also disable/delete the input of the node itself when you don’t want to publish data out, which ensures that no render buffers are copied into the Image message (see the sketch after this list).
  2. Even if the camera is exclusively being used for ROS2, the rendered information would need to be acquired and then sent over in an Image message, right? Are you referring to any particular resources when you say rendering the camera frame?
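
For the disable/delete part, a minimal sketch of toggling the helper’s exec connection from a script. The graph path and node names here are assumptions; inspect the auto-generated graph in the Stage to find the real ones:

    import omni.graph.core as og

    # Assumed paths; check the generated graph for the actual node names.
    TICK_OUT = "/ROS_Camera/OnTick.outputs:tick"
    HELPER_EXEC = "/ROS_Camera/CameraHelper.inputs:execIn"

    def set_camera_publishing(enabled: bool) -> None:
        """Connect or disconnect the exec pin so the helper never runs (and never copies buffers)."""
        src = og.Controller.attribute(TICK_OUT)
        dst = og.Controller.attribute(HELPER_EXEC)
        if enabled:
            og.Controller.connect(src, dst)
        else:
            og.Controller.disconnect(src, dst)

    set_camera_publishing(False)  # stop publishing for this camera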

What you say about ROS 2 design is true, BUT, for performance reasons, I’m wondering whether the camera publishers (in general, all of them) could check if there’s a subscriber listening to the topic (through count_subscribers) and:
  1. Avoid copying the buffer into the message (a plain rclpy sketch of what I mean is below), and/or
  2. pause() the camera (as per the API: pauses data collection and updating of the data frame).
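
Roughly what I have in mind, as a plain rclpy sketch. The topic name and the frame acquisition are placeholders; the real check would of course have to live inside the bridge’s publisher nodes:

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image


    class LazyCameraPublisher(Node):
        """Only copies the frame into a sensor_msgs/Image when someone is listening."""

        def __init__(self):
            super().__init__("lazy_camera_publisher")
            self.pub = self.create_publisher(Image, "front_camera/rgb", 10)
            self.timer = self.create_timer(1.0 / 30.0, self.tick)

        def tick(self):
            # count_subscribers() (or self.pub.get_subscription_count()) tells us
            # whether anyone is listening; if not, skip the copy and the publish.
            if self.count_subscribers("front_camera/rgb") == 0:
                return
            msg = Image()
            # ... fill msg from the camera buffer here (placeholder) ...
            self.pub.publish(msg)


    def main():
        rclpy.init()
        rclpy.spin(LazyCameraPublisher())


    if __name__ == "__main__":
        main()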

Even though it’s true that I could intercept the ROS2CameraPublisher (and its variants for point clouds or depth images) to keep them from publishing, the ROS2CameraHelper creates the graph behind the scenes, and I don’t think interfering with that automatically generated graph is a good idea.

In the next Isaac Sim release, users should have more seamless access to the ROS APIs, which will unblock your use case. In the meantime, you can look at camera_periodic.py, which uses simulation gates to control publishing from ROS 2 (the node does not get triggered, so no copy happens until it does).
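
A rough sketch of that pattern. The graph path and gate node name are assumptions, and the step semantics are recalled from the camera_periodic.py example, so verify them against your release:

    import omni.graph.core as og

    # The camera helper inserts IsaacSimulationGate nodes in front of each publisher.
    # Their inputs:step attribute controls how often the downstream node fires:
    # 1 = every frame, N = every Nth frame, 0 = never (no trigger, so no buffer copy).
    GATE_STEP_ATTR = "/ROS_Camera/IsaacSimulationGate.inputs:step"  # assumed path

    def set_camera_step(step: int) -> None:
        og.Controller.set(og.Controller.attribute(GATE_STEP_ATTR), step)

    set_camera_step(0)   # stop publishing entirely
    set_camera_step(5)   # publish every 5th rendered frame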