Speeding up simulation - 2023.1.1

I am simulating an AMR in a warehouse environment with 2 cameras, one at 4K resolution and the other at normal resolution. When I simulate it and publish to ROS2, the FPS drops to 6 and the RTF goes down to 0.1. I am using a standard RTX 3080 with 16 GB VRAM and 32 GB RAM. Please help me with a solution to improve performance and achieve an RTF of 0.9, as Gazebo does. (PS: Does Isaac Sim have internal topic publishing like gz sim? Gz sim has its own list of topics and its ROS2 bridge is simple. Am I missing anything?) @Ayush_G @rthaker @toni.sm @ahaidu

Hi there,

Can you check what is causing most of the performance drop?

The two cameras, the robot simulation, or maybe the environment?

Best,
Andrei

Hi,

The robot simulation alone ran at 60 FPS, so that is not the issue. The performance dropped when I added the cameras (one 4K camera and one RGBD). My impression is that, unlike Gazebo, Isaac Sim needs the camera to be rendered explicitly to produce the image and publish it. Are there any ways I can speed up the Isaac ROS2 bridge? Or any way I can avoid creating the render product for the 4K camera? Any suggestions are most welcome for the development.

Can you provide snippets showing how you are creating the cameras?

Can you check the performance without using ROS, creating only the two render products? For example by running this in the Script Editor:

import omni.replicator.core as rep

# Two render products on the default perspective camera:
# one at 4K resolution and one at HD resolution
rp1 = rep.create.render_product("/OmniverseKit_Persp", (4096, 2160), name="rp_4k")
rp2 = rep.create.render_product("/OmniverseKit_Persp", (1280, 720), name="rp_hd")

Does upgrading to 4.0.0 change anything?

Hi,

So initially I tried the example from the Omniverse documentation using Replicator. That slowed my performance down to RTF = 0.4 in Isaac Sim 2023.1.1. I then resorted to the script written by the Pegasus Simulator authors, which initializes the camera by creating a viewport. I saw some improvement (RTF > 0.7), but issues started to show up again as I ramped up the number of low-res cameras and added the 4K camera. My ultimate goal is to stream RGB and depth data from 5 cameras plus a 4K camera to a manager script I'm using for a deep learning algorithm.

This was the script I used:

# `keys` here is og.Controller.Keys (from omni.graph.core), and the self._*
# attributes (namespace, frame id, resolution, camera types, topics) are set
# elsewhere in the camera wrapper class this snippet was taken from.
graph_config = {
    keys.CREATE_NODES: [
        ("on_tick", "omni.graph.action.OnTick"),
        ("create_viewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
        ("get_render_product", "omni.isaac.core_nodes.IsaacGetViewportRenderProduct"),
        ("set_viewport_resolution", "omni.isaac.core_nodes.IsaacSetViewportResolution"),
        ("set_camera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
    ],
    keys.CONNECT: [
        ("on_tick.outputs:tick", "create_viewport.inputs:execIn"),
        ("create_viewport.outputs:execOut", "get_render_product.inputs:execIn"),
        ("create_viewport.outputs:viewport", "get_render_product.inputs:viewport"),
        ("create_viewport.outputs:execOut", "set_viewport_resolution.inputs:execIn"),
        ("create_viewport.outputs:viewport", "set_viewport_resolution.inputs:viewport"),
        ("set_viewport_resolution.outputs:execOut", "set_camera.inputs:execIn"),
        ("get_render_product.outputs:renderProductPath", "set_camera.inputs:renderProductPath"),
    ],
    keys.SET_VALUES: [
        ("create_viewport.inputs:viewportId", 0),
        ("create_viewport.inputs:name", f"{self._namespace}/{self._frame_id}"),
        ("set_viewport_resolution.inputs:width", self._resolution[0]),
        ("set_viewport_resolution.inputs:height", self._resolution[1]),
    ],
}

# Add a ROS2CameraHelper publisher for each selected camera type
valid_camera_type = False
for camera_type in self._types:
    if camera_type not in ["rgb", "depth", "depth_pcl", "semantic_segmentation", "instance_segmentation", "bbox_2d_tight", "bbox_2d_loose", "bbox_3d", "camera_info"]:
        continue

    camera_helper_name = f"camera_helper_{camera_type}"

    graph_config[keys.CREATE_NODES] += [
        (camera_helper_name, "omni.isaac.ros2_bridge.ROS2CameraHelper")
    ]
    graph_config[keys.CONNECT] += [
        ("set_camera.outputs:execOut", f"{camera_helper_name}.inputs:execIn"),
        ("get_render_product.outputs:renderProductPath", f"{camera_helper_name}.inputs:renderProductPath")
    ]
    graph_config[keys.SET_VALUES] += [
        (f"{camera_helper_name}.inputs:nodeNamespace", self._namespace),
        (f"{camera_helper_name}.inputs:frameId", self._tf_frame_id),
        (f"{camera_helper_name}.inputs:topicName", f"{self._base_topic}/{camera_type}"),
        (f"{camera_helper_name}.inputs:type", camera_type)
    ]

    # Publish semantic labels for the annotation camera types
    if self._publish_labels and camera_type in ["semantic_segmentation", "instance_segmentation", "bbox_2d_tight", "bbox_2d_loose", "bbox_3d"]:
        graph_config[keys.SET_VALUES] += [
            (camera_helper_name + ".inputs:enableSemanticLabels", True),
            (camera_helper_name + ".inputs:semanticLabelsTopicName", f"{self._frame_id}/{camera_type}_labels")
        ]
Having so many cameras will have a significant effect on performance, since the scene is rendered for every camera on every update. If you do not need a constant stream of image data, you can also toggle the render products to render only when you actually need the data; this way the simulation will run faster and will only slow down when the extra rendering happens.
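For example, a minimal sketch of that on-demand pattern (the camera paths are placeholders, and it assumes the handle returned by rep.create.render_product exposes set_updates_enabled on its hydra_texture, as in recent Isaac Sim releases; verify against your version):

import omni.replicator.core as rep

# Create the render products once (placeholder camera paths)
rp_4k = rep.create.render_product("/World/Camera4K", (4096, 2160))
rp_hd = rep.create.render_product("/World/CameraHD", (1280, 720))

# Disable rendering while no image data is needed;
# physics keeps stepping at full speed in the meantime
for rp in (rp_4k, rp_hd):
    rp.hydra_texture.set_updates_enabled(False)

# ... run the simulation ...

# Re-enable the render products only when frames are actually needed
for rp in (rp_4k, rp_hd):
    rp.hydra_texture.set_updates_enabled(True)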

See here for similar examples:

Hi Andrei,

Thanks for the suggestions, but since I'm using stereo data, I would want all the cameras active. But I would like to know a few things from a developer's perspective:

→ How can I speed up ROS2 Bridge?
→ How can I get camera data without creating a render product?
→ Is there a way I can activate the bridge after the data is being published, outside the simulating environment?

Hi @vyachu07

For the first point: it is mostly dependent on the render products. To improve performance you can reduce the dimensions of the render product and retrieve a smaller image. We are looking into incorporating compressed images to increase the publish frequency.
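As an illustration of the resolution trade-off (the camera path is a placeholder): halving the linear resolution cuts the pixel count, and with it the per-frame render and publish cost, by roughly 4x.

import omni.replicator.core as rep

# Full 4K:                4096 x 2160 = ~8.8 MPix per frame
# Half linear resolution: 2048 x 1080 = ~2.2 MPix per frame (~4x fewer pixels)
rp_small = rep.create.render_product("/World/Camera4K", (2048, 1080))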

Second point: You will need a render product in order to get camera data.
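If the goal is to consume images in Python without going through the ROS2 bridge, the render product can also be read directly through an annotator. A minimal sketch (camera path and resolution are placeholders):

import omni.replicator.core as rep

# A render product is still required; annotators read from it
rp = rep.create.render_product("/OmniverseKit_Persp", (640, 480))
rgb_annot = rep.AnnotatorRegistry.get_annotator("rgb")
rgb_annot.attach([rp])

# Render a frame on demand, then fetch the RGBA buffer as a numpy array
rep.orchestrator.step()
data = rgb_annot.get_data()
print(data.shape)  # e.g. (480, 640, 4)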

Third point: what do you mean by activating the bridge after the data is published? If you mean deactivating or toggling the Isaac Sim ROS bridge from an external ROS node, there currently isn't a great way to achieve this.