Hello everyone,
I have been unsuccessfully trying to publish 2D & 3D bounding boxes via the ROS2 bridge and to control the rate at which the messages are published.
- First attempt
At first I adapted the following standalone example, as recommended in 7.2.12.4.2.1. Periodic Image Publishing:
./python.sh standalone_examples/api/omni.isaac.ros_bridge/camera_periodic.py
I created a CameraHelper node similar to the ones used to publish the other camera outputs (depth, for instance):
# Creating an on-demand push graph with cameraHelper nodes to generate ROS image publishers
import omni.graph.core as og

keys = og.Controller.Keys
(ros_camera_graph, _, _, _) = og.Controller.edit(
    {
        "graph_path": ROS_CAMERA_GRAPH_PATH,
        "evaluator_name": "push",
        "pipeline_stage": og.GraphPipelineStage.GRAPH_PIPELINE_STAGE_ONDEMAND,
    },
    {
        keys.CREATE_NODES: [
            ("OnTick", "omni.graph.action.OnTick"),
            ("createViewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
            ("getRenderProduct", "omni.isaac.core_nodes.IsaacGetViewportRenderProduct"),
            ("setCamera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
            ("cameraHelperRgb", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ("cameraHelperInfo", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ("cameraHelperDepth", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
            ("cameraBoundingBoxes2DTight", "omni.isaac.ros2_bridge.ROS2CameraHelper"),
        ],
        keys.CONNECT: [
            ("OnTick.outputs:tick", "createViewport.inputs:execIn"),
            ("createViewport.outputs:execOut", "getRenderProduct.inputs:execIn"),
            ("createViewport.outputs:viewport", "getRenderProduct.inputs:viewport"),
            ("getRenderProduct.outputs:execOut", "setCamera.inputs:execIn"),
            ("getRenderProduct.outputs:renderProductPath", "setCamera.inputs:renderProductPath"),
            ("setCamera.outputs:execOut", "cameraHelperRgb.inputs:execIn"),
            ("setCamera.outputs:execOut", "cameraHelperInfo.inputs:execIn"),
            ("setCamera.outputs:execOut", "cameraHelperDepth.inputs:execIn"),
            ("setCamera.outputs:execOut", "cameraBoundingBoxes2DTight.inputs:execIn"),
            ("getRenderProduct.outputs:renderProductPath", "cameraHelperRgb.inputs:renderProductPath"),
            ("getRenderProduct.outputs:renderProductPath", "cameraHelperInfo.inputs:renderProductPath"),
            ("getRenderProduct.outputs:renderProductPath", "cameraHelperDepth.inputs:renderProductPath"),
            ("getRenderProduct.outputs:renderProductPath", "cameraBoundingBoxes2DTight.inputs:renderProductPath"),
        ],
        keys.SET_VALUES: [
            ("createViewport.inputs:viewportId", 0),
            ("cameraHelperRgb.inputs:frameId", "sim_camera"),
            ("cameraHelperRgb.inputs:topicName", "rgb"),
            ("cameraHelperRgb.inputs:type", "rgb"),
            ("cameraHelperInfo.inputs:frameId", "sim_camera"),
            ("cameraHelperInfo.inputs:topicName", "camera_info"),
            ("cameraHelperInfo.inputs:type", "camera_info"),
            ("cameraHelperDepth.inputs:frameId", "sim_camera"),
            ("cameraHelperDepth.inputs:topicName", "depth"),
            ("cameraHelperDepth.inputs:type", "depth"),
            ("cameraBoundingBoxes2DTight.inputs:frameId", "sim_camera"),
            ("cameraBoundingBoxes2DTight.inputs:topicName", "bbox_2d_tight"),
            ("cameraBoundingBoxes2DTight.inputs:type", "bbox_2d_tight"),
        ],
    },
)
Afterwards, I tried to get the IsaacSimulationGate nodes associated with the render product and the bounding boxes, since that is where the execution rate is supposed to be defined:
# Inside the SDGPipeline graph, Isaac Simulation Gate nodes are added to control the execution rate of each of the ROS image and camera info publishers.
# By default the step input of each Isaac Simulation Gate node is set to a value of 1 to execute every frame.
# We can change this value to N for each Isaac Simulation Gate node individually to publish every N number of frames.
from omni.kit.viewport.utility import get_active_viewport
import omni.syntheticdata

viewport_api = get_active_viewport()
if viewport_api is not None:
    import omni.syntheticdata._syntheticdata as sd

    # Get name of rendervar for RGB sensor type
    rv_rgb = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(sd.SensorType.Rgb.name)
    # Get path to IsaacSimulationGate node in RGB pipeline
    rgb_camera_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv_rgb + "IsaacSimulationGate", viewport_api.get_render_product_path()
    )
    # Get name of rendervar for DistanceToImagePlane sensor type
    rv_depth = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.DistanceToImagePlane.name
    )
    # Get path to IsaacSimulationGate node in Depth pipeline
    depth_camera_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv_depth + "IsaacSimulationGate", viewport_api.get_render_product_path()
    )
    # Get path to IsaacSimulationGate node in CameraInfo pipeline
    camera_info_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        "PostProcessDispatch" + "IsaacSimulationGate", viewport_api.get_render_product_path()
    )
    # Get name of rendervar for Bounding Box 2D Tight sensor type
    rv_bbx2dtight = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.BoundingBox2DTight.name
    )
    # Get path to IsaacSimulationGate node in Bounding Box 2D Tight pipeline
    bbx2dtight_camera_gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        rv_bbx2dtight + "IsaacSimulationGate", viewport_api.get_render_product_path()
    )
Unfortunately the IsaacSimulationGate node whose path is
/Render/PostProcess/SDGPipeline/RenderProduct_omni_kit_widget_viewport_ViewportTexture_0_BoundingBox2DTightSDIsaacSimulationGate
does not exist.
So the following line
og.Controller.attribute(bbx2dtight_camera_gate_path + ".inputs:step").set(10)
generates an error:
omni.graph.core._impl.errors.OmniGraphValueError: Could not find OmniGraph node from node description '/Render/PostProcess/SDGPipeline/RenderProduct_omni_kit_widget_viewport_ViewportTexture_0_BoundingBox2DTightSDIsaacSimulationGate' - Node path '/Render/PostProcess/SDGPipeline/RenderProduct_omni_kit_widget_viewport_ViewportTexture_0_BoundingBox2DTightSDIsaacSimulationGate' was not an OmniGraph node
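One way I could check which gate nodes actually exist, instead of guessing the name, is to traverse the SDGPipeline prims and filter by name. Inside Isaac Sim the paths would come from the USD stage (sketched in the comments below); only the pure filtering helper is shown as executable code, and the sample paths are made-up placeholders, not ones I captured from a real session:

```python
# Sketch: discover which IsaacSimulationGate nodes actually exist under the
# SDGPipeline. In Isaac Sim's Script Editor the paths would be gathered from
# the stage, e.g.:
#   stage = omni.usd.get_context().get_stage()
#   paths = [str(p.GetPath()) for p in stage.Traverse()]
# Only the pure filtering helper is shown here so it runs anywhere.

def find_gate_paths(paths, keyword="IsaacSimulationGate"):
    """Return every node path whose name contains the given keyword."""
    return [p for p in paths if keyword in p]

# Hypothetical sample paths (placeholders for illustration only):
sample = [
    "/Render/PostProcess/SDGPipeline/RenderProduct_Replicator_LdrColorSDIsaacSimulationGate",
    "/Render/PostProcess/SDGPipeline/RenderProduct_Replicator_BoundingBox2DTightSD",
]
print(find_gate_paths(sample))  # only the first entry matches
```

Printing the filtered list would show whether a gate for the bounding-box rendervar exists at all, or under a different name than the one the example constructs.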
If I skip this line, the bounding boxes do get published in the end, but I cannot change the publishing rate.
- Second attempt
This time I tried to follow this example:
7.2.4. Publishing Camera’s Data
Same principle as before: I adapted the example.
import omni.graph.core as og
import omni.replicator.core as rep
import omni.syntheticdata
import omni.syntheticdata._syntheticdata as sd
from omni.isaac.sensor import Camera


def publish_2DBoundingBoxes(camera: Camera, freq):
    # Link the camera's render product and publish the data to the specified topic name.
    render_product = camera._render_product_path
    step_size = int(60 / freq)
    topic_name = camera.name + "_Bbox2D"
    queue_size = 1
    node_namespace = ""
    frame_id = camera.prim_path.split("/")[-1]  # This matches what the TF tree is publishing.

    rv = omni.syntheticdata.SyntheticData.convert_sensor_type_to_rendervar(
        sd.SensorType.BoundingBox2DTight.name
    )
    writer = rep.writers.get("ROS2PublishBoundingBox2DTight")
    writer.initialize(
        frameId=frame_id,
        nodeNamespace=node_namespace,
        queueSize=queue_size,
        topicName=topic_name,
    )
    writer.attach([render_product])

    # Set the step input of the Isaac Simulation Gate node upstream of the
    # ROS publisher to control its execution rate
    gate_path = omni.syntheticdata.SyntheticData._get_node_path(
        "BoundingBox2DTightSD" + "IsaacSimulationGate", render_product
    )
    og.Controller.attribute(gate_path + ".inputs:step").set(10)
    return
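As an aside, the step_size = int(60 / freq) conversion above assumes a 60 FPS render rate and truncates. A small helper (hypothetical name, not an Isaac Sim API) makes the rounding and the minimum step of 1 explicit, which would matter once the gate's step input can actually be set:

```python
def simulation_gate_step(render_fps: float, target_hz: float) -> int:
    """Frames to wait between publishes so the topic runs near target_hz.

    The gate's step input must be at least 1 (publish every frame).
    """
    return max(1, int(round(render_fps / target_hz)))

print(simulation_gate_step(60, 10))   # -> 6
print(simulation_gate_step(60, 100))  # -> 1 (cannot publish faster than the render rate)
```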
It throws the same error as in the first attempt, since the IsaacSimulationGate node whose path is
/Render/PostProcess/SDGPipeline/RenderProduct_Replicator_BoundingBox2DTightSDIsaacSimulationGate
does not exist.
Error:
raise og.OmniGraphValueError(f"Could not find OmniGraph node from node description '{node}' - {error}")
omni.graph.core._impl.errors.OmniGraphValueError: Could not find OmniGraph node from node description '/Render/PostProcess/SDGPipeline/RenderProduct_Replicator_BoundingBox2DTightSDIsaacSimulationGate' - Node path '/Render/PostProcess/SDGPipeline/RenderProduct_Replicator_BoundingBox2DTightSDIsaacSimulationGate' was not an OmniGraph node
I get the same result as in the first attempt when skipping this line:
og.Controller.attribute(gate_path + ".inputs:step").set(10)
The bounding boxes are published, but at an "uncontrolled" rate.
To sum up: how do I control these publishing rates?
Thanks.
Arnaud