ROS 2 bridge Camera Helper issues

Is there any way to edit or rebuild the ROS 2 bridge so that I can fix issues with the messages Isaac publishes? I’m testing with Isaac Sim 2022.2.0 and its ability to publish class-labeled information to ROS, but none of the publishers send valid data.

If I make a camera that publishes instance_segmentation information and try to view it in ROS, it’s in the wrong format for tools like RViz or RQT. Is there a way to get the map from pixel color value to class_id?
(screenshot: isaac_encoding_error)

If I change the camera type to bbox_2d_loose/tight, then no data is published on the topic.

If I change the camera to publish bbox_3d data, then it sends messages that are not properly filled out or in the correct scale.
(screenshot: isaac_3d_bbox_array)

If I try to echo the message with the new Humble bridge enabled, I can’t inspect it at all. If I switch to the Foxy bridge, I can see that the data is not filled in correctly and the class_id is missing, which makes the bounding box information not very useful to anyone subscribing.

header:
  stamp:
    sec: 14
    nanosec: 450000753
  frame_id: carter_camera_stereo_left
detections:
- header:    ####### Not filled in!
    stamp:
      sec: 0
      nanosec: 0
    frame_id: ''   
  results:
  - hypothesis:        ####### There is only a single hypothesis for all bounding boxes even though there are multiple class_id's included in the field of view.
      class_id: ''     ####### Not filled in!
      score: 1.0
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 1.0
      covariance:
      - 0.0       ###### all zeros
  bbox:
    center:
      position:
        x: -9.05989933013916
        y: -0.7591999769210815
        z: 0.7092979550361633
      orientation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0
    size:                ####### Not in the correct units of meters. After checking, it looks like "carter_warehouse_navigation.usd" is authored in cm, while the world units are meters.
      x: 37.9998779296875 
      y: 24.99999237060547
      z: 14.875
  id: ''    ####### Not filled in!
- header:
    stamp:
      sec: 0
      nanosec: 1
    frame_id: ''    ####### Not filled in!
  results: []
  bbox:
    center:
      position:  ####### Not filled in!
        x: 0.0
        y: 0.0
        z: 0.0
      orientation:
        x: 5.0e-324
        y: 0.0
        z: 5.0e-324
        w: 2.1219957915e-314
    size:    ####### Not filled in correctly!
      x: 0.0
      y: 1.0
      z: 0.0
  id: ''     ####### Not filled in!
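Until the publisher is fixed, a subscriber can work around the two issues visible in the dump above: rescale the bbox sizes from centimeters to meters and skip the detections that were never filled in. A minimal sketch (the 0.01 factor assumes the stage really is authored in cm, and the `is_filled` heuristic is my own, not part of any Isaac API):

```python
CM_TO_M = 0.01  # assumption: bbox sizes arrive in centimeters

def fix_bbox_size(x, y, z):
    """Rescale a bbox size reported in cm to meters."""
    return (x * CM_TO_M, y * CM_TO_M, z * CM_TO_M)

def is_filled(detection_results, bbox_size):
    """Heuristic: drop detections with no hypotheses or a zero-volume bbox,
    like the empty second entry in the echoed message above."""
    sx, sy, sz = bbox_size
    return bool(detection_results) and sx > 0 and sy > 0 and sz > 0
```

Applied to the first detection above, `fix_bbox_size(37.9998779296875, ...)` gives roughly 0.38 m, which matches the expected object scale.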

Thanks for reporting, will take a look and make sure these issues are fixed

Hi @marq.rasmussen

In the meantime, just to show the segmentation in RViz or RQT, you can use the script attached in the next post:

Use the following callback (32SC1 to 16UC1) if you have more than 256 instances:

import sys
import numpy as np

# min_value/max_value, UPDATE_MIN/UPDATE_MAX and the publisher `pub`
# are defined in the attached script.

def republish_image(image):
    global min_value, max_value
    # Reinterpret the raw 32SC1 buffer; for non-negative labels the
    # float32 bit pattern preserves the ordering of the int values.
    array = np.frombuffer(image.data, dtype=np.float32).reshape(image.height, image.width, -1)
    # Track the running min/max so the normalization stays stable across frames.
    min_value = min(np.min(array).item(), min_value) if UPDATE_MIN else min_value
    max_value = max(np.max(array).item(), max_value) if UPDATE_MAX else max_value
    # Rescale into the 16-bit range (sys.float_info.min avoids division by zero).
    array = np.clip((array - min_value) / (max_value - min_value + sys.float_info.min) * 65535, 0, 65535)
    image.data = array.astype(np.uint16).tobytes()
    image.encoding = "mono16"
    pub.publish(image)
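The core of that callback is a running min/max normalization; the same mapping can be checked on plain numbers without ROS. A small sketch (my own standalone rewrite of the array math in republish_image, not part of the attached script):

```python
import sys

def normalize_to_mono16(values, min_value, max_value):
    """Linearly map raw label values into [0, 65535], clipping outliers,
    mirroring the clip/scale expression in republish_image."""
    span = max_value - min_value + sys.float_info.min  # avoid division by zero
    out = []
    for v in values:
        scaled = (v - min_value) / span * 65535
        out.append(int(max(0, min(65535, scaled))))
    return out
```

With min_value=0 and max_value=10, labels 0, 5 and 10 land at 0, 32767 and 65535, so nearby instance ids spread across the full mono16 range and become distinguishable in RViz/RQT.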

Really appreciate you sharing that script @toni.sm! Thanks again!