No output from Image segmentation

Hello everyone,
I am trying to use isaac_ros_image_segmentation (Segment Anything): https://nvidia-isaac-ros.github.io/repositories_and_packages/isaac_ros_image_segmentation/isaac_ros_segment_anything/index.html

@nvidia
However, I am facing issues in the Run Launch File step; there is no output from segmentation. Here is what I am running:

  1. I launch with:
    ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=segment_anything interface_specs_file:=/workspaces/isaac_ros-dev/isaac_ros_assets/isaac_ros_segment_anything/quickstart_interface_specs.json sam_model_repository_paths:=[/workspaces/isaac_ros-dev/isaac_ros_assets/models]

  2. Then I play a bag file, remapping the camera_info topic (a quick topic check is shown right after this list):
    ros2 bag play -l yourbag/yourbag_0.db3 --remap /zed/zed_node/left_raw/camera_info:=/camera_info
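
For reference, whether the remapped topics are actually coming through can be checked with the standard ros2 CLI (nothing specific to this quickstart; the topic echoes further below show they do carry data):

    ros2 topic hz /image
    ros2 topic hz /camera_info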

From the launch terminal I am getting the output below:

[component_container_mt-1] [WARN] [1724760009.666062561] [sam_data_encoder_node.NitrosSubscriber]: 60.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver
[component_container_mt-1] [WARN] [1724760018.512943063] [sam_resize_node.NitrosSubscriber]: 150.0s passed while waiting to push a message entity (eid=322) to the receiver camera_info_in
[component_container_mt-1] [WARN] [1724760025.060794941] [sam_data_encoder_node.NitrosSubscriber]: 70.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver
[component_container_mt-1] [WARN] [1724760033.911613216] [sam_resize_node.NitrosSubscriber]: 160.0s passed while waiting to push a message entity (eid=322) to the receiver camera_info_in
[component_container_mt-1] [WARN] [1724760040.460711157] [sam_data_encoder_node.NitrosSubscriber]: 80.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver
[component_container_mt-1] [WARN] [1724760049.302374285] [sam_resize_node.NitrosSubscriber]: 170.0s passed while waiting to push a message entity (eid=322) to the receiver camera_info_in
[component_container_mt-1] [WARN] [1724760055.861622897] [sam_data_encoder_node.NitrosSubscriber]: 90.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver
[component_container_mt-1] [WARN] [1724760064.691150641] [sam_resize_node.NitrosSubscriber]: 180.0s passed while waiting to push a message entity (eid=322) to the receiver camera_info_in
[component_container_mt-1] [WARN] [1724760071.255759039] [sam_data_encoder_node.NitrosSubscriber]: 100.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver
[component_container_mt-1] [WARN] [1724760080.078673048] [sam_resize_node.NitrosSubscriber]: 190.0s passed while waiting to push a message entity (eid=322) to the receiver camera_info_in
[component_container_mt-1] [WARN] [1724760086.645673726] [sam_data_encoder_node.NitrosSubscriber]: 110.0s passed while waiting to push a message entity (eid=1061) to the receiver prompt_receiver

Image topic data
ros2 topic echo /image
header:
  stamp:
    sec: 0
    nanosec: 0
  frame_id: ''
height: 720
width: 1280
encoding: bgr8
is_bigendian: 0
step: 3840
data: [136, 132, 161, 136, 131, 159, 140, 134, 166, 138, 132, 163, 137, 132, 163,
  138, 133, 164, 141, 136, 167, 140, 135, 166, 141, 136, 167, 141, 136, 166,
  138, 131, 165, 141, 134, 167, 143, 139, 172, 141, 136, 169, 145, 140, 173,
  145, 140, 173, 145, 140, 173, 142, 137, 170, 145, 151, 144, 178, 150, 143,
  178, 149, 144, 174, 148, 143, 174, 151, 146, 177, 146, 141, 173, 148, 142, ...]

ros2 topic echo /camera_info --once
header:
  stamp:
    sec: 1724748984
    nanosec: 419116174
  frame_id: zed_left_camera_optical_frame
height: 720
width: 1280
distortion_model: rational_polynomial
d: [-0.6160699725151062, 1.8807599544525146, 0.00021522599854506552, -8.745469676796347e-05,
  0.21952299773693085, -0.5077660083770752, 1.7325299978256226, 0.37699800729751587]
k: [535.1500244140625, 0.0, 641.530029296875, 0.0, 535.5, 368.0954895019531, 0.0, 0.0, 1.0]
r: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
p: [535.1500244140625, 0.0, 641.530029296875, 0.0, 0.0, 535.5, 368.0954895019531,
  0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi:
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: false

Please help

Hi @anishm1

Welcome to the Isaac ROS forum.

Looking at the output you posted, the rosbag is streaming data properly.

I suggest restarting your demo in this order:

  1. Run the visualization tool first:
ros2 run rqt_image_view rqt_image_view
  2. Run the launch file:
cd ${ISAAC_ROS_WS} && \
   ros2 launch isaac_ros_examples isaac_ros_examples.launch.py launch_fragments:=segment_anything interface_specs_file:=${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segment_anything/quickstart_interface_specs.json sam_model_repository_paths:=[${ISAAC_ROS_WS}/isaac_ros_assets/models]
  3. Play the rosbag file (a quick output check is shown right after these steps):
ros2 bag play -l ${ISAAC_ROS_WS}/isaac_ros_assets/isaac_ros_segment_anything/segment_anything_sample_data/
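
If rqt_image_view does not show a mask after these steps, you can also check from the command line whether the segmentation output is being published at all (plain ros2 CLI; the exact topic name can vary with the launch fragment, but the colored mask is normally under /segment_anything):

    ros2 topic list | grep segment
    ros2 topic hz /segment_anything/colored_segmentation_mask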

Let me know if this fixes your issue.

Best,
Raffaello

Hi @Raffaello
Thank you for your reply. I am still not getting any segmentation output on the /segment_anything/colored_segmentation_mask topic. I followed the order you suggested, but I am still facing the same error.
For your information, I recorded my bag file with a ZED 2i camera and remap its topics to /image and /camera_info with a node I wrote called converter.py (attached below), which publishes at 10 Hz.

The bag file I am playing does not publish the /detectnet/detections topic.
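
For completeness, whether the segmentation graph is actually waiting on that detections topic can be checked with the standard ros2 CLI (this only lists existing publishers and subscribers, it does not create any):

    ros2 topic info /detectnet/detections --verbose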

**What I am running in the terminal**

The bag file is a video recorded with the ZED 2i camera; it contains images like this:

Python converter.py code
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, CameraInfo
from cv_bridge import CvBridge, CvBridgeError
import cv2

class ImageConverter(Node):
    def __init__(self):
        super().__init__('image_converter')

        # Subscribe to the original image topic
        self.image_subscription = self.create_subscription(
            Image,
            '/zed/zed_node/left_raw/image_raw_color',
            self.image_callback,
            10)

        # Subscribe to the camera info topic
        self.camera_info_subscription = self.create_subscription(
            CameraInfo,
            '/zed/zed_node/left_raw/camera_info',
            self.camera_info_callback,
            10)

        # Publisher for resized image and camera info
        self.image_publisher = self.create_publisher(Image, '/image', 10)
        self.camera_info_publisher = self.create_publisher(CameraInfo, '/camera_info', 10)

        self.bridge = CvBridge()
        self.camera_info = None
        self.latest_image = None

        # Create a timer to publish at 10 Hz
        self.timer = self.create_timer(0.1, self.timer_callback)  # 0.1 seconds = 10 Hz

    def image_callback(self, msg):
        try:
            # Convert the ROS Image message to OpenCV format
            cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')

            # Resize the image to the desired width and height
            resized_image = cv2.resize(cv_image, (1200, 632))

            # Store the resized image for publishing in the timer callback
            self.latest_image = self.bridge.cv2_to_imgmsg(resized_image, encoding='bgr8')
        except CvBridgeError as e:
            self.get_logger().error(f'Error converting image: {e}')

    def camera_info_callback(self, msg):
        # Store the original camera info and scale parameters
        self.camera_info = msg

    def timer_callback(self):
        if self.latest_image and self.camera_info:
            # Publish the latest resized image
            self.image_publisher.publish(self.latest_image)

            # Adjust camera info for resizing
            original_height = self.camera_info.height
            original_width = self.camera_info.width
            scale_x = 1200 / original_width
            scale_y = 632 / original_height

            resized_camera_info = CameraInfo()
            resized_camera_info.header = self.camera_info.header
            resized_camera_info.header.stamp = self.get_clock().now().to_msg()  # Update timestamp
            resized_camera_info.height = 632
            resized_camera_info.width = 1200
            resized_camera_info.distortion_model = self.camera_info.distortion_model
            resized_camera_info.d = self.camera_info.d

            # Adjust the K matrix (intrinsic parameters)
            resized_camera_info.k[0] = self.camera_info.k[0] * scale_x  # fx
            resized_camera_info.k[2] = self.camera_info.k[2] * scale_x  # cx
            resized_camera_info.k[4] = self.camera_info.k[4] * scale_y  # fy
            resized_camera_info.k[5] = self.camera_info.k[5] * scale_y  # cy
            resized_camera_info.k[8] = 1.0

            # Adjust the P matrix (projection matrix)
            resized_camera_info.p[0] = self.camera_info.p[0] * scale_x  # fx
            resized_camera_info.p[2] = self.camera_info.p[2] * scale_x  # cx
            resized_camera_info.p[5] = self.camera_info.p[5] * scale_y  # fy
            resized_camera_info.p[6] = self.camera_info.p[6] * scale_y  # cy
            resized_camera_info.p[10] = 1.0

            resized_camera_info.r = self.camera_info.r
            resized_camera_info.binning_x = self.camera_info.binning_x
            resized_camera_info.binning_y = self.camera_info.binning_y
            resized_camera_info.roi = self.camera_info.roi

            # Publish the resized camera info
            self.camera_info_publisher.publish(resized_camera_info)

def main(args=None):
    rclpy.init(args=args)
    image_converter = ImageConverter()
    rclpy.spin(image_converter)
    image_converter.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
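
For reference, the node above is started directly with Python in the same terminal setup as the rest of the pipeline:

    python3 converter.py

Note that cv2_to_imgmsg is called here without copying the incoming msg.header, which is consistent with the empty stamp and frame_id shown in the /image echo above.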