Semantic Imagery in ROS

I applied semantic labels to my scene as described here: 1. Applying and Visualizing Semantic Data — Omniverse Robotics documentation

This works well: when I use the visualizer tool, I can see the labels in the semantic segmentation window. But when I use a ROS camera and check the segmentationEnabled box, I don't see any segmentation images in ROS; the RGB and depth images work, however.

How are you verifying the segmentation image in ROS?
In RViz or via rostopic echo?

rqt
The topic does show up, but it stays black.
The other images, /rgb and /depth, work fine.
I can check whether it is only registering the topic or also publishing black images.

It publishes images, but they are just black, while the visualizer shows me colorful semantic images (raw and parsed).

rostopic hz /semantic
subscribed to [/semantic]
average rate: 22.227
min: 0.039s max: 0.048s std dev: 0.00207s window: 22

The visualizer colors the classes randomly, while the published image encodes each pixel's class ID, starting at zero. Those low values come out looking black, but the image should still contain data if you rostopic echo it.
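A quick sketch of why this happens, using a hypothetical segmentation image with class IDs 0-2 (the array values here are made up for illustration): the IDs are valid data, they are just indistinguishable from black on a 0-255 display scale until you stretch them.

```python
import numpy as np

# Hypothetical segmentation image: pixel values are class IDs
# starting at zero (0 = background, 1 and 2 = two labeled classes).
seg = np.zeros((4, 4), dtype=np.uint8)
seg[0:2, 0:2] = 1
seg[2:4, 2:4] = 2

# Viewed as an 8-bit grayscale image, values 0-2 all look black,
# but the data is still there.
print(seg.max())  # 2 -> essentially black on a 0-255 scale

# Stretching the IDs across the full 8-bit range makes them visible.
visible = (seg.astype(np.float32) / seg.max() * 255).astype(np.uint8)
print(np.unique(visible))  # [  0 127 255]
```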

Is there a way to initialize it like the visualizer?

Not currently; you would have to do that as a post-processing step in an external ROS node. The colors have no meaning in general, since the image only contains the class IDs. The labels that map a class ID to a class name are published on a separate topic.
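The colorizing step itself is simple. Here is a minimal sketch of the mapping (the function name and seeded random palette are my own choices, not part of any Omniverse or ROS API); a real node would wrap this around each incoming sensor_msgs/Image, converted with cv_bridge, and republish the result.

```python
import numpy as np

def colorize(seg, seed=0):
    """Map each class ID in an HxW segmentation array to a random
    RGB color, similar to what the visualizer does."""
    rng = np.random.default_rng(seed)
    n_classes = int(seg.max()) + 1
    # One random color per class; keep class 0 (unlabeled) black.
    palette = rng.integers(0, 256, size=(n_classes, 3), dtype=np.uint8)
    palette[0] = (0, 0, 0)
    # Indexing the palette with the ID image yields an HxWx3 RGB image.
    return palette[seg]

# Dummy 2x2 segmentation image with classes 0, 1, and 2.
seg = np.array([[0, 1], [2, 1]], dtype=np.uint8)
rgb = colorize(seg)
print(rgb.shape)  # (2, 2, 3)
```

Because the palette is seeded, the same class always gets the same color across frames, which the visualizer's random coloring does not guarantee between runs.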

We are working on making it possible to add post-processing steps before publishing images to ROS in a future release.


OK, I will test that.