colorize_semantic_segmentation=False returns zeroed image

Hi all,

Ubuntu 18.04 LTS

I’m trying to create some data with segmentation masks for use in TAO. The default setting of colorize_semantic_segmentation=True returns a properly colorized mask, but setting it to False returns images where numpy.unique(mask) == [0]. I’d really like this to work out of the box rather than handling it manually, so I’d appreciate any insight. I’m using BasicWriter, and I’ve tried several scripts that produce correct results when colorize is set to True. Just to be sure, I also tried the shapes position randomizer tutorial below, with the same result:

import omni.replicator.core as rep

with rep.new_layer():
    sphere = rep.create.sphere(semantics=[('class', 'sphere')], position=(0, 100, 100))
    cube = rep.create.cube(semantics=[('class', 'cube')], position=(200, 200, 100))
    plane = rep.create.plane(scale=10, visible=True)

    def get_shapes():
        shapes = rep.get.prims(semantics=[('class', 'cube'), ('class', 'sphere')])
        with shapes:
            rep.modify.pose(
                position=rep.distribution.uniform((-500, 50, -500), (500, 50, 500)),
                rotation=rep.distribution.uniform((0, -180, 0), (0, 180, 0)),
                scale=rep.distribution.normal(1, 0.5)
            )
        return shapes.node


    # Setup Camera
    camera =
    render_product = rep.create.render_product(camera, (512, 512))

    # Setup randomization
    with rep.trigger.on_frame(num_frames=30):
        with camera:
            rep.modify.pose(position=rep.distribution.uniform((0, 1500, 0), (0, 1500, 0)))
    # Initialize and attach writer
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(
        output_dir="_output",
        semantic_segmentation=True,
        colorize_semantic_segmentation=False
    )
    writer.attach([render_product])



This should be working.

  1. In your example, your camera isn’t looking at any objects, so no semantic info is output. Changing the camera to camera = rep.create.camera(position=(500, 500, 500), look_at=(0, 0, 0)) gives me correct semantic output. If you have another scene, make sure your camera is looking where it needs to as well.

  2. How are you loading the non-colored semantic images? I’ve tested with Pillow and was able to get the semantic information:

from PIL import Image
import numpy as np

image_path = r"omni.replicator_out\nocolor_semantics\semantic_segmentation_0000.png"

image = Image.open(image_path)
np.unique(np.asarray(image))
# Out[9]: array([0, 1, 2, 3])
# This matches with the information in semantic_segmentation_labels_0000.json
# {"0": {"class": "BACKGROUND"}, "2": {"class": "cube"}, "1": {"class": "UNLABELLED"}, "3": {"class": "sphere"}}
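For anyone who then needs per-class masks (e.g. for TAO preprocessing), here’s a minimal sketch of mapping the raw IDs back to class names. The labels_json string and the tiny mask array below are stand-ins for the real semantic_segmentation_labels_*.json file and the PNG loaded via Pillow:

```python
import json
import numpy as np

# Stand-in for semantic_segmentation_labels_0000.json (normally read from disk)
labels_json = ('{"0": {"class": "BACKGROUND"}, "2": {"class": "cube"}, '
               '"1": {"class": "UNLABELLED"}, "3": {"class": "sphere"}}')
id_to_class = {int(k): v["class"] for k, v in json.loads(labels_json).items()}

# Stand-in for np.asarray(Image.open(...)) on the uncolored segmentation PNG
mask = np.array([[0, 0, 2],
                 [0, 3, 3],
                 [1, 0, 0]])

# One boolean mask per class ID actually present in the image
class_masks = {id_to_class[i]: (mask == i) for i in np.unique(mask)}
print(sorted(class_masks))          # ['BACKGROUND', 'UNLABELLED', 'cube', 'sphere']
print(class_masks["sphere"].sum())  # 2 sphere pixels in this toy mask
```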

Yep, that worked! I don’t think I was using Pillow before, so maybe that was the problem; let’s see if it trains on it now. Thanks!
