Semantic segmentation fails with multiple GPUs

In the following code snippet, the segmentation data is empty if the system has multiple GPUs, but works as expected if there is a single GPU.

        import carb
        import numpy as np
        import omni.replicator.core as rep
        from omni.isaac.core.prims import GeometryPrim, XFormPrim
        from omni.isaac.core.utils.stage import add_reference_to_stage

        # Load object
        obj_prim_path = "/World/obj"
        obj_name = "006_mustard_bottle"
        add_reference_to_stage("006_mustard_bottle.usd", obj_prim_path)

        self._world.scene.add(
            GeometryPrim(
                obj_prim_path,
                name=obj_name,
                translation=[-65, 0, 10],
                orientation=[0.707, -0.707, 0, 0],
            )
        )
        rep.modify.semantics([("class", obj_name)], obj_prim_path)

        # Load camera
        cam_prim_path = "/World/main"
        self._world.stage.DefinePrim(cam_prim_path, "Camera")

        cam_prim = self._world.stage.GetPrimAtPath(cam_prim_path)
        cam_prim.GetAttribute("focalLength").Set(10)
        cam_prim.GetAttribute("clippingRange").Set((0.01, 1000000))
        cam_prim.GetAttribute("clippingPlanes").Set(np.array([1.0, 0.0, 1.0, 1.0]))

        renderer = rep.create.render_product(cam_prim_path, (640, 480))

        camera_xform = XFormPrim(
            cam_prim_path,
            name="main",
            translation=(-100, -15, 24),
            orientation=(0.5, 0.5, -0.5, -0.5),
        )
        self._world.scene.add(camera_xform)

        semantic_annotator = rep.AnnotatorRegistry.get_annotator(
            "semantic_segmentation", init_params={"semanticTypes": ["class"]}
        )
        semantic_annotator.attach([renderer])

        await rep.orchestrator.step_async()
        semantic_data = semantic_annotator.get_data()
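
For reference, in the failing case `get_data()` does not return `None` but an effectively empty result. Here is a small helper I use to check for that; the `"data"`/`"info"`/`"idToLabels"` layout matches what I see from the `semantic_segmentation` annotator on my version, but may differ on yours:

```python
import numpy as np

def segmentation_is_empty(annotator_output) -> bool:
    """Heuristic: True if the semantic annotator returned no labelled pixels.

    Assumes the annotator output layout:
    {"data": <uint label image>, "info": {"idToLabels": {...}}}.
    """
    data = annotator_output.get("data")
    labels = annotator_output.get("info", {}).get("idToLabels", {})
    # Keep only labels that are not background/unlabelled entries.
    meaningful = {
        k: v for k, v in labels.items()
        if "BACKGROUND" not in str(v) and "UNLABELLED" not in str(v)
    }
    return data is None or not meaningful or not np.any(np.asarray(data))

# Fake "empty" result, shaped like the annotator output described above:
empty = {
    "data": np.zeros((480, 640), dtype=np.uint32),
    "info": {"idToLabels": {"0": {"class": "BACKGROUND"}}},
}
print(segmentation_is_empty(empty))  # True
```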

Is this a bug, or something I am missing?

Sorry for the late reply. Are you able to upload a log here?

@pcallender Hi, I had this exact same issue when trying to generate semantic/instance segmentation masks using the basic Hello World script found here: Visualizing the output folder from BasicWriter — Omniverse Extensions documentation.

No segmentation masks were generated, even though the writer produced the RGBD/normals outputs as expected.

The fix was to run export CUDA_VISIBLE_DEVICES=2 before running the script (the index 2 is specific to my machine). Without this, for some reason, segmentation mask generation does not work when multiple GPUs are visible.
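
If you would rather apply the workaround from the script itself instead of the shell, the variable can be set from Python, as long as it happens before anything initializes CUDA (the index "2" below is just my machine's display GPU, not a general value):

```python
import os

# Must run before importing the simulator or anything else that creates a
# CUDA context; CUDA reads this variable only once, at initialization.
# The GPU index is machine-specific -- use whichever GPU drives your display.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # 2
```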

Also, note that the visible GPU must be the one connected to the display monitor. If any other GPU is exported, the script crashes with a segfault.
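
To find the display GPU without guessing, nvidia-smi can report it via `nvidia-smi --query-gpu=index,display_active --format=csv,noheader`. Here is a small parser for that output; the "Enabled"/"Disabled" strings are what my driver prints, so treat them as an assumption:

```python
import csv
import io
from typing import Optional

def display_gpu_index(nvidia_smi_csv: str) -> Optional[int]:
    """Return the index of the GPU whose display is active, or None.

    Expects the output of:
        nvidia-smi --query-gpu=index,display_active --format=csv,noheader
    where display_active is reported as "Enabled" or "Disabled".
    """
    for row in csv.reader(io.StringIO(nvidia_smi_csv)):
        if len(row) >= 2 and row[1].strip().lower() == "enabled":
            return int(row[0].strip())
    return None

# Example output from a 3-GPU machine where GPU 2 drives the monitor:
sample = "0, Disabled\n1, Disabled\n2, Enabled\n"
print(display_gpu_index(sample))  # 2
```

The returned index can then be used as the value for CUDA_VISIBLE_DEVICES.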