Filtering LiDAR data based on semantic labels

I want to filter a point cloud obtained via the RtxSensorCpuIsaacCreateRTXLidarScanBuffer annotator based on the objectId of the hit.

The meshes in question are set up as follows:

import math

import omni.usd
from omni.isaac.core.utils import prims
from omni.isaac.core.utils.rotations import euler_angles_to_quat
from pxr import Gf, Semantics, UsdGeom, UsdPhysics

terrain_prim = prims.create_prim(
    prim_path="/background",
    position=(0, 0, 0),
    orientation=euler_angles_to_quat([math.pi / 2, 0, 0]),
    usd_path=str(file_path.parent / "terrain.usd"),  # file_path is defined elsewhere
)
UsdPhysics.CollisionAPI.Apply(terrain_prim)

primType = ["Cube", "Sphere"]
stage_obj = omni.usd.get_context().get_stage()
for i in range(2):
    prim = stage_obj.DefinePrim("/World/" + primType[i], primType[i])
    # init_translation is an np.ndarray offset defined elsewhere (element-wise add)
    UsdGeom.XformCommonAPI(prim).SetTranslate(
        Gf.Vec3d(*((-1.0, -2.0 + i * 4.0, 0.0) + init_translation))
    )
    UsdGeom.XformCommonAPI(prim).SetScale((1, 1, 1))
    collisionAPI = UsdPhysics.CollisionAPI.Apply(prim)

    # Add semantic label
    sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
    sem.CreateSemanticTypeAttr()
    sem.CreateSemanticDataAttr()
    sem.GetSemanticTypeAttr().Set("class")
    sem.GetSemanticDataAttr().Set(primType[i])

I set up the sensor like so:

# Requires: from omni.isaac.sensor import LidarRtx
# and: import omni.replicator.core as rep
self.lidar = LidarRtx(
    prim_path=self.sensor_path,
    translation=[-0.32, 0.0, 0.25],
    config_file_name="OS0_128ch10hz512res.json",
)
render_product_path = self.lidar.get_render_product_path()

writer = rep.writers.get("RtxLidarDebugDrawPointCloud")
writer.attach([render_product_path])

self.annotator = rep.AnnotatorRegistry.get_annotator(
    "RtxSensorCpuIsaacCreateRTXLidarScanBuffer",
    init_params={
        "outputObjectId": True,
    },
)
self.annotator.attach([render_product_path])

Filtering the collected data, I observe that the objectId output contains extra points:

import numpy as np
import torch
from omni.syntheticdata._syntheticdata import acquire_syntheticdata_interface

data = self.annotator.get_data()
point_cloud_L = torch.from_numpy(data["data"])
object_ids = data["objectId"]

# Map each unique objectId to the prim path it resolves to
unique_object_ids = np.unique(object_ids)
full_prim_paths_to_obj_id = {}
for object_id in unique_object_ids:
    full_prim_path = acquire_syntheticdata_interface().get_uri_from_instance_segmentation_id(object_id)
    full_prim_paths_to_obj_id[full_prim_path] = object_id

# Keep every point whose id is NOT the sphere's
mask = object_ids != full_prim_paths_to_obj_id["/World/Sphere"]
point_cloud_L_filtered = point_cloud_L[mask]

The original point cloud has 35512 points, while the filtered point cloud has 30762 points. However, the removed points include points belonging to other semantic classes, not only the points of the ‘/World/Sphere’ primitive (which are all filtered out). Is this the result of a wrong setup?
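For what it’s worth, the two buffers do agree in length (the boolean mask above would otherwise fail to index the tensor), so the mismatch is in the per-point values rather than the counts:

assert object_ids.shape[0] == point_cloud_L.shape[0]  # both 35512 here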

All of the classes:

{'': 0, '/background/Mesh': 1, '/World/Sphere': 2, '/World/Cube': 3}

My scene:

Original point cloud:
[image]

The filtered points:
[image]

Overlaid original (black) and filtered (red) points:
[image]

@rthaker Hi, sorry for bothering you, but maybe you could take a look? The points and labels coming from the RtxSensorCpuIsaacCreateRTXLidarScanBuffer annotator appear to be misaligned. Is this the expected behavior?

I’m not sure about your filtering; however, I did notice a couple of things.

You use the “RtxLidarDebugDrawPointCloud” writer, which does not output the full 360° scan.
You could use the “RtxLidarDebugDrawPointCloudBuffer” writer to see the full scan in the viewport… but if you do, make sure to declare your writer AFTER the annotator; otherwise setting outputObjectId=True when initializing the annotator will not work.

You could also try setting keepOnlyPositiveDistance=False on the annotator; then the number of output points should always be the same (see the sketch below).
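Putting both suggestions together, the setup would look something like this (a sketch using the identifiers from your post; I haven’t run it against your scene):

import omni.replicator.core as rep

# Attach the annotator FIRST so its init_params take effect ...
annotator = rep.AnnotatorRegistry.get_annotator(
    "RtxSensorCpuIsaacCreateRTXLidarScanBuffer",
    init_params={
        "outputObjectId": True,
        "keepOnlyPositiveDistance": False,
    },
)
annotator.attach([render_product_path])

# ... and only then attach the buffered debug-draw writer for the full 360° scan
writer = rep.writers.get("RtxLidarDebugDrawPointCloudBuffer")
writer.attach([render_product_path])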

Also! With the config you use, nearRangeM < 0.4, so you must also set
“minDistBetweenEchos”: 0.3,
in the config file… it’s possible that points with a distance between 0.3 and 0.4 are not being output because of this bug.
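In the OS0_128ch10hz512res.json profile that means adding (or editing) a single line next to the existing nearRangeM entry, roughly like this:

    "minDistBetweenEchos": 0.3,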

@mcarlson1 hey, thanks for reaching out. I tried the following:

  • tweaking “minDistBetweenEchos” to match “nearRangeM”
  • initializing writer after the annotator
  • setting “keepOnlyPositiveDistance” to False

But it didn’t fix the issue: the Nx3 points and the per-point objectId values are still not aligned (though objectId contains all of the expected primitive paths when queried with acquire_syntheticdata_interface().get_uri_from_instance_segmentation_id(object_id)).

Hello,

I am facing the same issue!

While using the get_uri_from_instance_segmentation_id(object_id) function, I get “” (an empty string) for the prim_path.

When I visualized the points carrying this object id, I realized that they pertain to different instances.
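In case it helps others debugging this, here is roughly how I group the points per id to inspect them (a sketch; it only uses the calls already shown in this thread, and assumes the annotator is set up as above):

import numpy as np
from omni.syntheticdata._syntheticdata import acquire_syntheticdata_interface

sd = acquire_syntheticdata_interface()
data = annotator.get_data()
points = data["data"]          # (N, 3) hit positions
object_ids = data["objectId"]  # one id per point

# Resolve each unique id to a prim path and count its points
for oid in np.unique(object_ids):
    path = sd.get_uri_from_instance_segmentation_id(int(oid))
    count = int((object_ids == oid).sum())
    print(f"id={oid}: path='{path}', {count} points")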