add_pointcloud_to_frame() API not working for omni.isaac.sensor Camera (Isaac Sim 2022.2.0)

It seems that the add_pointcloud_to_frame API is not working with Isaac Sim 2022.2.0's new camera sensor class.
I just added the call in standalone_examples/api/omni.isaac.sensor/camera.py so that the script looks like this:

camera = Camera(
    prim_path="/World/camera",
    position=np.array([0.0, 0.0, 25.0]),
    frequency=20,
    resolution=(256, 256),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
)

my_world.scene.add_default_ground_plane()
my_world.reset()
camera.initialize()

i = 0
camera.add_motion_vectors_to_frame()
camera.add_pointcloud_to_frame()

while simulation_app.is_running():
    my_world.step(render=True)
    print(camera.get_current_frame())

With this change, I found that the pointcloud data is always empty.

Could you confirm this behaviour and fix it if it is really a bug?

Just realized that this is a duplicate of Isaac Sim | Camera Pointcloud is missing.

Does the document shared in the other post help you? 3. Cameras — Omniverse Robotics documentation

This seems to be an Isaac Sim 2022.2.0 bug, as mentioned in Slack. The issue has been fixed internally and will be included in the next Isaac Sim release.

For now, the temporary fix is to add an arbitrary object with semantic labels:

import omni.replicator.core as rep

# Create a cone with a semantic label so the pointcloud annotator has labelled geometry to output
cone = rep.create.cone(semantics=[("class", "cone")])

Based on your code, the modified script should look like this:

import omni.replicator.core as rep
cone = rep.create.cone(semantics=[("class", "cone")])

camera = Camera(
    prim_path="/World/camera",
    position=np.array([0.0, 0.0, 25.0]),
    frequency=20,
    resolution=(256, 256),
    orientation=rot_utils.euler_angles_to_quats(np.array([0, 90, 0]), degrees=True),
)

my_world.scene.add_default_ground_plane()
my_world.reset()
camera.initialize()

i = 0
camera.add_motion_vectors_to_frame()
camera.add_pointcloud_to_frame()

while simulation_app.is_running():
    my_world.step(render=True)
    print(camera.get_current_frame())

This does not seem to be solved in the latest release, 2022.2.1.
Following the example exactly, the point cloud is still empty:

'pointcloud': {'data': array([], shape=(0, 3), dtype=float32), 'info': {'pointNormals': array([], dtype=float32), 'pointRgb': array([], dtype=uint8), 'pointSemantic': array([], dtype=uint8)}}

Any news on how to work around this?

Including only the point clouds of objects with semantic labels seems to be the intended behavior.

I think you can either label all of your objects, or use the depth map and project it back to 3D (potentially utilizing project_depth_to_worldspace).
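
Not part of the original posts; a minimal numpy sketch of that back-projection idea, assuming a pinhole model where depth is measured along the camera Z axis and K is a known 3x3 intrinsics matrix (the values below are hypothetical):

import numpy as np

def depth_to_points(depth, K):
    # Back-project a depth image (H, W) into camera-frame 3D points (N, 3).
    # Assumes depth is the distance along the camera Z axis and
    # K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[np.isfinite(points).all(axis=1)]   # drop inf/nan pixels

# Hypothetical example: a flat surface 2 m in front of a 256x256 camera.
K = np.array([[200.0, 0.0, 128.0], [0.0, 200.0, 128.0], [0.0, 0.0, 1.0]])
depth = np.full((256, 256), 2.0, dtype=np.float32)
print(depth_to_points(depth, K).shape)  # (65536, 3)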

The labelling workaround does not work for me; the pointcloud is still empty even though I added semantic labels to many prims.
Are there any other solutions to this bug? I really need to fix this somehow!

This issue is fixed in Isaac Sim 2022.2.1.

To generate pointclouds for selected objects, first add a semantic label to your object:

import omni.replicator.core as rep

semantic_type = "class"
semantic_label = "cone"    # the label you would like to assign to your object
prim_path = "/World/Cone"  # the path to your prim
rep.modify.semantics([(semantic_type, semantic_label)], prim_path)

Then, create your camera as usual, and:

camera.add_pointcloud_to_frame()

Note that as of 2022.2.1, you also have the option to include ALL objects in the scene in the pointcloud, not just those with semantic labels. To do so, use the following instead of the command above:

camera.add_pointcloud_to_frame(pointcloud_include_unlabelled=True)

This will include all of the objects in the scene, regardless of whether they have semantics associated with them.
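
As a short usage sketch, reusing my_world and camera from the code earlier in this thread (the 'pointcloud'/'data'/'info' keys are taken from the frame printout above; the .npy dump is only an illustration):

import numpy as np

my_world.step(render=True)              # render at least one frame first
frame = camera.get_current_frame()
points = frame["pointcloud"]["data"]    # (N, 3) float32 array of points
info = frame["pointcloud"]["info"]      # pointNormals / pointRgb / pointSemantic

if points.shape[0] > 0:
    np.save("pointcloud.npy", points)   # e.g. dump the points for later processing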

Okay, thank you for the response!

How does this work for LidarRtx? There is no pointcloud_include_unlabelled parameter there.

Is it possible to get pointcloud data from this kind of camera and write it to a file?

Best regards,
Christof

@xuning.yang

Using the LidarRtx class, it seems that the generated OmniGraph is not connected.

Could this be the reason that the pointcloud is empty? Is this the intended behaviour? Please give me some support with this problem!

However, an error appears whenever I open the action graph.

Do you have an example in which I can read the pointclouds in Python and use writers for further processing?

Best regards,
Christof

Edit:

Using the following lines of code I can reproduce the error and the problem of the nodes not being connected:

import omni.kit.commands
import omni.replicator.core as rep
from pxr import Gf
# (create_hydra_texture and sensors are imported elsewhere in the full script)

# Create the RTX lidar sensor prim
_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config="Example_Rotary",
    translation=(0, 0, 1.0),
    orientation=Gf.Quatd(0.5, 0.5, -0.5, -0.5),
)

# Create a render product for the sensor
_, render_product_path = create_hydra_texture([1, 1], sensor.GetPath().pathString)

# Activate the node that computes the lidar point cloud
template = sensors.get_synthetic_data().activate_node_template(
    "RtxSensorCpu" + "IsaacComputeRTXLidarPointCloud",
    render_product_path_index=0,
    render_product_paths=[render_product_path],
)

# Configure a BasicWriter to dump pointcloud data
output_directory = '/home/christof/simulator_isaacsim/output_host/'  # "/isaac-sim/tools/output/"
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir=output_directory,
    pointcloud=True,
    pointcloud_include_unlabelled=True,
)

The problem is the writer: this example produces the error, but when the writer.initialize(..) call is removed, there is no problem and the nodes are connected. Something is going wrong in the initialize function.

I did some testing and narrowed the problem down to the parameter pointcloud=True: when it is set, nothing works in the OmniGraph!

Unfortunately, I need this parameter …
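
For reference, a minimal sketch of the isolation test described above, reusing output_directory from the snippet; the two initialize() variants are the only difference:

import omni.replicator.core as rep

writer = rep.WriterRegistry.get("BasicWriter")

# Works: no pointcloud annotator requested -- the lidar graph stays connected.
writer.initialize(output_dir=output_directory)

# Breaks: requesting the pointcloud annotator reproduces the disconnected graph and the error.
# writer.initialize(output_dir=output_directory,
#                   pointcloud=True,
#                   pointcloud_include_unlabelled=True)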

I guess it has something to do with the annotation part. As soon as I append any 3D bounding box annotator, the same behaviour occurs. Are there any annotators for lidars?

Hope this helps to describe the problem.
