Question about point cloud

Hi. I want to get a point cloud like the one shown below.

(Left: point cloud from simulation camera, Right: simulation camera that is used to get a pointcloud)

As a beginner with point clouds, I have a few questions.

Q1. I have already segmented three objects (i.e., the ring, the ring holder, and the plane). Can I get the point cloud for only the blue ring?
Q2. How can I calibrate the subscribed topic? I want to adjust the viewpoint of the left image so that it looks like the right image.
Q3. The left point cloud is only partially observable. Can I get a ground-truth point cloud of the blue ring?
Q4. Which is the better way to get a point cloud in terms of real-time performance/latency: Replicator (in a standalone Python script) or an OmniGraph node?

This is my code, using an OmniGraph node with the ROS bridge:

from omni.isaac.core.utils import prims
import omni.graph.core as og
from omni.isaac.core_nodes.scripts.utils import set_target_prims
import numpy as np

og.Controller.edit(
    {"graph_path": "/ActionGraph_for_depth_pcl", "evaluator_name": "execution"},
    {
        og.Controller.Keys.CREATE_NODES: [
            ("OnPlaybackTick", "omni.graph.action.OnPlaybackTick"),
            ("IsaacCreateViewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
            ("IsaacGetViewportRenderProduct", "omni.isaac.core_nodes.IsaacGetViewportRenderProduct"),
            ("IsaacSetCamera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
            ("ROS1CameraHelper", "omni.isaac.ros_bridge.ROS1CameraHelper"),
        ],
        og.Controller.Keys.CONNECT: [
            ("OnPlaybackTick.outputs:tick", "IsaacCreateViewport.inputs:execIn"),
            ("IsaacCreateViewport.outputs:execOut", "IsaacGetViewportRenderProduct.inputs:execIn"),
            ("IsaacCreateViewport.outputs:viewport", "IsaacGetViewportRenderProduct.inputs:viewport"),
            ("IsaacGetViewportRenderProduct.outputs:execOut", "IsaacSetCamera.inputs:execIn"),
            ("IsaacGetViewportRenderProduct.outputs:renderProductPath", "IsaacSetCamera.inputs:renderProductPath"),
            ("IsaacSetCamera.outputs:execOut", "ROS1CameraHelper.inputs:execIn"),
            ("IsaacGetViewportRenderProduct.outputs:renderProductPath", "ROS1CameraHelper.inputs:renderProductPath"),
        ],
        og.Controller.Keys.SET_VALUES: [
            ("IsaacCreateViewport.inputs:name", "New Viewport"),
            ("ROS1CameraHelper.inputs:enableSemanticLabels", False),
            ("ROS1CameraHelper.inputs:type", "depth_pcl"),
            ("ROS1CameraHelper.inputs:topicName", "depth_pcl"),
        ],
    },
)

camera = prims.create_prim(
    prim_path="/World/Camera_1",
    prim_type="Camera",
    position=[12, 0, 26],
    orientation=[0.93271, 0.36063, 0, 0],
    attributes={
        "focusDistance": 1,
        "focalLength": 24,
        "horizontalAperture": 20.955,
        "verticalAperture": 15.2908,
        "clippingRange": (0.01, 1000000),
        "clippingPlanes": np.array([1.0, 0.0, 1.0, 1.0]),
    },
)
set_target_prims(
    primPath="/ActionGraph_for_depth_pcl/IsaacSetCamera",
    inputName="inputs:cameraPrim",
    targetPrimPaths=["/World/Camera_1"],
)

That's a lot of questions…
Thank you!

Hi @PARKBONG - Someone from our team will review the question and get back to you.


Hi there,

For 1: you can use the pointcloud annotator and semantically label only the objects of interest. If the "includeUnlabelled" argument stays False, only labeled data will be included in the point cloud:

rep.AnnotatorRegistry.get_annotator("pointcloud", init_params={"includeUnlabelled": False})

NOTE: This is not using ROS but accessing the pointcloud data directly from the annotator.
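If you then need just one labeled object (e.g. only the blue ring), you can filter the annotator output by its per-point semantic id. The array names and id values below are stand-ins for illustration (check the dict returned by get_data() in your Isaac Sim version for the exact keys); the filtering itself is plain NumPy:

```python
import numpy as np

def filter_points_by_semantic(points, point_ids, target_id):
    """Keep only the points whose per-point semantic id matches target_id."""
    points = np.asarray(points)
    point_ids = np.asarray(point_ids)
    return points[point_ids == target_id]

# Toy stand-in for annotator output (in Isaac Sim: pc_data = pc_annot.get_data()):
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
ids = np.array([1, 2, 1])  # hypothetical semantic id per point

ring_points = filter_points_by_semantic(points, ids, target_id=1)
print(ring_points.shape)  # (2, 3)
```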

For 2: In general, you can set the camera parameters (see "Cameras" in the Omniverse Materials and Rendering documentation) to match the ones you are trying to mimic.
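To relate those USD camera attributes to what you see in the point cloud, it helps to convert them to pinhole intrinsics: the focal length in pixels is focalLength × imageWidth / horizontalAperture (focal length and aperture share the same units). Below is a small, self-contained deprojection sketch under that model; the function name and the square-pixel assumption are mine, not an Isaac Sim API:

```python
import numpy as np

def depth_to_points(depth, focal_length_mm, horiz_aperture_mm):
    """Deproject a depth image (meters) into camera-frame 3D points
    using a pinhole model derived from USD camera attributes."""
    h, w = depth.shape
    fx = focal_length_mm * w / horiz_aperture_mm  # focal length in pixels
    fy = fx                                       # assumes square pixels
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.ones((4, 4))  # toy depth image: flat wall 1 m away
pts = depth_to_points(depth, focal_length_mm=24.0, horiz_aperture_mm=20.955)
print(pts.shape)  # (16, 3)
```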

For 3: you can use the pointcloud annotator from Q1. It will provide the 3D data in the world frame.
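Since the annotator returns world-frame points, one way to approximate a ground-truth cloud of a partially observed object is to render it from several viewpoints and merge the per-view clouds. The merge below is plain NumPy; the voxel-grid deduplication is just one illustrative choice, not an Isaac Sim utility:

```python
import numpy as np

def merge_views(point_clouds, voxel_size=0.01):
    """Merge world-frame point clouds from several viewpoints,
    deduplicating points that fall into the same voxel."""
    all_pts = np.concatenate(point_clouds, axis=0)
    keys = np.round(all_pts / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return all_pts[np.sort(idx)]

# Toy per-view clouds; "front" and "back" share one overlapping point.
front = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
back  = np.array([[0.0, 0.0, 0.0], [0.0, 0.1, 0.0]])
merged = merge_views([front, back])
print(merged.shape)  # (3, 3)
```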

For 4: I don’t think there should be any noticeable difference.


I have a question about 3:
if I want to get a ground-truth point cloud, how should I set up the render product?

Here is a simple script to get the pointcloud data (including unlabeled assets) using the script editor:

import asyncio
import omni.replicator.core as rep

rp = rep.create.render_product("/OmniverseKit_Persp", resolution=(10, 10))
pc_annot = rep.AnnotatorRegistry.get_annotator("pointcloud", init_params={"includeUnlabelled": True})
pc_annot.attach([rp])

async def run_example_async():
    # Step the orchestrator once so the annotator has a rendered frame to read
    await rep.orchestrator.step_async()
    pc_data = pc_annot.get_data()
    print(pc_data)

asyncio.ensure_future(run_example_async())