Hi. I want to get a point cloud, like this:
(Left: point cloud from the simulation camera. Right: the simulation camera used to capture the point cloud.)
As a beginner with point clouds, I have a few questions.
Q1. I have already segmented three objects (i.e., the ring, the ring holder, and the plane). Can I get the point cloud for only the blue ring?
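To make Q1 concrete, this is the kind of filtering I mean, sketched in plain NumPy (the point/label arrays and the label id for the ring are made up; in Isaac Sim the labels would come from the semantic segmentation output):

```python
import numpy as np

# Hypothetical data: an (N, 3) point cloud plus a per-point semantic
# label array (e.g. decoded from a semantic segmentation annotator).
points = np.array([[0.0, 0.0, 1.0],
                   [0.1, 0.2, 1.1],
                   [5.0, 5.0, 0.0]])
labels = np.array([1, 1, 2])  # assume 1 = ring, 2 = plane

RING_ID = 1  # assumed label id for the blue ring
ring_points = points[labels == RING_ID]  # keep only the ring's points
```

Is something like this possible directly from the depth_pcl / segmentation outputs, without filtering by hand?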
Q2. How can I calibrate the subscribed topic? I want to adjust the viewpoint of the left image so that it looks like the right image.
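For Q2, what I mean by "adjusting the viewpoint" is applying a rigid transform (camera extrinsics) to the subscribed cloud. A plain NumPy sketch of that idea (the rotation and translation values here are made up, not my actual calibration):

```python
import numpy as np

def transform_points(points, R, t):
    """Apply a rigid transform p' = R @ p + t to an (N, 3) point cloud."""
    return points @ R.T + t

# Made-up extrinsics: rotate 90 degrees about Z, then shift along X.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

cloud = np.array([[1.0, 0.0, 0.0]])
moved = transform_points(cloud, R, t)
```

Is there a recommended way to get these extrinsics for the simulation camera, instead of guessing them?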
Q3. The left point cloud is only partially observable. Can I get a ground-truth point cloud of the blue ring?
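By "ground truth" in Q3 I mean a complete cloud sampled from the object's geometry rather than from a depth render. If the ring were a simple torus, it could be sampled analytically like this (sketch only; the radii are made up, and the angle-uniform sampling is not area-uniform, which is fine for visualization):

```python
import numpy as np

def sample_torus(R_major, r_minor, n=1000, seed=0):
    """Sample points on a torus centered at the origin (z-up axis)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 2.0 * np.pi, n)  # angle around the main ring
    v = rng.uniform(0.0, 2.0 * np.pi, n)  # angle around the tube
    x = (R_major + r_minor * np.cos(v)) * np.cos(u)
    y = (R_major + r_minor * np.cos(v)) * np.sin(u)
    z = r_minor * np.sin(v)
    return np.stack([x, y, z], axis=1)

ring_gt = sample_torus(R_major=1.0, r_minor=0.2, n=2000)
```

But my actual ring is a mesh asset, so I would rather sample the USD mesh itself. Is there a built-in way to do that?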
Q4. Which is the better way to get a point cloud in terms of real-time performance/latency: Replicator (in a standalone Python script) or an OmniGraph node?
This is my code, which uses OmniGraph nodes with the ROS 1 bridge:
from omni.isaac.core.utils import prims
import omni.graph.core as og
from omni.isaac.core_nodes.scripts.utils import set_target_prims
import numpy as np
og.Controller.edit(
    {"graph_path": "/ActionGraph_for_depth_pcl", "evaluator_name": "execution"},
    {
        og.Controller.Keys.CREATE_NODES: [
            ("OnPlaybackTick", "omni.graph.action.OnPlaybackTick"),
            ("IsaacCreateViewport", "omni.isaac.core_nodes.IsaacCreateViewport"),
            ("IsaacGetViewportRenderProduct", "omni.isaac.core_nodes.IsaacGetViewportRenderProduct"),
            ("IsaacSetCamera", "omni.isaac.core_nodes.IsaacSetCameraOnRenderProduct"),
            ("ROS1CameraHelper", "omni.isaac.ros_bridge.ROS1CameraHelper"),
        ],
        og.Controller.Keys.CONNECT: [
            ("OnPlaybackTick.outputs:tick", "IsaacCreateViewport.inputs:execIn"),
            ("IsaacCreateViewport.outputs:execOut", "IsaacGetViewportRenderProduct.inputs:execIn"),
            ("IsaacCreateViewport.outputs:viewport", "IsaacGetViewportRenderProduct.inputs:viewport"),
            ("IsaacGetViewportRenderProduct.outputs:execOut", "IsaacSetCamera.inputs:execIn"),
            ("IsaacGetViewportRenderProduct.outputs:renderProductPath", "IsaacSetCamera.inputs:renderProductPath"),
            ("IsaacSetCamera.outputs:execOut", "ROS1CameraHelper.inputs:execIn"),
            ("IsaacGetViewportRenderProduct.outputs:renderProductPath", "ROS1CameraHelper.inputs:renderProductPath"),
        ],
        og.Controller.Keys.SET_VALUES: [
            ("IsaacCreateViewport.inputs:name", "New Viewport"),
            ("ROS1CameraHelper.inputs:enableSemanticLabels", False),
            ("ROS1CameraHelper.inputs:type", "depth_pcl"),
            ("ROS1CameraHelper.inputs:topicName", "depth_pcl"),
        ],
    },
)

camera = prims.create_prim(
    prim_path="/World/Camera_1",
    prim_type="Camera",
    position=[12, 0, 26],
    orientation=[0.93271, 0.36063, 0, 0],
    attributes={
        "focusDistance": 1,
        "focalLength": 24,
        "horizontalAperture": 20.955,
        "verticalAperture": 15.2908,
        "clippingRange": (0.01, 1000000),
        "clippingPlanes": np.array([1.0, 0.0, 1.0, 1.0]),
    },
)

set_target_prims(
    primPath="/ActionGraph_for_depth_pcl/IsaacSetCamera",
    inputName="inputs:cameraPrim",
    targetPrimPaths=["/World/Camera_1"],
)
Sorry for the many questions…
Thank you!