Hi Andrei,
Let's reset here. I think I need to go back a step and focus on getting the correct data first, and then I'll circle back later for the other info.
The specific data I'm looking to record is:
RtxSensorCpuIsaacCreateRTXLidarScanBuffer / data
This is the actual data from the lidar head, as displayed by the debug point cloud created by the graph.
If required, I can use the following call later to switch between the local and world frame:
annotator.initialize(transformPoints=False)
I'm using the following code to generate my lidar head, which I have saved in the Script Editor.
import omni.kit.commands
from pxr import Gf
from omni.isaac.core.utils.render_product import create_hydra_texture
import omni.replicator.core as rep
lidar_config = "ZVISION_ML30S"
# 1. Create the RTX lidar sensor prim
_, sensor = omni.kit.commands.execute(
    "IsaacSensorCreateRtxLidar",
    path="/sensor",
    parent=None,
    config=lidar_config,
    translation=(0, 0, 1.0),
    orientation=Gf.Quatd(1, 0, 0, 0),
)
# 2. Create and attach a render product to the lidar sensor
_, render_product_path = create_hydra_texture([1, 1], sensor.GetPath().pathString)
# 3. Create a Replicator Writer that "writes" points into the scene for debug viewing
writer = rep.writers.get("RtxLidarDebugDrawPointCloudBuffer")
writer.attach([render_product_path])
# 4. Create Annotator to read the data from with annotator.get_data()
annotator = rep.AnnotatorRegistry.get_annotator("RtxSensorCpuIsaacCreateRTXLidarScanBuffer")
annotator.attach([render_product_path])
I want to get the RtxSensorCpuIsaacCreateRTXLidarScanBuffer data every timestamped tick (the same points that the RtxLidarDebugDrawPointCloudBuffer writer displays in the viewport), and I also want to record the translation of a /world/xform prim every frame, as this is my ground truth for training data.
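For the xform part, here is roughly what I had in mind for reading the translation every update from the Script Editor (just a rough, untested sketch; I'm assuming the standard pxr USD API and a kit update subscription, and it reuses the annotator from the code above, with /world/xform as a placeholder for my ground-truth prim):

import omni.usd
import omni.kit.app
from pxr import Usd, UsdGeom

stage = omni.usd.get_context().get_stage()
xform_prim = stage.GetPrimAtPath("/world/xform")  # my ground-truth prim

def on_update(e):
    # Lidar points returned by the annotator for the latest tick
    lidar_data = annotator.get_data()
    # World-space translation of the ground-truth xform
    world_tf = UsdGeom.Xformable(xform_prim).ComputeLocalToWorldTransform(Usd.TimeCode.Default())
    translation = world_tf.ExtractTranslation()
    # ... store (lidar_data, translation) somewhere here ...

# Call on_update once per app update while the simulation is playing
update_sub = omni.kit.app.get_app().get_update_event_stream().create_subscription_to_pop(on_update)

Is something like this the right approach, or should all of this live inside the custom writer instead?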
I was reading
https://docs.omniverse.nvidia.com/isaacsim/latest/features/sensors_simulation/isaac_sim_sensors_rtx_based_lidar/annotator_descriptions.html
# Create the annotator.
annotator = rep.AnnotatorRegistry.get_annotator("RtxSensorCpuIsaacCreateRTXLidarScanBuffer")
# Initialize the annotator so it will also output the time stamps.
annotator.initialize(outputTimestamp=True)
# Attach the render product after the annotator is initialized.
annotator.attach([render_product_path])
But I'm still unsure how to go about making a custom writer for these two data types.
I take it I will require a custom writer?
I looked at the Writer class API:
https://docs.omniverse.nvidia.com/py/replicator/1.10.10/source/extensions/omni.replicator.core/docs/API.html#omni.replicator.core.scripts.writers_default.Writer
and
https://docs.omniverse.nvidia.com/extensions/latest/ext_replicator/custom_writer.html
I'm lost as to where I have to create “MyCustomWriter” and what the parameters path should be.
I looked at
ov\pkg\isaac_sim-2023.1.0-hotfix.1\standalone_examples\replicator\offline_generation\config
I think I'm going off track a little. What I'm really looking to do is figure out how to create the custom writer and get the buffer and xform data recorded; I should be able to progress after that point.
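Based on the custom_writer page, here is my best guess at the rough shape of the writer (an untested sketch; MyLidarWriter, the output_dir parameter, the numpy file naming, and the exact keys inside the write() data dict are all my own guesses, not anything from the docs):

import os
import numpy as np
import omni.replicator.core as rep
from omni.replicator.core import Writer, AnnotatorRegistry

class MyLidarWriter(Writer):
    def __init__(self, output_dir):
        self._output_dir = output_dir
        os.makedirs(output_dir, exist_ok=True)
        self._frame_id = 0
        # Request the scan buffer annotator with timestamps enabled
        self.annotators = [
            AnnotatorRegistry.get_annotator(
                "RtxSensorCpuIsaacCreateRTXLidarScanBuffer",
                init_params={"outputTimestamp": True},
            )
        ]

    def write(self, data):
        # 'data' should be keyed by annotator name; I'd print(data.keys())
        # on the first frame to confirm the exact layout
        for key, value in data.items():
            if "RtxSensorCpuIsaacCreateRTXLidarScanBuffer" in key:
                # Assuming 'value' holds the point buffer under a "data" entry
                np.save(os.path.join(self._output_dir, f"lidar_{self._frame_id:06d}.npy"), value["data"])
        self._frame_id += 1

# Register the writer, then create and attach it like the built-in ones
rep.WriterRegistry.register(MyLidarWriter)
writer = rep.WriterRegistry.get("MyLidarWriter")
writer.initialize(output_dir="/tmp/lidar_out")
writer.attach([render_product_path])

I've left the xform translation out of this sketch because I'm not sure whether it belongs inside write() or in a separate per-frame callback like the one above.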
I’ve been watching a few of your videos and really learned a lot, thanks!
Scott