For my object detection task, I want to generate 3D pointclouds from a lidar sensor and annotate them directly. However, I have not seen any way to do so, and I want to kindly ask if someone can help me out with where to start and what steps are required.
Also, is it possible to write the pointcloud data in a known format that can then be used for training?
Thank you in advance!
In addition, I would like to know how to annotate the pointclouds (3D bounding boxes). Can I simply use the available annotator?
Hi Christof, the output of the current point cloud annotator is calculated based on the distance from the camera to the object, so you can treat it like a lidar 3D pointcloud. In terms of annotation, what data would you like to attach to the points? Pointcloud and 3D bounding box are two different annotators, but you can map between them using their semantic IDs.
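To illustrate the mapping idea, here is a minimal sketch in NumPy. The arrays are made up for illustration; only the idea that both annotators share semantic IDs comes from the thread:

```python
import numpy as np

# Hypothetical example data, shaped like the two annotator outputs:
# each point carries a semantic ID, and each 3D bbox entry carries one too.
points = np.array([[0.1, 0.2, 1.0],
                   [0.3, 0.1, 1.2],
                   [2.0, 0.5, 3.0]])
point_semantic_ids = np.array([7, 7, 9])  # per-point semantic ID
bbox_semantic_ids = np.array([7, 9])      # one semantic ID per 3D bbox

# Group point indices by the semantic ID of each bounding box,
# giving a mapping from bbox -> points belonging to that object.
bbox_to_points = {
    sem_id: np.flatnonzero(point_semantic_ids == sem_id)
    for sem_id in bbox_semantic_ids.tolist()
}
print(bbox_to_points)
```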
Thanks for the response!
Using a lidar sensor, I would like to annotate objects in my pointcloud using bounding boxes.
Unfortunately, it does not seem to work with the currently available annotators. I only get nodes in the OmniGraph that are not connected.
Do you have any working example for lidar sensors and 3D bounding boxes?
I got the 3D bounding box annotator to work, but the result is not correct. It seems to depend on the current viewpoint of the lidar sensor: rotating the sensor produces different bounding boxes.
Is there any implementation for lidar sensors to compute the bounding boxes based on the captured pointcloud?
So you first capture the pointcloud of the prims using lidar sensors, and then apply a 3D bounding box annotator to try to get 3D bounding boxes for the points? I don't think the 3D bounding box annotator works on pointclouds, but you can first use the 3D bbox annotator on the prims to get the boxes, and then combine that with the pointcloud data to get the bbox of the pointcloud.
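A minimal sketch of that combination step, assuming you already have the points grouped by semantic ID (the data and function name here are hypothetical; the idea is just to take the min/max extents of each object's points):

```python
import numpy as np

# Hypothetical pointcloud data with per-point semantic IDs.
points = np.array([[0.0, 0.0, 1.0],
                   [1.0, 2.0, 1.5],
                   [0.5, 1.0, 1.2],
                   [5.0, 5.0, 5.0]])
point_semantic_ids = np.array([3, 3, 3, 8])

def pointcloud_bbox(points, ids, target_id):
    """Axis-aligned 3D bbox (min corner, max corner) of the points
    whose semantic ID matches target_id."""
    obj_points = points[ids == target_id]
    return obj_points.min(axis=0), obj_points.max(axis=0)

lo, hi = pointcloud_bbox(points, point_semantic_ids, 3)
print(lo, hi)
```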
Yes, this is what I want to do, but I need to know which objects are hit by the sensor. Is there any way to get that information from the sensor?
Yes, there is a camera3dPositions output, but we haven't exposed it as an annotator yet. It outputs an array of shape (width, height, 4): the first 3 channels are the points' positions in camera space, and the 4th channel denotes whether the point was hit by the camera or not. We will expose it as an annotator so you can easily use the info.
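Consuming such a (width, height, 4) buffer could look like the following sketch; the buffer contents here are made up for illustration:

```python
import numpy as np

# Toy 2x2 buffer in the camera3dPositions layout described above:
# channels 0-2 are camera-space xyz, channel 3 is the hit flag.
buf = np.array([[[0.1, 0.2, 1.0, 1.0],    # hit
                 [0.0, 0.0, 0.0, 0.0]],   # miss
                [[0.4, 0.5, 2.0, 1.0],    # hit
                 [0.0, 0.0, 0.0, 0.0]]])  # miss

positions = buf[..., :3]       # camera-space xyz per pixel
hit_mask = buf[..., 3] > 0.5   # 4th channel: hit / no-hit flag

valid_points = positions[hit_mask]  # keep only points that were actually hit
print(valid_points.shape)  # (2, 3)
```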
And this will also work for lidar sensors?
You need to use the data along with the lidar sensor.
camera3dPositions gives you a mask that filters out points that were not hit.
Okay, thank you!
Is it already in the latest release or will it be in the next one?
It will be in the next one. I can show you how to create an annotator that outputs this information, but that's pretty complicated.
That depends on the date of the release. Maybe it is worth showing me how to do it (I really need it as soon as possible). I really appreciate your support! Thank you
Any help or any news on it?