Hello there,
I am planning to create an environment to virtually validate perception algorithms for a mobile robot using camera and lidar.
Some of the lidar perception algorithms my team has developed rely on specific features of the data: the intensity values of the points, the row-by-row structure of the point cloud inside the ROS message (as produced by Ouster rotating lidars), and the fact that the cloud has a fixed number of points (when a laser beam is not reflected back, the corresponding point is set to (0, 0, 0) in the message).
According to my colleague, who is working with the synthetic data I created using Isaac Sim, the point clouds from the real sensor and from Isaac Sim differ in the following ways:
- There are no intensity values for the points
- The point cloud in the ROS PointCloud2 message is structured column-wise instead of row-wise
- The point cloud size differs (it contains only the points that were actually reflected)
- The real point clouds contain a certain amount of noise, which the simulated clouds lack
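To make the gap concrete, here is a rough sketch of the post-processing I would otherwise have to apply myself to bring a sim cloud closer to the real one (numpy only; the beam/azimuth indices, intensity source, and grid dimensions are assumptions on my side, not anything Isaac Sim provides out of the box):

```python
import numpy as np

def ouster_like_cloud(hits_xyz, rows, cols, intensities,
                      n_rows=32, n_cols=1024, noise_std=0.01, rng=None):
    """Rebuild a fixed-size, row-major cloud from a sparse list of returns.

    hits_xyz    : (N, 3) xyz of the reflected points (e.g. from the sim cloud)
    rows, cols  : (N,) beam and azimuth indices per hit (assumed to be known)
    intensities : (N,) per-point intensity (assumed available or synthesized)
    Misses stay at (0, 0, 0) with intensity 0, like in the real driver output.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Fixed-size grid: one slot per (beam, azimuth) bin, x/y/z/intensity.
    cloud = np.zeros((n_rows, n_cols, 4), dtype=np.float32)
    cloud[rows, cols, :3] = hits_xyz
    cloud[rows, cols, 3] = intensities
    # Additive Gaussian noise only on actual returns, never on the misses.
    cloud[rows, cols, :3] += rng.normal(0.0, noise_std, size=(len(rows), 3))
    # Flatten row by row, matching the row-major layout of the real messages.
    return cloud.reshape(n_rows * n_cols, 4)
```

So with a single return at beam 0 / azimuth 5, the output is always a full 32x1024-point array where only that one slot is non-zero. Doing this in a separate node feels wasteful, though, which is why I am asking whether the sensor or Action Graph can produce such data directly.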
My question is:
Are there ways to modify the lidar sensors or the nodes in the Action Graphs so that the generated point clouds match the "real" data with respect to the four points mentioned above?
Any tips or guidance would be very much appreciated!
Best regards
Kevin