As I understand it, each frame the RTX Lidar uses a camera to capture the scene and renders the distance, azimuth, and elevation of some point pattern into a buffer.
But when visualizing a lidar flying in a circle inside a cube, I get a very distorted point cloud, as if each point were captured at a different moment.
Using the transform returned by the annotator to transform the points into world coordinates, below is the y value of one scan line (the expected value is -10 all along the line!):
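For reference, this is roughly how I apply that per-frame transform (a minimal sketch; the row-vector matrix convention is my assumption, and getting it wrong could itself distort the cloud):

```python
import numpy as np

def to_world(points: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Apply one per-frame sensor-to-world transform to every point.

    points    : (N, 3) hit points in the sensor frame
    transform : flat 16-element (or 4x4) matrix, e.g. the annotator's
                "transform" info field
    """
    T = np.asarray(transform).reshape(4, 4)
    pts_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Row-vector convention (USD-style, translation in the last row);
    # use pts_h @ T.T instead if the matrix is column-major.
    return (pts_h @ T)[:, :3]
```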
Also, I notice that the “IsaacReadRtxLidarPointData” node has “outputs:transform” and “outputs:transformStart” attributes. Does that mean the RTX Lidar takes the sensor pose from the previous frame into account?
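If “transformStart” and “transform” really are the sensor poses at the start and end of the scan, I imagine the cloud could be de-skewed on the consumer side by interpolating a per-point pose between the two, something like the sketch below (this is purely my assumption about what those attributes mean; the per-point normalized timestamps and the column-vector matrix convention are also assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew(points: np.ndarray, t_rel: np.ndarray,
           T_start: np.ndarray, T_end: np.ndarray) -> np.ndarray:
    """De-skew a scan by interpolating the sensor pose per point.

    points : (N, 3) points in the sensor frame
    t_rel  : (N,) per-point capture time normalized to [0, 1] over the scan
             (e.g. derived from each point's azimuth or timestamp)
    T_start, T_end : 4x4 sensor-to-world poses at scan start and end
                     (column-vector convention, translation in the last column)
    """
    key_rots = Rotation.from_matrix(np.stack([T_start[:3, :3], T_end[:3, :3]]))
    R = Slerp([0.0, 1.0], key_rots)(t_rel).as_matrix()   # (N, 3, 3) rotations
    t = (1.0 - t_rel)[:, None] * T_start[:3, 3] + t_rel[:, None] * T_end[:3, 3]
    return np.einsum("nij,nj->ni", R, points) + t
```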
Please confirm whether or not motion distortion is included in RTX Lidar rendering.