Does the RTX Lidar simulate motion distortion for rotating lidars?

As I understand it, each frame the RTX Lidar uses a camera to capture the scene and renders the distance, azimuth, and elevation of some point pattern into a buffer.
But when visualizing a lidar flying in a circle inside a cube, I get a very distorted point cloud, as if each point were captured at a different moment.
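For context, this is roughly how I interpret that buffer, converting each (distance, azimuth, elevation) return into a sensor-frame point. A minimal numpy sketch; I'm assuming angles in radians, azimuth measured in the xy-plane from +x, and elevation measured from the xy-plane toward +z (adjust if the sensor convention differs):

```python
import numpy as np

def spherical_to_cartesian(distance, azimuth, elevation):
    """Convert lidar returns (meters, radians) to (N, 3) sensor-frame points.

    Convention assumed here: azimuth in the xy-plane from +x,
    elevation from the xy-plane toward +z.
    """
    cos_el = np.cos(elevation)
    x = distance * cos_el * np.cos(azimuth)
    y = distance * cos_el * np.sin(azimuth)
    z = distance * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)
```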

Using the transform returned by the annotator to transform the points into the world frame, below is the y value of one scan line (the expected value is -10 all along the line!):
[image: plot of the y values along one scan line]
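For reference, this is how I apply that transform. A small sketch; I'm assuming the annotator returns a flat, row-major 4x4 in the USD row-vector convention (translation in the bottom row), so points multiply on the left, and the commented data access is just my shorthand, not a documented key:

```python
import numpy as np

def to_world(points_xyz, transform_4x4):
    """Transform (N, 3) sensor-frame points into the world frame.

    Assumes a flat 16-float, row-major 4x4 in the USD row-vector
    convention, i.e. p_world = p_sensor @ M (translation in the
    bottom row). Transpose M if your matrix is column-vector style.
    """
    M = np.asarray(transform_4x4, dtype=np.float64).reshape(4, 4)
    pts_h = np.concatenate([points_xyz, np.ones((len(points_xyz), 1))], axis=1)
    return (pts_h @ M)[:, :3]

# world = to_world(points, data["info"]["transform"])  # hypothetical key layout
# print(world[:, 1])  # y should stay flat at -10 along a straight wall
```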

Also, I noticed that the “IsaacReadRtxLidarPointData” node has “outputs:transform” and “outputs:transformStart” attributes. Does that mean the RTX Lidar takes the pose of the sensor from the previous frame into account?
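If transformStart and transform really are the sensor poses at the start and end of the scan, I would expect to be able to de-skew the cloud myself with per-point pose interpolation. A rough sketch of what I mean; the per-point fraction t is my own assumption (e.g. derived from each point's azimuth or timestamp), and I assume column-vector 4x4 poses here, so a USD-style row-vector matrix would need transposing first:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew(points_xyz, t, M_start, M_end):
    """Re-project each point with the pose interpolated at its capture time.

    points_xyz     : (N, 3) points in the sensor frame
    t              : (N,) fraction through the scan in [0, 1] (my assumption)
    M_start, M_end : 4x4 world-from-sensor poses at scan start and end,
                     column-vector convention (translation in last column)
    """
    rotations = Rotation.from_matrix(np.stack([M_start[:3, :3], M_end[:3, :3]]))
    slerp = Slerp([0.0, 1.0], rotations)
    rots = slerp(t)  # one interpolated rotation per point
    trans = (1.0 - t)[:, None] * M_start[:3, 3] + t[:, None] * M_end[:3, 3]
    return rots.apply(points_xyz) + trans  # (N, 3) world-frame points
```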

Please confirm whether or not motion distortion is included in RTX Lidar rendering.

A motion distortion model is not accounted for in RTX Lidar Sensors in the current release.

@phennings Sorry, but do you mean “not accounted for” as in “not explained in the docs”, or “not implemented at all”?
If it is the latter, can you guess what could be causing the distortion in my captured image?
Also, in another thread there were mentions of motion plugins for the omni.sensors extension. What are those, if not motion distortion?

Sorry for the late reply. I can only confirm it is not documented.