LiDAR self-calibration with custom LiDAR: calibration stays at 0% / Not Accepted

DRIVE OS Version: 6.0.10

Issue Description:

Hi everyone,

I’m trying to use the DriveWorks LiDAR self-calibration sample with a Hesai AT128P (custom LiDAR integration), and I’m currently stuck with the calibration status never progressing.


Goal / setup

I want to run the LiDAR self-calibration tool for a Hesai AT128P (custom LiDAR), i.e. use the calibration engine/routine with a LiDAR type that is not natively supported by the calibration module.

For that, I had to adapt the NVIDIA sample code. One adaptation is that I had to spoof the LiDAR device string inside m_lidarPropertiesCalib (dwLidarProperties m_lidarPropertiesCalib{}; see the code below).

Also, I use CAN-only egomotion (DW_EGOMOTION_ODOMETRY), no IMU yet.


What already works

Other samples/tools besides the calibration sample:

  • Replay / recording (original NVIDIA Driveworks samples) pipeline works
  • Point cloud processing works (I adapted the pointcloudprocessing sample)

Inside this customized calibration sample:

  • Egomotion works (CAN-only, no IMU yet)
    • DW_EGOMOTION_ODOMETRY
    • I feed VehicleIO states from CAN
  • I can compute and use egomotion transforms (not the root issue here)

Observed problem

  • m_updatedLidarToRig remains equal to the nominal transform (m_lidarInfo.nominalSensorToRig)
  • therefore m_correctionR and m_correctionT also remain zero/identity (expected consequence)
  • more importantly: calibration status (dwCalibrationStatus m_status{}) stays at 0.00% / “Not Accepted” the whole time

So I suspect the problem is earlier in the calibration pipeline (initialization / calibration routine setup), not just output interpretation.


Suspected area: calibration routine initialization (dwCalibrationEngine_initializeLidar)

This part succeeds (no error code), but I suspect the routine is internally not configured correctly for my custom LiDAR.

Calibration engine init (no error) (exactly like in the original LiDAR self-calibration sample from DriveWorks!)

CHECK_DW_ERROR_MSG(dwCalibrationEngine_initialize(&m_calibEngine, m_rigConfig, m_context),
                   "Error: Initialize calibration engine failed.");

Calibration routine init (with LiDAR spoofing)

Originally this failed because the calibration module did not support the Hesai LiDAR type.
To work around this, I spoofed the LiDAR device string to a supported Velodyne type:

std::strncpy(reinterpret_cast<char*>(m_lidarPropertiesCalib.deviceString),
             "VELO_HDL64E",
             sizeof(m_lidarPropertiesCalib.deviceString) - 1);
m_lidarPropertiesCalib.deviceString[sizeof(m_lidarPropertiesCalib.deviceString) - 1] = '\0';

dwCalibrationLidarParams lidarCalibrationParams{};
lidarCalibrationParams.lidarProperties = &m_lidarPropertiesCalib;

dwCalibrationEngine_initializeLidar(&m_calibRoutine,
                                    m_lidarInfo.sensorId,
                                    m_canIndex,
                                    &lidarCalibrationParams,
                                    cudaStreamDefault,
                                    m_calibEngine);

I printed the content of dwLidarProperties (the struct passed into dwCalibrationLidarParams::lidarProperties) and noticed clear discrepancies compared to what I expect for my Hesai AT128P recording. That raises the question whether the calibration routine (m_calibRoutine) is initialized successfully but with incorrect LiDAR model assumptions, causing calibration to never converge / never progress.

Observed vs. expected fields of dwLidarProperties

| Field (dwLidarProperties) | Observed (logged) | Expected (Hesai AT128P, single return @ 10 Hz) | Assessment / comment |
|---|---|---|---|
| deviceString | VELO_HDL64E (spoofed!) | VELO_HDL64E (spoofed!) | Intentionally spoofed |
| spinFrequency | 10 Hz | 10 Hz | ✅ matches |
| numberOfRows | 128 | 128 | ✅ matches |
| pointsPerPacket | 256 | 256 | ✅ matches (2 blocks × 128 channels) |
| pointsPerSpin | 320000 | 153600 | ❌ much too high (~2.08×) |
| pointsPerSecond | 3200000 | 1536000 | ❌ much too high (~2.08×) |
| packetsPerSpin | 1250 | 600 | ❌ much too high (~2.08×) |
| packetsPerSecond | 12 | 6000 | |
| horizontalFOVStart | 0.5 rad | convention-dependent | |
| horizontalFOVEnd | 2.8 rad | convention-dependent | |
| horizontal FOV span (end − start) | 2.3 rad ≈ 131.8° | about 120° (slightly more in practice) | |
| verticalFOVStart | −0.216697 rad (≈ −12.42°) | about −12.47° (≈ −0.21764 rad) | ✅ very close |
| verticalFOVEnd | 0.224673 rad (≈ +12.87°) | about +12.93° (≈ 0.22567 rad) | ✅ very close |
| validAuxInfos | 0x0 | unknown / mode-dependent | ⚠️ |
| lidarSSISizeInBytes | 0 | unknown / sensor-dependent | ⚠️ |
| availableReturns | ANY | unknown / mode-dependent | ⚠️ |
| horizontalAngles[] min/max | 0 / 0 | delta should be around 0.1° | |
| verticalAngles[] min/max | 0 / 0 | −12.42° / 12.87° would normally be expected | |

Questions

  1. What exactly from the rig is used by dwCalibrationEngine_initialize(&m_calibEngine, m_rigConfig, m_context)?
  2. For the LiDAR self-calibration routine:
    • Which fields of dwLidarProperties are actually relevant for calibration?
    • If the fields of dwLidarProperties are not all correct, can that cause the calibration status to stay at 0% (Not Accepted) without explicit errors?
  3. Can the fields of dwLidarProperties be overridden/set manually, and if so, how?
  4. Or is it more likely that the issue is somewhere else?

Dear @Jis ,
The dwCalibrationEngine_initialize() only stores the rig; it does not read specific fields at init.

Which fields of dwLidarProperties are actually relevant for calibration?

It needs deviceString, pointsPerSpin, and spinFrequency.

If the fields of dwLidarProperties are not all correct, can that cause the calibration status to stay at 0% (Not Accepted) without explicit errors?

Yes.

Let me check if using Velodyne device string is causing this issue and update you

Thank you for that information. Unfortunately I still don’t get how to set them. I rechecked the original code (from sample_calibration_lidar). This is all I find inside the code about dwLidarProperties:

Relevant Code for dwLidarProperties (from original NVIDIA SDK Calibration Sample)
class LidarSelfCalibrationSample : public DriveWorksSample
{
    ...
    dwLidarProperties m_lidarProperties{};
    ...
};

bool onInitialize() override
{
    // -----------------------------------------
    // Initialize lidar accumulator
    // -----------------------------------------
    {
        ...
        dwPointCloudAccumulator_initialize(&m_accumulator, &params, &m_lidarProperties, m_context);
        ...
    }

    ...
    // -----------------------------------------
    // Initialize Lidar Self-Calibration
    // -----------------------------------------
    {
        dwCalibrationLidarParams lidarCalibrationParams{};
        lidarCalibrationParams.lidarProperties = &m_lidarProperties;
        dwCalibrationEngine_initializeLidar(&m_calibRoutine, m_lidarInfo.sensorId, m_canIndex, &lidarCalibrationParams, cudaStreamDefault, m_calibEngine);
        ...
    }
    ...
}

void initializeSensorManager()
{
    dwSensorManagerParams smParams{};
    m_lidarIndex = getSensorIndex(DW_SENSOR_LIDAR, getArgument("lidar-sensor"));
    smParams.enableSensors[smParams.numEnableSensors++] = m_lidarIndex;
    ...
    dwSensorManager_initializeFromRigWithParams(&m_sensorManager, m_rigConfig, &smParams, 1024, m_sal);
    dwSensorManager_getSensorHandle(&lidarSensor, m_lidarIndex, m_sensorManager);
    dwSensorLidar_getProperties(&m_lidarProperties, lidarSensor);
    ...
}

→ So the properties are read out with dwSensorLidar_getProperties() via lidarSensor, which in turn comes from m_sensorManager. From this I deduce that the information is obtained by the sensor manager, which is configured by the rig and m_sal/m_context. So I checked the rig file of the calibration sample:

And this is the rig-part of the lidar:

Rig (from NVIDIA SDK)
 {
                "correction_rig_T": [
                    0.0,
                    0.0,
                    0.0
                ],
                "correction_sensor_R_FLU": {
                    "roll-pitch-yaw": [
                        0.0,
                        0.0,
                        0.0
                    ]
                },
                "name": "lidar:top:hdl64e",
                "nominalSensor2Rig_FLU": {
                    "roll-pitch-yaw": [
                        2.0,
                        -3.0,
                        -93.0
                    ],
                    "t": [
                        1.06643998622894,
                        -0.105389997363091,
                        1.94612005233765
                    ]
                },
                "parameter": "file=lidar_velodyne64.bin",
                "properties": null,
                "protocol": "lidar.virtual"
            }

→ There is no information about the missing fields.

Also, here is the lidar-part of my rig:

My Rig
{
        "name": "lidar:front:center",
        "nominalSensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            0.0
          ],
          "t": [
            3.5775,
            0.0,
            0.5845
          ]
        },
        "parameter": "file=.../lidar_front_center.bin,device=CUSTOM_EX, decoder-path=.../libplugin_lidar_hesai_x86_64.so,lidar_type=AT128E2X,correction_file=.../custom_angle_correction_frontcenter201.dat, scan-frequency=10.0",
        "properties": null,
        "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            0.0
          ],
          "t": [
            3.5775,
            0.0,
            0.5845
          ]
        }
      }

Following / Still open questions:

In my understanding, this means that the mentioned field pointsPerSpin is read by the sensor manager and not set manually. So how do I override/set these fields manually?

Also, exactly those fields are the less problematic ones: deviceString is set (via spoofing), spinFrequency is correct, and pointsPerSpin is about twice as high, but I am guessing it is just used for buffer capacity, so as long as it is not too low it shouldn’t be a problem?


Update 1: Overwriting m_lidarPropertiesCalib doesn’t solve the issue

I manually overwrote the fields of m_lidarPropertiesCalib (before the initialization), so the following fields are now correct (correct as in the ideal situation):

  • spinFrequency
  • numberOfRows
  • packetsPerSecond
  • packetsPerSpin
  • pointsPerSecond
  • pointsPerSpin

like this:
m_lidarPropertiesCalib = m_lidarProperties;
...
// Manually override clearly wrong/inconsistent fields (AT128P, Single Return, 10 Hz)
m_lidarPropertiesCalib.spinFrequency    = 10.0f;
m_lidarPropertiesCalib.numberOfRows     = 128;

m_lidarPropertiesCalib.packetsPerSecond = 6000;
m_lidarPropertiesCalib.packetsPerSpin   = 600;

m_lidarPropertiesCalib.pointsPerSecond  = 1536000;
m_lidarPropertiesCalib.pointsPerSpin    = 153600;

dwCalibrationLidarParams lidarCalibrationParams{};
lidarCalibrationParams.lidarProperties = &m_lidarPropertiesCalib;

But the calibration still doesn’t work.


Update 2: Timing of LiDAR & CAN is good (just to rule out errors here)

I checked the timestamps of the LiDAR packets, CAN messages and the sweep itself: Within each LiDAR sweep, the egomotion estimates and LiDAR packets arrive in the expected temporal order and are consistently time-aligned. In other words, during the sweep window there is no unexpected timing mismatch between egomotion and LiDAR data.


Update 3: Output with --verbose=1

I don’t see any problems here, but here’s the output of one sweep with --verbose=1:

Adding current ICP pose to calibration engine:
	  1.000000000000	 -0.000174980523	 -0.000114009468	  0.173969864845
	  0.000174967136	  1.000000000000	 -0.000117421761	 -0.003303708509
	  0.000114030016	  0.000117401811	  1.000000000000	  0.007323410828
	  0.000000000000	  0.000000000000	  0.000000000000	  1.000000000000

Adding Ego-Motion pose to calibration engine:
	  1.000000000000	 -0.000007677823	  0.000000000000	  0.174665004015
	  0.000007677823	  1.000000000000	 -0.000000000000	  0.000000670087
	  0.000000000000	  0.000000000000	  1.000000000000	  0.000000000000
	  0.000000000000	  0.000000000000	  0.000000000000	  1.000000000000

Adding 80000 Lidar points to calibration engine.

@SivaRamaKrishnaNV Do you have any update for this topic? Maybe another idea what could cause this or even how to fix it?