Range image in modified pointcloudprocessing sample always black (depth = 0)

DRIVE OS Version:

  • Drive OS 6.0.10, DriveWorks 5.20.24
  • Hardware: Drive Orin P3710

Environment/Setup:

  • Base sample: sample_pointcloudprocessing
  • Sensors:
    • 4× Hesai AT128 LiDARs via custom lidar.custom plugin
    • CAN-based egomotion (VW DBC, live CAN via can.socket)
  • Rig: custom JSON with 4 LiDARs around the vehicle (approx. 360° coverage around origin), egomotion frame configured

The original, unmodified sample_pointcloudprocessing works on my system: I can see both the fused point cloud and the range image (bottom-right “Rendered range image” tile) as expected. In my modified version, by contrast, it looks like all points are being rejected or mapped to zero depth inside the range image creator, before any GL/visualization step.

Short Version / Problem Description:

I modified sample_pointcloudprocessing so I can use it with my setup (see “What I changed”). Everything works so far (see “What works”) except the range image: it stays black. The relevant buffers (m_stitchedDepthImageHost, m_stitchedDepthMap3D and m_stitchedDepthImage) contain only finite values, but all of them are 0, so the range is supposedly 0 for all points (see “What does not work”). As I said, the rest works, so m_stitchedPoints is filled with plausible values (calculated 3D-point ranges between 0.5 m and 200 m). Clipping shouldn’t be the problem (see “Things I’ve tried so far”). I don’t know what else could be the reason, nor what to do/try next.

Details:

What I changed

Compared to the original sample, I tried to keep changes minimal:

  • Use live LiDAR data from 4 Hesai AT128 sensors instead of recorded data
  • Use a custom rig (4 LiDARs around the car, approx. 360° coverage) instead of the original rig
  • Use CAN + custom DBC for egomotion
  • Point cloud fusion / stitching is still done the same way as in the sample (result in m_stitchedPoints)
  • All range image related code (initialization, params, binding, process, rendering) is identical in structure to the sample

Example (unchanged pattern):

dwPointCloudRangeImageCreatorParams params{};
CHECK_DW_ERROR(dwPCRangeImageCreator_getDefaultParams(&params));

// I only adjusted clipping later, but I also tested with pure defaults.

// Later:
CHECK_DW_ERROR(dwPCRangeImageCreator_bindInput(&m_stitchedPoints, m_rangeImageCreator));
CHECK_DW_ERROR(dwPCRangeImageCreator_bindPointCloudOutput(&m_stitchedDepthMap3D, m_rangeImageCreator));
CHECK_DW_ERROR(dwPCRangeImageCreator_bindOutput(m_stitchedDepthImage, m_rangeImageCreator));

// In processing:
CHECK_DW_ERROR(dwPCRangeImageCreator_process(m_rangeImageCreator));

What works
  • All 4 Hesai LiDARs are running and decoding via the custom plugin
  • The stitched / fused point cloud (m_stitchedPoints) is rendered and looks plausible:
    • ~400k points per spin
    • approx. 360° coverage around the vehicle
    • Radii and heights are in a reasonable range (e.g. minR ~0.6 m, maxR ~50–200 m, z in approx [-1.3 m, 10 m]).
  • The RenderEngine + ImageStreamerGL setup is fine:
    • If I replace the range-image-based drawing with a simple gradient test in the RGBA image, I see a left-to-right black → white gradient in the “Rendered range image” tile.
      → So the OpenGL pipeline and the GUI rendering are OK.
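For context, the gradient test was essentially the following (my own sketch; the buffer layout and function name are mine, not from the sample). It fills an RGBA8 buffer with a left-to-right black-to-white ramp, which I then streamed through the existing ImageStreamerGL path instead of the range image:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Fill a row-major RGBA8 image with a left-to-right black -> white gradient.
// If this shows up in the tile, the GL streaming/rendering path is fine.
std::vector<uint8_t> makeGradientRGBA(uint32_t width, uint32_t height)
{
    std::vector<uint8_t> img(static_cast<std::size_t>(width) * height * 4);
    for (uint32_t y = 0; y < height; ++y)
    {
        for (uint32_t x = 0; x < width; ++x)
        {
            // Map column 0..width-1 to intensity 0..255.
            uint8_t v = static_cast<uint8_t>((255u * x) / (width > 1 ? width - 1 : 1));
            std::size_t i = (static_cast<std::size_t>(y) * width + x) * 4;
            img[i + 0] = v;   // R
            img[i + 1] = v;   // G
            img[i + 2] = v;   // B
            img[i + 3] = 255; // A (opaque)
        }
    }
    return img;
}
```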

What does *not* work
  • The bottom-right range image tile always stays completely black. (There is no DriveWorks error from any of the dwPCRangeImageCreator_* calls or from any other function.)

I added some debug prints around the dwPointCloudRangeImageCreator outputs, so I know that:

  • the depth-map point cloud buffer m_stitchedDepthMap3D has the correct size (given width*height), but all points are (0,0,0) with range 0.
  • same with the CUDA depth image (m_stitchedDepthImage) and the CPU-copy of it (m_stitchedDepthImageHost)
    DepthMap3D: size=524288
    P[0..9] = (0,0,0) r=0
    DepthMap3D: finite(first 10)=10 minR=0 maxR=0 # --> the first 10 points are finite with a range of 0 
    
    CUDA DepthImage: w=4096 h=128 finite=524288 min=0 max=0
    Range image: valid (>0) pixels = 0 # --> all pixels are finite, but 0
    
  • But as I already said, the stitched input cloud stats (directly from m_stitchedPoints) looks good:
    DEBUG stitched stats: size=395902 finite=395902 minR=1.57123 maxR=51.4837 minZ=-1.33423 maxZ=9.8342
    # Similar ranges for other spins; the fused cloud rendering looks good.
    
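For reference, the stats in the debug prints above come from a small helper along these lines (my own sketch with a stand-in point struct; the actual buffers use the sample’s XYZI layout):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>

// Minimal stand-in for the XYZI point layout used by the sample's buffers.
struct PointXYZI { float x, y, z, intensity; };

struct CloudStats
{
    std::size_t finite = 0; // points with all-finite coordinates
    float minR = std::numeric_limits<float>::infinity();
    float maxR = 0.0f;      // range r = sqrt(x^2 + y^2 + z^2)
};

// Count finite points and track min/max range, as in the
// "DEBUG stitched stats" / "DepthMap3D" prints above.
CloudStats computeStats(const PointXYZI* pts, std::size_t n)
{
    CloudStats s;
    for (std::size_t i = 0; i < n; ++i)
    {
        if (std::isfinite(pts[i].x) && std::isfinite(pts[i].y) && std::isfinite(pts[i].z))
        {
            ++s.finite;
            float r = std::sqrt(pts[i].x * pts[i].x + pts[i].y * pts[i].y + pts[i].z * pts[i].z);
            if (r < s.minR) s.minR = r;
            if (r > s.maxR) s.maxR = r;
        }
    }
    return s;
}
```

For m_stitchedDepthMap3D this reports finite points but minR = maxR = 0, while for m_stitchedPoints it reports the plausible ranges shown above.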
Things I've tried so far
  • as I wrote in “What does not work”, I added some debug prints

  • as I wrote in “What works”, I replaced the range-image-based drawing with a simple gradient test to see if the RenderEngine + ImageStreamerGL setup is fine → that test worked, so the OpenGL pipeline and the GUI rendering are OK

  • I tried explicitly “opening up” clipping:

    Details of the opening
    params.clippingParams.minElevationRadians = -pi / 2.0;
    params.clippingParams.maxElevationRadians =  pi / 2.0;
    
    params.clippingParams.orientedBoundingBox.center      = {0.f, 0.f, 0.f};
    params.clippingParams.orientedBoundingBox.rotation    = DW_IDENTITY_MATRIX3F;
    params.clippingParams.orientedBoundingBox.halfAxisXYZ = {1000.f, 1000.f, 1000.f};
    

Even with this “huge OBB” and ±90° elevation, I still get:

  • DepthMap3D all zeros
  • CUDA depth image all zeros
  • Range image tile black

So it looks like all points are being rejected or mapped to zero depth inside the range image creator, before any GL/visualization step.

Extra content:
  • Right now the CAN data (for egomotion) is hard-coded at some point; I don’t use the CAN signals yet.
  • There are also CAN warnings such as the sensor lagging and a large DW_SENSOR_STATE_DELTA_HOST_AND_SENSOR_TIME (~1.7e15 us).
  • However, ICP and egomotion should mainly affect the stitched cloud / ego compensation between spins. The stitched cloud itself looks fine, and the range image for a single spin shouldn’t depend on successful ICP, as far as I understand.

Any hints where to look or what to double-check would be very much appreciated.
I think there might be a bug either in my code or in the official one.
Thanks in advance!

Dear @Jis ,
Can you record the lidar data and check the point cloud processing sample using the recorded data referenced in the rig JSON file? Don’t use CAN + DBC in the code, to isolate the issue and reproduce it with recorded files.

@SivaRamaKrishnaNV I used recorded data and no CAN - the problem is still exactly the same, so that’s not the reason.

Could you share the recorded data and the rig.json file so we can reproduce the issue with the point cloud processing sample?

@SivaRamaKrishnaNV

The rig.json file:

Here’s the short version (only 1 lidar representing all 4) of the rig file:

{
  "rig": {
    "sensors": [
        {
        "name": "lidar:rear",
        "nominalSensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            180
          ],
          "t": [
            -1.47,
            0.0,
            0.90
          ]
        },
        "parameter":"file=.../lidar_rear.bin,device=CUSTOM_EX,decoder-path=libplugin_lidar_hesai_<architecture_type>.so,lidar_type=AT128E2X,correction_file=correction_at128.dat",
        "properties": null, "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            180
          ],
          "t": [
            -1.47,
            0.0,
            0.90
          ]
        }
        }
      ],

    "vehicle": {
      "valid": true,
      "value": {see below}
    },
    "vehicleio": []
  },
  "version": 7
}

Here’s the long version (all 4 lidars):

Long Version of rig with all 4 lidars
{
  "rig": {
    "sensors": [
        {
        "name": "lidar:front:center",
        "nominalSensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            0.0
          ],
          "t": [
            3.07,
            0.0,
            0.90
          ]},
        "parameter": "file=...,device=CUSTOM_EX,decoder-path=libplugin_lidar_hesai_aarch64.so,lidar_type=AT128E2X,correction_file=correction_at128.dat",
        "properties": null, "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            0.0
          ],
          "t": [
            3.07,
            0.0,
            0.90
          ]}
        },
        {
        "name": "lidar:front:right",
        "nominalSensor2Rig": {
          "roll-pitch-yaw":  [
            0.0,
            0.0,
            90
          ],
          "t": [
            2.47,
            -0.92,
            0.90
          ]
        },
        "parameter": "file=...,device=CUSTOM_EX,decoder-path=libplugin_lidar_hesai_aarch64.so,lidar_type=AT128E2X,correction_file=correction_at128.dat",
        "properties": null, "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw":  [
            0.0,
            0.0,
            90
          ],
          "t": [
            2.47,
            -0.92,
            0.90
          ]
        }
        },
        {
        "name": "lidar:front:left",
        "nominalSensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            -90
          ],
          "t": [
            2.47,
            0.92,
            0.90
          ]
        },
        "parameter":"file=...,device=CUSTOM_EX,decoder-path=libplugin_lidar_hesai_aarch64.so,lidar_type=AT128E2X,correction_file=correction_at128.dat",
        "properties": null, "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            -90
          ],
          "t": [
            2.47,
            0.92,
            0.90
          ]
        }
        },
        {
        "name": "lidar:rear",
        "nominalSensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            180
          ],
          "t": [
            -1.47,
            0.0,
            0.90
          ]
        },
        "parameter":"file=.../lidar_rear.bin,device=...,decoder-path=...,lidar_type=...,correction_file=...",
        "properties": null, "protocol": "lidar.virtual",
        "sensor2Rig": {
          "roll-pitch-yaw": [
            0.0,
            0.0,
            180
          ],
          "t": [
            -1.47,
            0.0,
            0.90
          ]
        }
        }
      ],

    "vehicle": {
      "valid": true,
      "value": {see below}
    },
    "vehicleio": []
  },
  "version": 7
}

this is what’s inside value of vehicle, if that’s relevant:

Content of vehicle:value:
    "aeroHeight":  ...,
    "aerodynamicDragCoeff":  ...,
    "axlebaseFront":  ...,
    "axlebaseRear":  ...,
    "brakeActuatorTimeConstant":  ...,
    "bumperFront":  ...,
    "bumperRear":  ...,
    "centerOfMassHeight":  ...,
    "centerOfMassToFrontAxle":  ...,
    "centerOfMassToRearAxle":  ...,
    "driveByWireTimeConstant":  ...,
    "driveByWireTimeDelay":  ...,
    "effectiveMass":  ...,
    "frontCorneringStiffness":  ...,
    "frontSteeringOffset":  ...,
    "frontalArea":  ...,
    "height":  ...,
    "inertia3D": [ ...,  ...,  ... ],
    "length": 4.539,
    "mass":  ...,
    "maxEnginePower":  ...,
    "maxSteeringWheelAngle":  ...,
    "rearCorneringStiffness":  ...,
    "rollingResistanceCoeff":  ...,
    "steeringWheelToSteeringMap": [ ... ],
    "throttleActuatorTimeConstant": ...,
    "torqueLUT": {
      "brakePedalInput": ...,
      "throttlePedalInput": ...,
      "throttleSpeedInput":  ...,
      "throttleTorqueOutput": [ ...]
    },
    "wheelRadius": [ ...],
    "wheelbase":  ...,
    "width":  ...,
    "widthWithMirrors":  ...,
    "hasCabin":  ...,
    "actuation": {
      "curvatureTimeConstant":  ...,
      "steeringWheelToSteeringMap": [ ...,  ...,  ...,  ...,  ...,  ...],
      "throttleActuatorTimeDelay":  ...,
      "torqueLUT": {
        "throttleTorqueOutput": ...,
        "throttleSpeedInput": ...,
        "brakeTorqueOutput":  ...,
        "throttlePedalInput":  ...,
        "brakePedalInput":  ...
      },
      "driveByWireNaturalFrequency":  ...,
      "throttleActuatorTimeConstant":  ...,
      "rearWheelAngleTimeDelay":  ...,
      "decelerationTimeDelay":  ...,
      "frontWheelAngleTimeDelay":  ...,
      "driveByWireDampingRatio":  ...,
      "driveByWireTimeDelay":  ...,
      "decelerationTimeConstant":  ...,
      "accelerationTimeDelay":  ...,
      "brakeActuatorTimeDelay":  ...,
      "effectiveMass":  ...,
      "driveByWireTimeConstant":  ...,
      "accelerationTimeConstant":  ...,
      "maxSteeringWheelAngle":  ...,
      "rearWheelAngleTimeConstant":  ...,
      "brakeActuatorTimeConstant":  ...,
      "isDriveByWireSecondOrder":  ...,
      "curvatureTimeDelay":  ...,
      "frontWheelAngleTimeConstant":  ...
    },
    "numTrailers": 0,
     "body": {
      "boundingBoxPosition": [-1.47, 0.0, 0.0],
      "centerOfMass": [ ...,  ...,  ...],
      "height": 20,
      "inertia": [ ..., ..., ...],
      "length": 50,
      "mass":  ...,
      "width": 50,
      "widthWithMirrors":  ...,
      "rearAxleToAPillar":  ...,
      "rearAxleToBPillar":  ...,
      "rearAxleToCPillar":  ...
    },
    "axleRear": {
      "nominalWheelRadiusRight": 0.358,
      "wheelRadiusLeft": 0.358,
      "nominalWheelRadiusLeft": 0.358,
      "track": ...,
      "position": ....,
      "corneringStiffness": ....,
      "wheelRadiusRight": ...
    },
    "axleFront": {
      "nominalWheelRadiusRight":  ...,
      "wheelRadiusLeft":  ...,
      "nominalWheelRadiusLeft":  ...,
      "track":  ...,
      "position":  ...,
      "corneringStiffness":  ...,
      "wheelRadiusRight":  ...
    }
  }

The decoder is from the official Hesai lidar plugin for DriveWorks: Plugin. The <architecture_type> is either x86_64 for the host or aarch64 for the Orin.

The recorded data:

I’m not sure what exactly you want to know about it. I recorded it via the recording tool from DriveWorks. That way I got 2 files for each lidar, like:

  • lidar_rear.bin (around 800 MB)
  • lidar_rear.bin.seek (around 530 kB)

same for the other 3 lidars.

Share these files to repro the issue with recorded files.

@SivaRamaKrishnaNV

lidar_front_center.zip (64.2 MB)
lidar_front_left.zip (78.3 MB)
lidar_rear.zip (86.4 MB)
The files of the last lidar (lidar_front_right) are too big. I will try to compress them more later, but maybe those 3 zip files are enough for right now? If you definitely need the last zip file, let me know.
Thanks again for helping!

@SivaRamaKrishnaNV
Quick update with some more findings that (I hope) help to narrow down the issue:

  1. As a cross-check I adapted the sample_icp to my Hesai AT128:
    → I did that to test whether the input of the ICP (m_stitchedDepthMap3D / m_stitchedDepthMap3DPrev in the pointcloudprocessing sample and srcPointCloud / tgtPointCloud in the ICP sample) is a zero matrix as well

    • In the ICP sample the processing of the point cloud is done without the RangeImageCreator in between
    • fullSpinPointClouds[...] and the derived rangePointClouds[...] from the accumulator (organized, depth-map-like structure) work, ICP runs (at least there are no errors) and I can visualize the point clouds as expected.
    • As far as I know, the RangeImageCreator is necessary for things like:
      • organization of the point cloud after the stitching
      • clipping (see 2.) → in the ICP sample it is done via rangePointClouds[...] as part of the fullSpinPointClouds[...]
      • derivation of m_stitchedDepthImageHost, m_stitchedDepthMap3D and m_stitchedDepthImage

    So my guess is that it is not necessary in the ICP sample because the result of the accumulator is already organized in a depth-map-like way (and the clipping is done in the mentioned way inside the accumulator)? But when there is stitching, is it necessary to rearrange the point cloud in an organized way? Is there anything else that the RangeImageCreator does?

  2. To rule out clipping, I now set all of the existing clipping parameters (here very permissively) for the RangeImageCreator, so I added the azimuth and dist parameters:

    New expanded clipping parameters
    const double pi = 3.141592653589793;
    params.clippingParams.minElevationRadians  = -pi / 2.0;
    params.clippingParams.maxElevationRadians  =  pi / 2.0;
    params.clippingParams.minAzimuthRadians    = -pi;
    params.clippingParams.maxAzimuthRadians    =  pi;
    params.clippingParams.nearDist             = 0.01f;
    params.clippingParams.farDist              = 200.0f;
    
    params.clippingParams.orientedBoundingBox.center      = {0.f, 0.f, 0.f};
    params.clippingParams.orientedBoundingBox.rotation    = DW_IDENTITY_MATRIX3F;
    params.clippingParams.orientedBoundingBox.halfAxisXYZ = {1000.f, 1000.f, 1000.f};
    

    Given the debug stats of m_stitchedPoints (ranges mostly between ~0.6 m and ~60 m, sometimes up to ~200 m, full 360° coverage around the vehicle), these settings should not clip everything away.
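To double-check this on the CPU, one could mirror these clipping parameters in a small host-side predicate (my own guess at the semantics, not the actual creator logic) and run it over samples of m_stitchedPoints:

```cpp
#include <cmath>

// Hypothetical host-side pre-check mirroring the clipping parameters above:
// returns true if a point should survive distance/elevation/azimuth clipping.
// The names and semantics are my interpretation, not the real DriveWorks check.
bool survivesClipping(float x, float y, float z,
                      float minElev, float maxElev,
                      float minAz, float maxAz,
                      float nearDist, float farDist)
{
    float r = std::sqrt(x * x + y * y + z * z);
    if (r < nearDist || r > farDist)
        return false; // outside [nearDist, farDist]
    float elevation = (r > 0.0f) ? std::asin(z / r) : 0.0f;
    if (elevation < minElev || elevation > maxElev)
        return false; // outside the elevation band
    float azimuth = std::atan2(y, x); // (-pi, pi]
    if (azimuth < minAz || azimuth > maxAz)
        return false; // outside the azimuth window
    return true;
}
```

With the permissive values above, points with the reported ranges (~0.6 m to ~200 m) should all pass this check, which is why I don’t think clipping explains the all-zero output.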

→ So at this point, the only suspicious block seems to be the dwPointCloudRangeImageCreator
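For what it’s worth, my mental model of what the range image creator should be doing (related to my question in point 1) is a spherical projection into an organized grid, roughly like this simplified sketch (my own illustration, not the DriveWorks implementation):

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified illustration: project an unorganized cloud into an organized
// width x height grid; column from azimuth, row from elevation, cell = range.
// Cells with no return keep 0. Assumes width, height >= 2.
std::vector<float> buildRangeImage(const std::vector<std::array<float, 3>>& pts,
                                   uint32_t width, uint32_t height,
                                   float minElev, float maxElev)
{
    const float pi = 3.14159265358979f;
    std::vector<float> range(static_cast<std::size_t>(width) * height, 0.0f);
    for (const auto& p : pts)
    {
        float r = std::sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2]);
        if (r <= 0.0f)
            continue; // no return
        float azimuth   = std::atan2(p[1], p[0]); // (-pi, pi]
        float elevation = std::asin(p[2] / r);    // [-pi/2, pi/2]
        if (elevation < minElev || elevation > maxElev)
            continue; // clipped, analogous to the elevation clipping params
        auto col = static_cast<uint32_t>((azimuth + pi) / (2.0f * pi) * (width - 1));
        auto row = static_cast<uint32_t>((maxElev - elevation) / (maxElev - minElev) * (height - 1));
        range[static_cast<std::size_t>(row) * width + col] = r;
    }
    return range;
}
```

If something like this is what happens internally, then with valid input ranges and permissive clipping the output grid should contain nonzero cells, which makes the all-zero result even more puzzling to me.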

Maybe this new information is useful for finding the problem?

Dear @Jis ,
Do you see a black image like the one reported in “Could not see range image while executing point cloud processing sample”? Is it related?

@SivaRamaKrishnaNV
It looks the same, yes. But the original sample works, I am able to see the range image there.

Could you please provide any update for this topic?

@carolyuu

No, I am kind of stuck with what I already wrote.

@SivaRamaKrishnaNV

Do you have any update for this topic?