DRIVE OS Version:
- Drive OS 6.0.10, DriveWorks 5.20.24
- Hardware: Drive Orin P3710
Environment/Setup:
- Base sample: sample_pointcloudprocessing
- Sensors:
  - 4× Hesai AT128 LiDARs via custom lidar.custom plugin
  - CAN-based egomotion (VW DBC, live CAN via can.socket)
- Rig: custom JSON with 4 LiDARs around the vehicle (approx. 360° coverage around origin), egomotion frame configured
The original, unmodified sample_pointcloudprocessing works on my system: I can see both the fused point cloud and the range image (bottom-right “Rendered range image” tile) as expected.
Short Version / Problem Description:
I modified sample_pointcloudprocessing so I can use it with my setup (see “What I changed”). Everything works so far (see “What works”) except the range image: it stays black. The contents of the relevant buffers (m_stitchedDepthImageHost, m_stitchedDepthMap3D and m_stitchedDepthImage) are all finite, but contain only 0s, so the range is supposedly 0 for all points (see “What does not work”). As I said, the rest works: m_stitchedPoints is filled with plausible values (computed ranges of the 3D points lie between 0.5 m and 200 m). The clipping shouldn’t be the problem (see “Things I’ve tried so far”). I don’t know what else could be the reason, nor what to try next.
Details:
What I changed
Compared to the original sample, I tried to keep changes minimal:
- Use live LiDAR data from 4 Hesai AT128 sensors instead of recorded data
- Use a custom rig (4 LiDARs around the car, approx. 360° coverage) instead of the original rig
- Use CAN + custom DBC for egomotion
- Point cloud fusion / stitching is still done the same way as in the sample (result in m_stitchedPoints)
- All range-image-related code (initialization, params, binding, process, rendering) is identical in structure to the sample
Example (unchanged pattern):
dwPointCloudRangeImageCreatorParams params{};
CHECK_DW_ERROR(dwPCRangeImageCreator_getDefaultParams(&params));
// I only adjusted clipping later, but I also tested with pure defaults.

// Later:
CHECK_DW_ERROR(dwPCRangeImageCreator_bindInput(&m_stitchedPoints, m_rangeImageCreator));
CHECK_DW_ERROR(dwPCRangeImageCreator_bindPointCloudOutput(&m_stitchedDepthMap3D, m_rangeImageCreator));
CHECK_DW_ERROR(dwPCRangeImageCreator_bindOutput(m_stitchedDepthImage, m_rangeImageCreator));

// In processing:
CHECK_DW_ERROR(dwPCRangeImageCreator_process(m_rangeImageCreator));
What works
- All 4 Hesai LiDARs are running and decoding via the custom plugin
- The stitched / fused point cloud (m_stitchedPoints) is rendered and looks plausible:
  - ~400k points per spin
  - approx. 360° coverage around the vehicle
  - radii and heights are in a reasonable range (e.g. minR ~0.6 m, maxR ~50–200 m, z approx. in [-1.3 m, 10 m])
- The RenderEngine + ImageStreamerGL setup is fine:
- If I replace the range-image-based drawing with a simple gradient test in the RGBA image, I see a left-to-right black → white gradient in the “Rendered range image” tile.
→ So the OpenGL pipeline and the GUI rendering are OK.
What does *not* work
- The bottom-right range image tile always stays completely black. (There is no DriveWorks error from any of the dwPCRangeImageCreator_* calls, or from any other call.)
I added some debug prints around the dwPointCloudRangeImageCreator outputs, so I know that:
- the depth-map point cloud buffer m_stitchedDepthMap3D has the correct size (given width*height), but all points are (0,0,0) with range 0
- the same holds for the CUDA depth image (m_stitchedDepthImage) and its CPU copy (m_stitchedDepthImageHost):

DepthMap3D: size=524288 P[0..9] = (0,0,0) r=0
DepthMap3D: finite(first 10)=10 minR=0 maxR=0    # --> the first 10 points are finite, with a range of 0
CUDA DepthImage: w=4096 h=128 finite=524288 min=0 max=0
Range image: valid (>0) pixels = 0               # --> all pixels are finite, but 0

- but, as I already said, the stitched input cloud stats (directly from m_stitchedPoints) look good:

DEBUG stitched stats: size=395902 finite=395902 minR=1.57123 maxR=51.4837 minZ=-1.33423 maxZ=9.8342
# Similar ranges for other spins; the fused cloud rendering looks good.
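For reference, the “DEBUG stitched stats” line comes from a small standalone helper, roughly like this (simplified; Point4f and CloudStats are my own stand-in types for the host copy of the XYZI points, not DriveWorks types):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Stand-in for one XYZI point of the stitched cloud (host copy of m_stitchedPoints).
struct Point4f { float x, y, z, intensity; };

struct CloudStats { size_t finite; float minR, maxR, minZ, maxZ; };

// Computes the numbers shown in the debug line above: the finite-point count
// plus min/max range and min/max z over the finite points.
CloudStats stitchedStats(const std::vector<Point4f>& pts)
{
    CloudStats s{0,
                 std::numeric_limits<float>::max(), 0.f,
                 std::numeric_limits<float>::max(), std::numeric_limits<float>::lowest()};
    for (const auto& p : pts)
    {
        if (!std::isfinite(p.x) || !std::isfinite(p.y) || !std::isfinite(p.z))
            continue;
        ++s.finite;
        const float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        s.minR = std::min(s.minR, r);
        s.maxR = std::max(s.maxR, r);
        s.minZ = std::min(s.minZ, p.z);
        s.maxZ = std::max(s.maxZ, p.z);
    }
    return s;
}
```

The same helper, run on m_stitchedDepthMap3D, is what reports minR=0 / maxR=0 there.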
Things I've tried so far
- As I wrote in “What does not work”, I added some debug prints.
- As I wrote in “What works”, I replaced the range-image-based drawing with a simple gradient test to check whether the RenderEngine + ImageStreamerGL setup is fine → that test worked, so the OpenGL pipeline and the GUI rendering are OK.
- I tried explicitly “opening up” the clipping:

params.clippingParams.minElevationRadians = -pi / 2.0;
params.clippingParams.maxElevationRadians = pi / 2.0;
params.clippingParams.orientedBoundingBox.center      = {0.f, 0.f, 0.f};
params.clippingParams.orientedBoundingBox.rotation    = DW_IDENTITY_MATRIX3F;
params.clippingParams.orientedBoundingBox.halfAxisXYZ = {1000.f, 1000.f, 1000.f};
Even with this “huge OBB” and ±90° elevation, I still get:
- m_stitchedDepthMap3D all zeros
- CUDA depth image all zeros
- range image tile black
So it looks like all points are being rejected or mapped to zero depth inside the range image creator, before any GL/visualization step.
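To back up that suspicion: this is the kind of clipping test I would expect those parameters to drive (my own standalone re-implementation and naming, not DriveWorks code; identity OBB rotation assumed). With the opened-up values above, essentially no real LiDAR point can fail it, which is why I don’t think clipping is the cause:

```cpp
#include <cmath>

struct Vec3f { float x, y, z; };

// Simplified stand-in for the clipping I assume the range image creator applies:
// an elevation window plus an (identity-rotation, axis-aligned) bounding box.
bool passesClipping(const Vec3f& p,
                    float minElevRad, float maxElevRad,
                    const Vec3f& boxCenter, const Vec3f& halfAxisXYZ)
{
    const float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    if (r <= 0.f)
        return false;                       // degenerate point at the origin
    const float elev = std::asin(p.z / r);  // elevation above the xy-plane
    if (elev < minElevRad || elev > maxElevRad)
        return false;
    return std::fabs(p.x - boxCenter.x) <= halfAxisXYZ.x &&
           std::fabs(p.y - boxCenter.y) <= halfAxisXYZ.y &&
           std::fabs(p.z - boxCenter.z) <= halfAxisXYZ.z;
}
```

With minElevRad = -π/2, maxElevRad = +π/2 and a 1000 m half-axis box around the origin, every finite point with r > 0 passes.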
Extra context:
- Right now the CAN data (for egomotion) is hard-coded at some point; I don’t use the live CAN signals yet.
- I also see CAN warnings like “sensor lagging” and a large DW_SENSOR_STATE_DELTA_HOST_AND_SENSOR_TIME (~1.7e15 us).
- However, ICP and egomotion should mainly affect the stitched cloud / ego compensation between spins. The stitched cloud itself looks fine, and the range image for a single spin shouldn’t depend on successful ICP, as far as I understand.
Any hints where to look or what to double-check would be very much appreciated.
I think there might be a bug in my code, or possibly in the official sample.
Thanks in advance!