Detecting Objects in Point Clouds with NVIDIA CUDA-Pointpillars

Originally published at:

Use long-range and high-precision data sets to achieve 3D object detection for perception, mapping, and localization algorithms.

Can this run on a live point cloud?
If so, do you have recommendations on hardware to generate a live point cloud for this application?

The model takes a point cloud as input, where each point is {x, y, z, i} — the same as the point type from PCL.
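A minimal sketch of that {x, y, z, i} layout in memory, assuming a NumPy workflow (the shape and values here are illustrative, not from the actual repository):

```python
import numpy as np

# Illustrative point cloud: one row per point, columns are x, y, z, intensity.
points = np.array([
    [1.0, 2.0, 0.5, 0.9],   # x, y, z, i for point 0
    [3.0, -1.0, 0.2, 0.4],  # x, y, z, i for point 1
], dtype=np.float32)

assert points.shape[1] == 4  # {x, y, z, i} per point

# Flatten into a contiguous float32 buffer: x0, y0, z0, i0, x1, y1, z1, i1, ...
buffer = np.ascontiguousarray(points).ravel()
```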

Hi Lei Fan, thanks for your work!
Regarding the export-to-ONNX part, I have a question: how can you ensure you export only the middle part of the network (after voxelization and encoding to 10 features per pillar), not the whole pipeline, which includes voxelization, pillar feature extraction, scatter to BEV, backbone, and post-processing?

 torch.onnx.export(model,                   # model being run
          (dummy_voxel_features, dummy_voxel_num_points, dummy_coords), # model input (or a tuple for multiple inputs)
          "./pointpillar.onnx",      # where to save the model (can be a file or file-like object)
          export_params=True,        # store the trained parameter weights inside the model file
          opset_version=11,          # the ONNX version to export the model to
          do_constant_folding=True,  # whether to execute constant folding for optimization
          input_names=['input', 'voxel_num_points', 'coords'],        # the model's input names
          output_names=['cls_preds', 'box_preds', 'dir_cls_preds'])   # the model's output names


We remove the pre-processing and post-processing from PointPillars in OpenPCDet and keep only the middle part, so the exported ONNX model does not contain those two stages.
You can check the "tool" directory, which has the source code showing how to convert the trained model into an ONNX file.

How can I input (x, y, z, i) to the model?
float *points = (float *)malloc(cloud.size() * 4 * sizeof(float));
for (size_t i = 0; i < cloud.size(); i++) {
    points[i * 4 + 0] = cloud[i].x;
    points[i * 4 + 1] = cloud[i].y;
    points[i * 4 + 2] = cloud[i].z;
    points[i * 4 + 3] = cloud[i].intensity;
}
Is this right?