3D Part and Scene Segmentation with Point-Voxel CNN on NVIDIA Jetson

In our NeurIPS’19 paper [1], we propose Point-Voxel CNN (PVCNN), an efficient 3D deep learning method for a range of 3D vision applications. Here we show a 3D part segmentation demo that runs at 20 FPS on a Jetson Nano, whereas the most efficient previous model, PointNet, runs at only 8 FPS. We also compare PVCNN and PointNet on 3D indoor scene segmentation on a Jetson AGX Xavier: our network takes just 2.7 seconds to process more than one million points, while PointNet takes more than 4.1 seconds and achieves around 9% lower mIoU than our method.
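For readers curious where the speedup comes from: each PVConv layer fuses a coarse voxel branch (voxelize, 3D convolution, devoxelize) with a fine-grained point-wise MLP branch. Below is a minimal PyTorch sketch of that idea; the class and variable names are illustrative rather than the repository's actual API, and it uses nearest-neighbor devoxelization for brevity where the paper uses trilinear interpolation (in the released code, voxelization and devoxelization are implemented as custom CUDA kernels):

import torch
import torch.nn as nn

class PVConvSketch(nn.Module):
    # Illustrative Point-Voxel convolution (hypothetical names, not the repo API):
    # a coarse voxel branch fused with a fine-grained point-wise MLP branch.
    def __init__(self, in_channels, out_channels, resolution=32):
        super().__init__()
        self.resolution = resolution
        # Voxel branch: 3D convolution over a low-resolution grid.
        self.voxel_conv = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )
        # Point branch: shared MLP (1x1 conv over points) keeps fine detail.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_channels, out_channels, kernel_size=1),
            nn.BatchNorm1d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, features, coords):
        # features: (B, C, N) per-point features; coords: (B, 3, N) in [0, 1].
        B, C, N = features.shape
        R = self.resolution
        idx = (coords.clamp(0, 1 - 1e-6) * R).long()            # (B, 3, N) voxel indices
        flat = (idx[:, 0] * R + idx[:, 1]) * R + idx[:, 2]      # (B, N) flattened index
        # Voxelize: scatter-average point features into the grid.
        grid = features.new_zeros(B, C, R * R * R)
        count = features.new_zeros(B, 1, R * R * R)
        grid.scatter_add_(2, flat.unsqueeze(1).expand(-1, C, -1), features)
        count.scatter_add_(2, flat.unsqueeze(1),
                           torch.ones_like(flat, dtype=features.dtype).unsqueeze(1))
        grid = grid / count.clamp(min=1)
        voxel = self.voxel_conv(grid.view(B, C, R, R, R))
        # Devoxelize: gather each point's voxel feature (nearest neighbor here;
        # the paper uses trilinear interpolation).
        voxel = voxel.view(B, -1, R * R * R)
        gathered = voxel.gather(2, flat.unsqueeze(1).expand(-1, voxel.shape[1], -1))
        # Fuse coarse voxel features with fine point-wise features.
        return gathered + self.point_mlp(features)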

Here is a recorded video demo (inference runs on a Jetson Nano; the video is rendered on a MacBook):

Links to our project page, paper, and code are below for reference:
Project page: https://pvcnn.mit.edu/
Paper: http://papers.nips.cc/paper/8382-point-voxel-cnn-for-efficient-3d-deep-learning.pdf
Code: https://github.com/mit-han-lab/pvcnn

[1] Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-Voxel CNN for Efficient 3D Deep Learning. In Conference on Neural Information Processing Systems (NeurIPS), 2019.


Great work, very exciting! Thanks for sharing; we have enjoyed following your lab's research since the Temporal Shift Module publication.

Excellent efficiency! I would like to run the code on my Jetson Nano 4GB too. I wonder whether the project you shared on GitHub (https://github.com/mit-han-lab/pvcnn) is exactly the code you ran on your Jetson Nano (ARM)?

My code gets stuck at “importmodule” when I run “python train.py configs/s3dis/pvcnn/area5.py --devices 0,1 --evaluate --configs.evaluate.best_checkpoint_path s3dis.pvcnn.area5.c1.pth.tar” on my Jetson Nano.
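In case it helps narrow things down, here is a quick sanity check I plan to run first (plain PyTorch, nothing specific to this repository). Since the Jetson Nano has only one GPU, I also wonder whether passing “--devices 0,1” is part of the problem:

import torch

# The Jetson Nano exposes a single GPU, so "--devices 0,1" requests
# a second device that does not exist on this board.
print(torch.__version__)
print(torch.cuda.is_available())    # expect True on a working JetPack install
print(torch.cuda.device_count())    # prints 1 on a Jetson Nano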
