Train TAO Toolkit PointPillars object detection model without calibration files

I’ve successfully followed the steps in the PointPillars Jupyter notebook from the TAO Toolkit quick start guide. Some of its cells convert the KITTI dataset (using the calib files) into the format required to train the model. However, the documentation for 3D Object Detection PointPillars states: “PointPillars dataset does not depend on Camera information and Camera calibration.” Maybe I understood this wrong?

My goal is to train a model using only point cloud files with the .bin extension and their label files in KITTI format. Is it possible to train a model without calib files, as suggested by the 3D Object Detection PointPillars documentation? Thank you in advance for any help.
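For reference, this is the kind of input I have: raw KITTI-style velodyne scans, which are flat float32 arrays with 4 values per point (x, y, z, reflectance). A minimal sketch of writing and reading one such .bin file with NumPy (the file name "000000.bin" is just a placeholder):

```python
import numpy as np

# KITTI velodyne scans are flat float32 arrays, 4 values per point:
# x, y, z, reflectance. Write a tiny synthetic scan, then read it back
# the way a PointPillars dataloader would.
scan = np.array([[1.0, 2.0, 0.5, 0.3],
                 [4.0, -1.0, 0.2, 0.9]], dtype=np.float32)
scan.tofile("000000.bin")

points = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
print(points.shape)  # (2, 4)
```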

While experimenting, I prepared a dataset from KITTI using only the velodyne and label files (omitting the changes made with the calib files in the notebook). I was able to convert the dataset as described in the documentation, but an error was thrown when training started.

Command executed:
tao pointpillars train -e $SPECS_DIR/pointpillars.yaml -r $USER_EXPERIMENT_DIR -k $KEY

Output:
INFO: ****Start logging****
INFO: CUDA_VISIBLE_DEVICES=ALL
INFO: Database filter by min points Car: 14081 => 1
INFO: Database filter by min points Pedestrian: 2272 => 0
INFO: Database filter by min points Cyclist: 837 => 0
INFO: Loading point cloud dataset
INFO: Total samples for point cloud dataset: 3712
/opt/conda/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2156.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
INFO: ****Start training****
epochs: 0%| | 0/80 [00:00<?, ?it/s]
Traceback (most recent call last): | 0/928 [00:00<?, ?it/s]
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/scripts/train.py", line 152, in
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/scripts/train.py", line 127, in main
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/tools/train_utils/train_utils.py", line 93, in train_model
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/tools/train_utils/train_utils.py", line 24, in train_one_epoch
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/opt/conda/lib/python3.8/site-packages/torch/_utils.py", line 438, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/pcdet/datasets/general/pc_dataset.py", line 317, in __getitem__
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/pcdet/datasets/dataset.py", line 134, in prepare_data
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/pcdet/datasets/augmentor/data_augmentor.py", line 104, in forward
File "/home/jenkins/agent/workspace/tlt-pytorch-main-nightly/pointcloud/pointpillars/pcdet/datasets/augmentor/database_sampler.py", line 190, in __call__
File "<__array_function__ internals>", line 5, in stack
File "/opt/conda/lib/python3.8/site-packages/numpy/core/shape_base.py", line 422, in stack
raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

2022-08-05 07:42:15,100 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
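The final error in the traceback is raised by NumPy itself, not by TAO: the database sampler ends up calling np.stack on an empty list of sampled ground-truth boxes (note the "filter by min points" counts of 1/0/0 at the top of the log). A minimal reproduction:

```python
import numpy as np

# The augmentor stacks the list of sampled ground-truth boxes; with an
# effectively empty gt database that list is empty, which triggers:
try:
    np.stack([])
except ValueError as err:
    msg = str(err)
print(msg)  # need at least one array to stack
```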

If the calibration files are missing, the gen_lidar_labels.py and gen_lidar_points.py scripts mentioned in the notebook can no longer be run.
The error above, “ValueError: need at least one array to stack”, should be related to the 3D bounding boxes, which are represented as (x, y, z, dx, dy, dz, yaw). Please make sure they are available in the .pkl files.
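One way to act on this advice is to open the generated info .pkl files and verify that every annotation carries a 7-value box. This is only a hedged sketch: the file name ("infos_train.pkl") and the key names ("annos", "gt_boxes_lidar") follow the common OpenPCDet-style layout and are assumptions to adjust against your own generated files. The fake data written first is a stand-in so the snippet runs on its own:

```python
import pickle
import numpy as np

# Stand-in for a real generated info file: one frame with three
# 7-dim boxes (x, y, z, dx, dy, dz, yaw).
fake_infos = [{"annos": {"gt_boxes_lidar": np.zeros((3, 7), dtype=np.float32)}}]
with open("infos_train.pkl", "wb") as f:
    pickle.dump(fake_infos, f)

# The actual check: every entry should provide an (N, 7) box array.
with open("infos_train.pkl", "rb") as f:
    infos = pickle.load(f)
for info in infos:
    boxes = info["annos"]["gt_boxes_lidar"]
    assert boxes.ndim == 2 and boxes.shape[1] == 7, boxes.shape
print("all boxes are 7-dim")
```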

Yes, exactly: I deliberately did not run gen_lidar_labels.py and gen_lidar_points.py, in order to simulate training with a dataset that has no calib files. Is it possible to do this?
I read “PointPillars dataset does not depend on Camera information and Camera calibration.” and thought there would be a way to train PointPillars without those files. My goal is to create a dataset, but I don’t have the calibration files. Thank you.

This might not work, since the calibration files are used to filter out point cloud points that are not in the camera FOV.

Update:
It is possible to train TAO pointpillars without calibration files.

The comment above, “the calibration files are used for filtering the point cloud not in fov”, applies only to the KITTI dataset.
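For context, the KITTI-specific FOV filter that the calib files enable can be sketched as follows: project each lidar point into the camera image and keep only points that land inside it. The projection matrix and image size below are toy stand-ins, not real KITTI calib values; in the real pipeline the matrix comes from the calib file's P2 @ R0_rect @ Tr_velo_to_cam chain:

```python
import numpy as np

def fov_filter(points, proj, img_w, img_h):
    """Keep only points that project inside a (img_w x img_h) image."""
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])
    cam = xyz1 @ proj.T                       # homogeneous image coords (u*z, v*z, z)
    in_front = cam[:, 2] > 0                  # drop points behind the camera
    uv = cam[:, :2] / np.clip(cam[:, 2:3], 1e-6, None)
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return points[in_front & in_img]

# Toy 3x4 projection: camera looks along lidar +x, made-up intrinsics.
proj = np.array([[600., -700.,    0., 0.],
                 [200.,    0., -700., 0.],
                 [  1.,    0.,    0., 0.]])
pts = np.array([[10.,  0., 0., 0.5],   # ahead of the camera, inside the image
                [-5.,  0., 0., 0.5],   # behind the camera
                [10., 20., 0., 0.5]])  # far to the side, off the image
kept = fov_filter(pts, proj, img_w=1242, img_h=375)
print(len(kept))  # 1
```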

My goal is to train PointPillars without calibration files, since the lidar sensor we’re going to use to generate the dataset doesn’t provide those files.

So if not the KITTI dataset, what dataset structure should I take as an example? Do you have a guide for training TAO PointPillars without calibration files? Thank you.
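From what I can tell, the general (non-KITTI) layout described in the PointPillars documentation looks roughly like the tree below, with point clouds as float32 .bin files and KITTI-style label .txt files whose 3D boxes are expressed in lidar coordinates. Treat the exact folder names as an assumption to verify against the TAO documentation:

```text
dataset/
├── train/
│   ├── lidar/   # point clouds, .bin, float32 (x, y, z, intensity)
│   └── label/   # KITTI-style .txt labels, boxes in lidar coordinates
└── val/
    ├── lidar/
    └── label/
```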

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Refer to PointPillars — TAO Toolkit 3.22.05 documentation

No, unfortunately such a guide is not available.
I suggest you follow the guide above as well as the PointPillars section: 3D Object Detection — TAO Toolkit 3.22.05 documentation

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.