Hello NVIDIA Researchers and Developers,
My name is Susmitha Bollimuntha, and I am currently working on autonomous vehicle perception projects. I am trying to reproduce results from the paper:
“NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving” (Popov et al., NVIDIA).
Specifically, I am focusing on the free-space detection part. I have tried to follow the methods described in the paper, but my implementation does not produce the expected results.
What I tried:
- Implemented preprocessing for radar inputs
- Built a DNN based on the paper’s description
- Tested on the nuScenes Mini and Trainval splits
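For context on the preprocessing step, here is a minimal sketch of the kind of input encoding I implemented: rasterizing accumulated radar detections into a top-down BEV grid. The grid size, extent, and feature layout here are my assumptions, not values from the paper.

```python
import numpy as np

def rasterize_radar_bev(points, grid_size=256, extent=100.0):
    """Hypothetical sketch: rasterize accumulated radar detections
    (x, y, rcs) into a top-down BEV grid. Grid size, metric extent,
    and max-RCS pooling are assumptions for illustration only.

    points: (N, 3) array of x, y (metres, ego frame) and RCS.
    Returns a (1, grid_size, grid_size) float32 grid of max RCS per cell.
    """
    grid = np.zeros((1, grid_size, grid_size), dtype=np.float32)
    scale = grid_size / (2.0 * extent)  # cells per metre
    for x, y, rcs in points:
        col = int((x + extent) * scale)
        row = int((y + extent) * scale)
        if 0 <= row < grid_size and 0 <= col < grid_size:
            # Keep the strongest return per cell
            grid[0, row, col] = max(grid[0, row, col], rcs)
    return grid
```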
Problem:
- I am getting 77% accuracy, whereas the paper reports 89%.
My questions:
- Are there implementation details for free-space detection that are not obvious from the paper?
- Is there a reference implementation or snippet available?
- Any advice on radar input formatting and training setup?
Any guidance would be deeply appreciated. Thank you for your time and for the great work with NVRadarNet.
Best,
Susmitha Bollimuntha
Hi @susmithavijay6666 , I’m happy to help you with this case.
Do you have an ONNX model for the DNN that you’ve created? Also, it would be useful if you could share your system configuration and hardware details.
Thank you
Hi,
Thank you for your response and for offering to help.
Below are my system and hardware details:
Hardware:
- GPU: NVIDIA A100 80GB PCIe (4 GPUs)
- CPU: Intel Xeon Gold 6346 @ 3.10 GHz
- RAM: 251 GB
Software:
- OS: Ubuntu 20.04.6 LTS
- NVIDIA Driver: 535.247.01
- CUDA Toolkit: 12.2
- PyTorch: 2.4.1 + CUDA 12.1
- Python: 3.8.10
- cuDNN: 9.1.0 (enabled via PyTorch)
Regarding the ONNX model: I have not exported the network to ONNX yet. I can prepare and share the ONNX model once you confirm what aspects you would like to inspect (architecture, output heads, or post-processing).
Please let me know how you would like to proceed.
Thank you
Hi @susmithavijay6666, Thanks for sharing the details.
It is highly unlikely that the lower accuracy (77% vs 89%) is related to TensorRT.
Please first verify that your model implementation matches the paper.
The NVRadarNet paper mentions:
“Next, we perform ego-motion compensation for the accumulated detections to the latest known vehicle position. We propagate the older points using the known ego-motion of our vehicle to estimate where they will be at the time of the DNN inference (current time).”
Please verify that you are correctly handling this ego-motion compensation.
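As a reference point, the compensation step can be sketched as a simple change of frames, assuming you have 4x4 world-from-vehicle ego poses at both timestamps. The function and argument names below are illustrative, not from the paper or any NVIDIA codebase:

```python
import numpy as np

def compensate_ego_motion(points_xyz, pose_old, pose_now):
    """Transform radar detections from an older vehicle frame into the
    current vehicle frame using known ego poses.

    points_xyz: (N, 3) detections in the older vehicle frame.
    pose_old, pose_now: 4x4 world-from-vehicle matrices at the old and
    current timestamps (hypothetical inputs for this sketch).
    """
    # Lift to homogeneous coordinates: (N, 4)
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    # old vehicle frame -> world -> current vehicle frame
    transform = np.linalg.inv(pose_now) @ pose_old
    return (transform @ pts_h.T).T[:, :3]
```

If older points are not re-expressed in the current frame this way, the accumulated input tensor smears moving geometry and can easily cost several points of accuracy.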
Even if you share the ONNX model, we cannot fix accuracy discrepancies on the TensorRT side, as the .engine conversion only accelerates the existing model logic. I recommend double-checking your input generation pipeline and loss function weights against the paper’s methodology first.
Thank you.