Train/Infer Sparse Models on TX2?

I’m trying to run inference with a sparse model on a Jetson TX2 using MinkowskiEngine (https://github.com/NVIDIA/MinkowskiEngine), but it fails with the error discussed at https://forums.developer.nvidia.com/t/tx2-gpu-obsolete-for-pytorch/158330?u=sathya. Any help or guidance would be appreciated.

Hi,

Please check this comment for instructions on adding the TX2 GPU architecture and recompiling the library.
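As a rough sketch of what "adding the TX2 GPU architecture" typically means: the Jetson TX2's GPU has CUDA compute capability 6.2 (Pascal), so the library's CUDA kernels need to be compiled for `sm_62`. Since MinkowskiEngine builds through PyTorch's C++/CUDA extension machinery, setting `TORCH_CUDA_ARCH_LIST` before building is one common way to do this. This is an illustrative recipe, not the exact steps from the linked comment; paths and flags may differ on your setup.

```shell
# Assumption: the failure is PyTorch/MinkowskiEngine being built without
# the TX2's compute capability. Target sm_62 (Jetson TX2, Pascal).
export TORCH_CUDA_ARCH_LIST="6.2"

# Optional: limit parallel compile jobs to avoid exhausting the TX2's RAM.
export MAX_JOBS=2

# Rebuild MinkowskiEngine from source so its CUDA kernels target sm_62.
git clone https://github.com/NVIDIA/MinkowskiEngine.git
cd MinkowskiEngine
python setup.py install
```

If the underlying PyTorch wheel itself was built without `sm_62` support (as the linked forum thread suggests), PyTorch may also need to be reinstalled from a Jetson-compatible build before recompiling MinkowskiEngine.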

Thanks.