I’m trying to run inference with a sparse model built on MinkowskiEngine (https://github.com/NVIDIA/MinkowskiEngine) on a Jetson TX2, but it fails with the error discussed in https://forums.developer.nvidia.com/t/tx2-gpu-obsolete-for-pytorch/158330?u=sathya. Any help or guidance would be appreciated.
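For context, here is the quick check I use to confirm the problem. This is just a diagnostic sketch and assumes the failure is the compute-capability mismatch described in the linked thread, i.e. that the installed PyTorch build does not ship kernels for the TX2's sm_62 architecture:

```python
# Diagnostic sketch (assumption: the failure is the sm_62 / compute-capability
# mismatch described in the linked forum thread, not a MinkowskiEngine bug).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    # The Jetson TX2 GPU reports compute capability 6.2 (sm_62).
    print("Device capability:", torch.cuda.get_device_capability(0))
    # Architectures the installed PyTorch build was compiled for; if sm_62
    # is missing from this list, CUDA kernels will not load on the TX2.
    print("Built-in arch list:", torch.cuda.get_arch_list())
```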