PyTorch Distributed

Is there any way to get distributed PyTorch running, or do I need to build it from source? I installed the latest version following the documentation at Installing PyTorch for Jetson Platform - NVIDIA Docs, but now I get

>>> import torch
>>> torch.distributed.is_available()
False
>>>

When I try to install PyTorch from the pytorch.org website instead, CUDA becomes unavailable.
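
For reference, here is the quick check I'm using to see what a given wheel was actually built with (nothing Jetson-specific, just standard torch attributes):

import torch

# torch.version.cuda is None if the wheel was built without CUDA support
print("torch version:   ", torch.__version__)
print("built with CUDA: ", torch.version.cuda)
print("CUDA available:  ", torch.cuda.is_available())
print("distributed:     ", torch.distributed.is_available())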

Also, the .whl file download URL in that documentation gives a 404.


IIRC the last pre-built PyTorch wheel for JetPack with USE_DISTRIBUTED was PyTorch 1.11:

PyTorch v1.11.0

For PyTorch versions newer than that, you would need to build it from source (instructions are at that link).
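
After a source build with USE_DISTRIBUTED=1, a quick way to confirm what ended up in the wheel is to query the standard torch.distributed helpers (the per-backend checks only exist on a distributed-enabled build, hence the guard):

import torch.distributed as dist

# dist.is_available() is False on a wheel built without USE_DISTRIBUTED;
# the backend checks below are only defined when distributed support is compiled in
print("distributed available:", dist.is_available())
if dist.is_available():
    print("  NCCL:", dist.is_nccl_available())
    print("  Gloo:", dist.is_gloo_available())
    print("  MPI: ", dist.is_mpi_available())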

Ah, that sucks. I'll try to build the latest version for CUDA 12.1 with all the distributed stuff enabled, then.
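
If the build goes through, my plan is to sanity-check it with a single-process init before trying anything multi-GPU, roughly like this (gloo over local TCP so it doesn't depend on NCCL; the port is arbitrary):

import torch
import torch.distributed as dist

# Single-process smoke test: rank 0 of a world of size 1, gloo over local TCP.
# This only exercises the distributed plumbing, not multi-GPU NCCL.
dist.init_process_group(
    backend="gloo",
    init_method="tcp://127.0.0.1:29500",
    rank=0,
    world_size=1,
)

t = torch.ones(4)
dist.all_reduce(t)  # trivial with a single rank, but proves the collective API works
print("all_reduce result:", t)

dist.destroy_process_group()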
