Hi there, I’m working with a Jetson Orin NX and PyTorch. I followed the instructions here (Installing PyTorch for Jetson Platform - NVIDIA Docs) to install PyTorch and found that “torch.distributed.is_available()” returns “False”. If I instead install PyTorch with conda (conda install pytorch -c pytorch -c nvidia), “torch.distributed.is_available()” is “True”, but “torch.cuda.is_available()” is “False”.
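For anyone hitting the same issue, a quick hedged sketch of how you might check which features your torch build exposes (the function name check_torch_features is my own, and it guards the import so it also runs on a machine without torch):

```python
import importlib.util

def check_torch_features():
    """Report whether torch is importable and which backends it was built with."""
    if importlib.util.find_spec("torch") is None:
        return {"installed": False}
    import torch
    return {
        "installed": True,
        "cuda": torch.cuda.is_available(),          # False on non-CUDA builds
        "distributed": torch.distributed.is_available(),  # False if built without USE_DISTRIBUTED
    }

print(check_torch_features())
```

Running this under each install (NVIDIA wheel vs. conda) makes the trade-off described above visible at a glance.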
Unfortunately, I need both CUDA and the distributed module. I’m trying to deploy an mmpose model (GitHub - open-mmlab/mmpose: OpenMMLab Pose Estimation Toolbox and Benchmark) and its dependencies (mmengine, mmdeploy, mmcv) depend heavily on both CUDA and torch.distributed.
How can I fix this problem? Many thanks.
Could you try the container mentioned below:
@zergzzlun if you prefer installing the wheel as opposed to using the container, I believe this was the last PyTorch wheel built with USE_DISTRIBUTED enabled:
- JetPack 5.0 (L4T R34.1) / JetPack 5.0.2 (L4T R35.1) / JetPack 5.1 (L4T R35.2.1) / JetPack 5.1.1 (L4T R35.3.1)
For newer versions, you would need to build PyTorch from source with distributed mode enabled (see this post for build instructions: PyTorch for Jetson).
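As a rough sketch of what such a source build involves, the environment flags below are the kind consulted by PyTorch’s setup.py at build time; the exact set for your JetPack/CUDA combination may differ, so treat this as an assumption and follow the linked post for the authoritative steps:

```python
import os
import subprocess

# Hedged sketch: build flags read by PyTorch's setup.py. Adjust per the
# PyTorch for Jetson forum post for your JetPack release.
env = dict(
    os.environ,
    USE_DISTRIBUTED="1",  # enable torch.distributed
    USE_CUDA="1",         # keep CUDA support for the Jetson GPU
    USE_NCCL="0",         # NCCL is not supported on Jetson; gloo is used instead
    MAX_JOBS="4",         # limit parallel compile jobs to avoid OOM on the Orin NX
)

# Run from inside a pytorch source checkout (uncomment to actually build):
# subprocess.run(["python3", "setup.py", "bdist_wheel"], env=env, check=True)
```

The resulting wheel lands in the dist/ directory of the checkout and can be installed with pip.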
Also, the reason you get torch.cuda.is_available() = False when you install with conda is that those upstream wheels from PyPI/conda/pip were not built for JetPack or with CUDA enabled.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.