Any PyTorch versions supporting torch.distributed and the NCCL backend on Jetson Orin Nano?

Hi,

We gave it a try on a device with JetPack 5.1.4, using the script shared in the comment below:

It works correctly and the GPU is enabled.
Could you double-check it on your side?

$ sudo chmod +x build_openMPI.sh 
$ ./build_openMPI.sh 
$ export CUDA_HOME="/usr/local/cuda"
$ export UCX_HOME="/usr/local/ucx"
$ export OMPI_HOME="/usr/local/ompi"
$ export PATH="${CUDA_HOME}/bin:${UCX_HOME}/bin:${OMPI_HOME}/bin:$PATH"
$ export LD_LIBRARY_PATH="${CUDA_HOME}/lib64:${UCX_HOME}/lib64:${OMPI_HOME}/lib64:$LD_LIBRARY_PATH"
$ ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
mca:mpi:base:param:mpi_built_with_cuda_support:value:true
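
The ompi_info output above confirms that OpenMPI was built with CUDA support. To verify torch.distributed itself on top of that stack, a small sanity-check script like the following can be launched with mpirun. This is only a sketch: it assumes your PyTorch wheel was built with MPI support enabled, and the file name check_dist.py is just an example.

# check_dist.py - minimal torch.distributed sanity check (illustrative sketch)
import torch
import torch.distributed as dist

# Report which distributed backends this PyTorch build supports
print("CUDA available:", torch.cuda.is_available())
print("NCCL available:", dist.is_nccl_available())
print("MPI available:", dist.is_mpi_available())

if dist.is_mpi_available():
    # With the MPI backend, rank and world size come from the MPI launcher (mpirun)
    dist.init_process_group(backend="mpi")
    rank = dist.get_rank()
    world = dist.get_world_size()

    # A simple all_reduce on a GPU tensor exercises the CUDA-aware MPI path
    t = torch.ones(1, device="cuda") * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}/{world}: all_reduce sum = {t.item()}")

    dist.destroy_process_group()

It can then be run with, for example:

$ mpirun -np 2 python3 check_dist.py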

Thanks.