JetPack with CUDA-aware OpenMPI could be the default

I am using JetPack 4.6 on a few Xavier NX boards that form a small cluster, and I noticed that the OpenMPI currently installed is not CUDA-aware capable, thus requiring recompilation.
Maybe it could be considered for future JetPack releases?

======== UPDATE ========

I was trying to compile OpenMPI with CUDA-aware support by following the documentation (OpenMPI Build CUDA); however, GDRCopy is not meant to be used with Tegra, according to this post: GitHub GDRCopy.

Mat Colgrove also mentions that UCX is not necessary for CUDA-aware OMPI to work; see this SO post.

You can see in the OMPI docs that building UCX with CUDA support already points to GDRCopy at configure time (./configure --prefix=/path/to/ucx-cuda-install --with-cuda=/usr/local/cuda --with-gdrcopy=/usr). Since GDRCopy will not compile on Tegra, I assume it can be omitted.
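For reference, a sketch of what the UCX configure line might look like on Tegra with GDRCopy left out — this is just the documented command minus the `--with-gdrcopy` flag, not something verified on JetPack:

```shell
# Sketch: UCX configure for CUDA support, omitting GDRCopy on Tegra.
# Prefix path is the placeholder from the OMPI docs; adjust to taste.
./configure --prefix=/path/to/ucx-cuda-install --with-cuda=/usr/local/cuda
```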

When recompiling OpenMPI, should I stick to the default 2.1 version (flagged as retired on the OpenMPI page) that comes with Ubuntu 18.04 in JetPack 4.6, or is it OK to go to 4.1? If you have any suggestions, feel free to comment.

======= UPDATE 2 ========

I managed to compile UCX version 1.11 (1.6, as suggested by the link above, is a no-go) and then OpenMPI 2.1.1 from the tarballs, both with CUDA support.
When compiling and running the compile-time/run-time checker program from the CUDA-aware support FAQ, it reports CUDA-awareness at compile time, but not at run time.

Checking the mpi-ext.h header that it needs (which the OMPI compilation installed in another directory, so I had to fix some symlinks for mpicc to find it): for the compile-time check, the program looks at the macro MPIX_CUDA_AWARE_SUPPORT, defined with value 1 in mpiext_cuda_c.h (the JetPack 4.6 factory version has value 0). However, the function MPIX_Query_cuda_support() does not return 1, so the run-time check for CUDA-awareness (which I believe is what actually matters) fails.
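To see which value the installed headers actually carry, one way is to grep the extension header directly. This is only a sketch: the include path below is an assumption and depends on your install prefix.

```shell
# Inspect the compile-time CUDA-awareness flag in the installed headers.
# The path is an assumption; adjust it to your actual OMPI install prefix.
grep -R "MPIX_CUDA_AWARE_SUPPORT" /usr/local/ompi/include/
```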

If anyone had luck with cuda-awareness with Tegra, let me know.


I just checked OpenMPI's documentation.
It seems most of it targets dGPUs.

Would you mind double-checking with the OpenMPI team to see if they support the integrated GPU first?


I have just posted the question on the OMPI GitHub page and will update here as soon as they reply there.
It could very well be the case, just as with GDRCopy.

======= QUICK UPDATE (11/01/2022) =======

One of OpenMPI's contributors included Tommy Janjusic in the conversation; he seems to be an NVIDIA engineer working on the library, so I am just waiting for him to step in and provide some insight. Here.


Thanks for checking this with the OpenMPI team.

We are going to compile the library on Jetson to see if there is any quick fix for the issue.
Will share more information with you later.



We can build OpenMPI+CUDA on JetPack 4.6 without issues.
Below are our build steps for your reference:

1. Set environment

$ export CUDA_HOME="/usr/local/cuda"
$ export UCX_HOME="/usr/local/ucx"
$ export OMPI_HOME="/usr/local/ompi"
$ export PATH="${CUDA_HOME}/bin:$PATH"
$ export PATH="${UCX_HOME}/bin:$PATH"
$ export PATH="${OMPI_HOME}/bin:$PATH"

2. Install UCX

$ git clone https://github.com/openucx/ucx.git
$ cd ucx/
$ git clean -xfd
$ ./autogen.sh
$ mkdir build
$ cd build
$ ../configure --prefix=$UCX_HOME --enable-debug --with-cuda=$CUDA_HOME --enable-mt --disable-cma
$ make
$ sudo make install

3. Install MPI

$ git clone https://github.com/open-mpi/ompi.git
$ cd ompi/
$ git submodule update --init --recursive
$ sudo apt-get install -y pandoc
$ ./autogen.pl
$ mkdir build
$ cd build
$ ../configure --with-cuda=$CUDA_HOME --with-ucx=$UCX_HOME
$ make
$ sudo make install

4. Verify

$ ompi_info -a | grep "\-with\-cuda"
Configure command line: '--with-cuda=/usr/local/cuda' '--with-ucx=/usr/local/ucx'
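Besides grepping the configure line, ompi_info also exposes a build-time CUDA flag as an MCA parameter. A quick check, assuming the ompi_info on PATH is the freshly built one, could be:

```shell
# Query OpenMPI's build-time CUDA support flag via its MCA parameters.
# Assumes the newly built ompi_info is first on PATH; a CUDA-enabled
# build should report a line ending in ":value:true".
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```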


AastaLLL, first of all, thanks for providing this step-by-step.
I tried it and, with some patience, everything compiled and installed on a Xavier NX. It does, however, require explicitly using mpic++.openmpi and mpiexec.openmpi to compile/run; otherwise, the plain mpic++/mpiexec will not find the libs and will complain about unresolved symbols.
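As a side note, on Debian/Ubuntu the plain mpicc/mpic++/mpiexec names are managed by the alternatives system, so one possible way to make them resolve to the OpenMPI variants is the following sketch (untested here; the alternative group names are an assumption about the Debian packaging):

```shell
# Point the generic MPI wrappers at the OpenMPI variants via the
# Debian alternatives system (group names "mpi"/"mpirun" assumed).
sudo update-alternatives --config mpi
sudo update-alternatives --config mpirun
```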

When you compile/run the test prog below, what does it say for you?

#include <stdio.h>
#include "mpi.h"
#include "mpi-ext.h" /* Needed for CUDA-aware check */

int main(int argc, char *argv[])
{
    printf("Compile time check:\n");
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
    printf("This MPI library has CUDA-aware support.\n");
#elif defined(MPIX_CUDA_AWARE_SUPPORT) && !MPIX_CUDA_AWARE_SUPPORT
    printf("This MPI library does not have CUDA-aware support.\n");
#else
    printf("This MPI library cannot determine if there is CUDA-aware support.\n");
#endif /* MPIX_CUDA_AWARE_SUPPORT */

    printf("Run time check:\n");
#if defined(MPIX_CUDA_AWARE_SUPPORT)
    if (1 == MPIX_Query_cuda_support()) {
        printf("This MPI library has CUDA-aware support.\n");
    } else {
        printf("This MPI library does not have CUDA-aware support.\n");
    }
#else /* !defined(MPIX_CUDA_AWARE_SUPPORT) */
    printf("This MPI library cannot determine if there is CUDA-aware support.\n");
#endif /* MPIX_CUDA_AWARE_SUPPORT */

    return 0;
}
If it says that it is not CUDA-aware at compile/run time, it may be that the "mpi.h" and "mpi-ext.h" being picked up are the wrong ones.


We can get compile-time CUDA support, but somehow MPIX_Query_cuda_support() returns false.
Let us check this further. We will share more information with you later.

$ mpic++ test.cpp -o test
$ ./test
Compile time check:
This MPI library has CUDA-aware support.
Run time check:
This MPI library does not have CUDA-aware support.


@AastaLLL, thanks for your time and patience looking into all of this.
From the OMPI discussions, it seems that this function only reports at run time whether OMPI was built with CUDA; it isn't really testing the functionality. For your own reference, see this thread.

I am integrating both dGPUs and Tegras in my OMPI project and hope to use the same CUDA-aware code for host-device-host data copies. Let me do this so I can accept your answer and we can close this: I will write a minimal program that MPI-sends some data from one Tegra device to another Tegra device and see if it actually works despite what MPIX_Query_cuda_support() says.
I will update as soon as I have tested it.

@AastaLLL, I compiled and installed OpenMPI/UCX on my Jetsons as you described, then wrote a small program to test CUDA-awareness by copying contents from the device in mpi_rank 0 to the device in mpi_rank 1. It doesn't work: MPI complains about a bad address, which goes away when I restrict the copy to host memory to host memory. Please see below; I hope it helps other people try this on their Tegra clusters:

#include <cstdio>
#include <mpi.h>

__global__ void print_val(float *data, const int LEN);

int main(int argc, char **argv)
{
	const int	LENGTH		= 32;
	int			mpi_rank	= 0,
				mpi_size	= 0;
	float		host_data[LENGTH],
				*dev_data	= nullptr;

	MPI_Init(&argc, &argv);

	MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
	MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);

	if(mpi_rank == 0)
		for(int i = 0; i < LENGTH; i++)
			host_data[i] = (float) i * 0.5f;

	cudaMalloc((void **) &dev_data, LENGTH * sizeof(float));
	cudaMemset(dev_data, 0, LENGTH * sizeof(float));

	if(mpi_rank == 0)
	{
		cudaMemcpy(dev_data, host_data, LENGTH * sizeof(float), cudaMemcpyHostToDevice);
		MPI_Send(dev_data, LENGTH, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
	}

	if(mpi_rank == 1)
	{
		MPI_Recv(dev_data, LENGTH, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		//MPI_Recv(host_data, LENGTH, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); // use these two lines instead
		//cudaMemcpy(dev_data, host_data, LENGTH * sizeof(float), cudaMemcpyHostToDevice); // if receiving in host_data
		print_val <<< 1, 1 >>> (dev_data, LENGTH);
		cudaDeviceSynchronize();
	}

	cudaFree(dev_data);
	MPI_Finalize();

	return 0;
}

__global__ void print_val(float *data, const int LEN)
{
	printf("%.5f\n", data[LEN - 1]);
}

I compiled it with the following lines:

nvcc -Xcompiler -Wall -c -o cuda_aware.o cuda_aware.cu -I/usr/lib/aarch64-linux-gnu/openmpi/include
mpic++.openmpi -o cuda_aware cuda_aware.o -I/usr/local/cuda-10.2/include -L/usr/local/cuda-10.2/lib64 -lcudart

Then I run with:

mpiexec.openmpi --hostfile ~/MPI_Nodes.txt --map-by ppr:1:node --mca btl_tcp_if_include ./cuda_aware

MPI_Nodes.txt is my configuration file for MPI and it has the nodes of the cluster, and my ssh environment is already configured so the process will fire on the remote node without issues.

Notice that each node has an array of floats, with rank 0 initializing it to some values; then all nodes allocate space on the device. Rank 0 copies this initialized array to its device memory and tries to send it to rank 1's device memory. If you want to receive in host memory instead, uncomment the corresponding lines in rank 1 (but it won't work either, because the bad address occurs when sending from the device memory in rank 0). In the end, rank 1 should print the last element from its device memory.

If you have a couple of Jetsons ready to use with MPI, try all the combinations you want; it will only work when copying from host memory to host memory (that is, no CUDA-awareness).
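Until device-to-device sends work on Tegra, the usual workaround is to stage the transfer through host buffers explicitly (device-to-host copy, plain MPI send/recv of host memory, host-to-device copy). Below is a minimal sketch of that pattern; it mirrors the test program above but never hands MPI a device pointer, so it does not rely on CUDA-awareness at all:

```cpp
// Sketch: CUDA + MPI without CUDA-awareness, staging through host buffers.
// Compile as a .cu file with nvcc and link with the MPI wrapper, as above.
#include <cstdio>
#include <mpi.h>

int main(int argc, char **argv)
{
	const int LENGTH = 32;
	int       mpi_rank = 0;
	float     host_data[LENGTH], *dev_data = nullptr;

	MPI_Init(&argc, &argv);
	MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);

	cudaMalloc((void **) &dev_data, LENGTH * sizeof(float));

	if (mpi_rank == 0) {
		for (int i = 0; i < LENGTH; i++)
			host_data[i] = (float) i * 0.5f;
		// Pretend the data was produced on the GPU ...
		cudaMemcpy(dev_data, host_data, LENGTH * sizeof(float), cudaMemcpyHostToDevice);
		// ... stage it back to host memory before handing the buffer to MPI.
		cudaMemcpy(host_data, dev_data, LENGTH * sizeof(float), cudaMemcpyDeviceToHost);
		MPI_Send(host_data, LENGTH, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
	} else if (mpi_rank == 1) {
		// Receive into host memory, then copy up to the device.
		MPI_Recv(host_data, LENGTH, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		cudaMemcpy(dev_data, host_data, LENGTH * sizeof(float), cudaMemcpyHostToDevice);
		printf("rank 1 received last element: %.5f\n", host_data[LENGTH - 1]);
	}

	cudaFree(dev_data);
	MPI_Finalize();
	return 0;
}
```

The extra copies cost bandwidth, of course, but this is exactly the path a CUDA-aware MPI would hide from you, so it is a reasonable fallback while MPIX_Query_cuda_support() reports false.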

Another comment I want to make, this time for the JetPack maintainers: on 4.6 you won't be able to run any CUDA program under cuda-memcheck unless it is done from Docker. cuda-memcheck will say that all devices are busy or unavailable, and I could only fix this after reading this NV forums thread. I believe it should be fixed in the next JP releases, just as it was in previous ones.

Let me know what you think.


Thanks for sharing this information.
We are going to set up another Jetson to see the result from our side.

For the cuda-memcheck issue, this is a limitation in GPU profiling due to some internal security concerns, so I don't think there will be a fix in the upcoming release.
Please run cuda-memcheck with root privileges to get the output.