SPI_IOC_MESSAGE ioctl for SPI extremely slow

I’m trying to command a stepper motor controller through SPI, and I found that I could not send commands as quickly as I was expecting. After some profiling, I narrowed it down to this call:

clock_gettime(CLOCK_MONOTONIC, &ts0); // Check time before
ioctl(fd, SPI_IOC_MESSAGE(1), xfer); // spi_ioc_transfer structure
clock_gettime(CLOCK_MONOTONIC, &ts1); // Check time after

The message I’m trying to send is 6 bytes, and the SPI clock rate is 5 MHz, so the transfer itself should take about 10 microseconds (48 bits / 5 MHz = 9.6 µs) plus a little system call overhead.

Instead, I’m finding that this call usually takes 8 to 22 milliseconds, usually toward the higher end.
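For context, the arithmetic behind that estimate and the elapsed-time calculation look roughly like this (a minimal sketch; the helper names are mine, not part of any API):

```c
#include <stddef.h>
#include <time.h>

/* Expected on-wire time in microseconds for `len` bytes at `speed_hz`
 * (8 bits per byte; ignores system-call and driver overhead). */
static double spi_expected_us(size_t len, unsigned speed_hz)
{
    return (double)len * 8.0 * 1e6 / (double)speed_hz;
}

/* Elapsed time between two CLOCK_MONOTONIC samples, in milliseconds. */
static double elapsed_ms(const struct timespec *t0, const struct timespec *t1)
{
    return (double)(t1->tv_sec - t0->tv_sec) * 1e3 +
           (double)(t1->tv_nsec - t0->tv_nsec) / 1e6;
}
```

spi_expected_us(6, 5000000) works out to 9.6 µs, which is where the ~10 µs figure comes from; elapsed_ms applied to ts0/ts1 is what shows the 8–22 ms.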

For reference, here’s how I do initialization:

static int configure_spi(int fd)
{
    uint8_t  mode = SPI_MODE_3;
    uint8_t  bits = 8;
    uint32_t speed = 5000000;
    uint8_t  lsbfirst = 0;
    check_ioctl(fd, SPI_IOC_WR_MODE, &mode);
    check_ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits);
    check_ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);
    check_ioctl(fd, SPI_IOC_WR_LSB_FIRST, &lsbfirst);
    return 0;
}
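(`check_ioctl` isn’t shown above; a plausible minimal version, which just reports failures instead of silently ignoring them, would be:)

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ioctl.h>

/* Thin wrapper around ioctl() that logs any failure.
 * Returns the ioctl result: >= 0 on success, -1 on error (errno set). */
static int check_ioctl(int fd, unsigned long request, void *arg)
{
    int ret = ioctl(fd, request, arg);
    if (ret < 0)
        fprintf(stderr, "ioctl(0x%lx) failed: %s\n", request, strerror(errno));
    return ret;
}
```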

And here’s the setup for a transaction:

    struct spi_ioc_transfer xfer[1];
    memset(xfer, 0, sizeof(xfer));

    xfer[0].tx_buf = (unsigned long) tbuf;
    xfer[0].rx_buf = (unsigned long) rbuf;
    xfer[0].len           = len;
    xfer[0].speed_hz      = 5000000;
    xfer[0].bits_per_word = 8;

    ioctl(fd, SPI_IOC_MESSAGE(1), xfer);
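One thing worth adding while debugging: SPI_IOC_MESSAGE returns the total number of bytes transferred on success and -1 on error, so the return value is worth checking. A sketch under the same setup as above (the `do_transfer` name and structure are mine):

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

/* Perform one full-duplex transfer and verify the driver moved `len` bytes.
 * Returns 0 on success, -1 on failure. */
static int do_transfer(int fd, const void *tbuf, void *rbuf, uint32_t len)
{
    struct spi_ioc_transfer xfer;
    memset(&xfer, 0, sizeof(xfer));

    xfer.tx_buf        = (unsigned long)(uintptr_t)tbuf;
    xfer.rx_buf        = (unsigned long)(uintptr_t)rbuf;
    xfer.len           = len;
    xfer.speed_hz      = 5000000;
    xfer.bits_per_word = 8;

    int ret = ioctl(fd, SPI_IOC_MESSAGE(1), &xfer);
    if (ret < 0) {
        fprintf(stderr, "SPI_IOC_MESSAGE failed: %s\n", strerror(errno));
        return -1;
    }
    if ((uint32_t)ret != len) {
        fprintf(stderr, "short transfer: %d of %u bytes\n", ret, len);
        return -1;
    }
    return 0;
}
```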

Can anyone tell me how I might go about figuring out what’s wrong and how to fix it? Taking 22 milliseconds for a single SPI transaction is completely intolerable for my application, so I have to fix this.


Hi theosib,

Are you using the devkit or a custom board for the Orin Nano?
What Jetpack version are you using?

Is your issue the delay in the SPI driver?

Please share the full dmesg for further checking.

I’m using the devkit. The jetpack version is 5.1.1-b56. The delay does seem to be related to the SPI driver since it’s just the ioctl for doing the SPI transaction that takes excessive time. The dmesg is attached.

The way you word this makes it sound like a known issue. Is it?

dmesg.txt (84.5 KB)

I thought you were using JP6, where there is a known issue in the spi-tegra114 driver.

Could you update to the latest Jetpack 5.1.3 (L4T R35.5.0) to check whether the issue is still present?

I installed Jetpack 5.1.3, and this seems to have broken SPI entirely, so I really need some help with this since this is a show-stopper for me.

The ioctls are still happening, and the performance is still terrible.

But now no SPI communication seems to be happening anymore. The stepper motor driver chips I’m trying to talk to are no longer doing anything, and they’re not drawing any current.


I just realized that I might have to run jetson-io.py to configure the I/O pins. However, when I try to run that, the curses screen flashes for only a moment, and then I’m returned to the shell. It seems that 5.1.3 broke this utility.

I captured stdout from jetson-io.py, and this is what I get:
FATAL ERROR! No APP_b partition found

I’ve tried googling this, but I’m not finding any answers.

Figuring that the problem might be that I had corrupted the boot flash, I made a fresh one. When I went to run jetson-io.py, I got the exact same error:
No APP_b partition found!

I finally found a discussion about the problem with running jetson-io.py: X264 and TensorRT sudden reboot (MJPG encoder not affected, but not fast enough) on Jetson Orin Nano - #54 by timf34

But it doesn’t seem like this has ever been resolved. It looks like I have no choice but to downgrade back to 5.1.1.

I found the bug in jetson-io.py and implemented a workaround.

In the Jetson subdirectory, there’s another Python source file called board.py.

This bit of code here incorrectly computes the partition name to look for:

        # Finding the active partition in case of redundant rootfs flash.
        activepart = syscall.call_out('nvbootctrl get-current-slot')
        if activepart[0] == '0':
            mountpart = "APP"
        else:
            mountpart = "APP_b"

I hacked it to always set mountpart to “APP”. That worked, and everything’s working again.

This QA snafu cost me a lot of time. You might want to write an app note somewhere to spare others this frustration.
