You can see the time is sometimes up to 2000 us. I then tried spi_async and found its performance is even worse.
The ADC raises a ready signal when data is available, and that signal triggers an interrupt. I need to finish the SPI data transfer within the interval between two interrupts; for example, at a 4 kHz sample rate I must finish the read in less than 250 us, otherwise I will lose data.
I checked the hardware time cost (CS low to high); it takes less than 50 us, so I think the remaining time may be related to the SPI controller driver.
Can I reduce the time cost while using only the system APIs (spi_sync, spi_async)?
How does SPI work with DMA? Can DMA control the SPI transfer directly? That is to say: the interrupt triggers the DMA, the DMA reads data over SPI, and when enough data has accumulated, the DMA triggers a callback so the CPU can handle the data. If I want to implement this idea, should I change the NVIDIA SPI controller driver?
Can I reduce the time cost while using only the system APIs (spi_sync, spi_async)?
Yes, the time cost can be reduced.
How does SPI work with DMA? Can DMA control the SPI transfer directly? That is to say: the interrupt triggers the DMA, the DMA reads data over SPI, and when enough data has accumulated, the DMA triggers a callback so the CPU can handle the data. If I want to implement this idea, should I change the NVIDIA SPI controller driver?
SPI DMA is triggered when the buffer size (BS) is greater than 256 B, and it then controls the transfer itself. If you wish to use PIO mode, you need to limit the buffer size to 256 B. If you are planning to use PIO only, I can provide you a change that disables DMA and runs all transfers in PIO mode.
Thank you, Shane, I get it. So, as you say, I just modify the kernel driver to reduce SPI_FIFO_DEPTH, and that will make it use PIO instead of DMA? If so, I will try it.