Using NVIDIA VPI FFT with a PyTorch tensor

I’m planning to use NVIDIA’s VPI (Vision Programming Interface) to accelerate FFT and IFFT on the Jetson Xavier NX module.

However, it seems that the FFT in the VPI module only supports ‘VPI.Image’ input.
Is it possible to run the VPI library’s FFT on a PyTorch embedding tensor (whose channel dimension is larger than 3), rather than an image?
(e.g. a PyTorch tensor of shape (30, 30, 256), i.e. H, W, C)

If not, is there a way to run FFT faster on the Jetson Nano, even if it is not through the VPI library?

Hi,

Assuming the PyTorch tensor is a contiguous CUDA buffer,
you can wrap the GPU memory in a VPI image with vpiImageCreateWrapper().
https://docs.nvidia.com/vpi/group__VPI__Image.html#ga3e7cf2520dd568a7e7a9a6876ea7995c
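A rough Python-side sketch of this idea is below. It is not verified against a specific VPI release: whether vpi.asimage() accepts CUDA buffers exposing __cuda_array_interface__ (the Python counterpart of vpiImageCreateWrapper()), and the exact name of the FFT method on vpi.Image, are assumptions to check against your VPI version’s documentation.

```python
# Hypothetical sketch, not a verified recipe. Assumes a VPI release whose
# Python bindings can wrap objects exposing __cuda_array_interface__
# (the Python-side counterpart of vpiImageCreateWrapper()).
import torch
import vpi

# A single-channel 2D float32 tensor on the GPU. VPI's FFT works on
# single-plane images, so a (30, 30, 256) tensor would have to be
# processed one H x W channel slice at a time.
t = torch.rand(30, 30, dtype=torch.float32, device="cuda").contiguous()

# Wrap the existing CUDA buffer as a VPI image (zero-copy, assuming the
# wrapper accepts CUDA array-interface objects in your VPI version).
img = vpi.asimage(t)

# Run the FFT on the CUDA backend; the exact method name may differ
# between VPI versions -- check the FFT algorithm page in the VPI docs.
with vpi.Backend.CUDA:
    spectrum = img.fft()
```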

We also provide FFT/IFFT APIs in the cuFFT CUDA library.
You can find a sample in the folder below:

/usr/local/cuda-11.4/samples/7_CUDALibraries/simpleCUFFT
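If staying in PyTorch is acceptable, note that torch.fft already dispatches to cuFFT for CUDA tensors, so a batched 2D FFT over all 256 channels can be done without leaving the tensor. A minimal sketch, with the shape taken from the question:

```python
import torch

# (H, W, C) tensor from the question; torch.fft.fft2 operates on the last
# two dims by default, so move channels first to get a batched 2D FFT.
x = torch.rand(30, 30, 256, device="cuda")
x_chw = x.permute(2, 0, 1).contiguous()          # (C, H, W)

# On CUDA tensors, torch.fft is backed by cuFFT.
spec = torch.fft.fft2(x_chw)                     # complex (C, H, W)
back = torch.fft.ifft2(spec)                     # inverse FFT

# For real-valued input, rfft2/irfft2 roughly halve the work and memory.
rspec = torch.fft.rfft2(x_chw)                   # (C, H, W//2 + 1)
rback = torch.fft.irfft2(rspec, s=x_chw.shape[-2:])
```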

Thanks.
