Description
Hi All,
My model has some FFT and IFFT operations in the middle of my deep learning model. I would like to convert the model to TensorRT for high-throughput inference, but I run into this problem:
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator ‘aten::fft_fft2’ to ONNX opset version 17 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
My system: torch 2.7.0, TensorRT 10.3, on a Jetson AGX Orin with JetPack 6.2.
Is the only option to create a custom plugin? That would be difficult for me :(
Any help is appreciated. Thank You!
Environment
TensorRT Version: 10.3
GPU Type: Jetson AGX Orin
Nvidia Driver Version:
CUDA Version: 12.6
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.7
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
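For reference, a minimal script along these lines should reproduce the failing export (the `FFTBlock` model here is a small stand-in I made up, mimicking FFT/IFFT ops sandwiched inside a network — not the actual model from the question):

```python
import torch

class FFTBlock(torch.nn.Module):
    # Stand-in model: conv -> fft2/ifft2 -> conv, mimicking FFT operations
    # in the middle of a deep learning model.
    def __init__(self):
        super().__init__()
        self.conv_in = torch.nn.Conv2d(3, 3, 3, padding=1)
        self.conv_out = torch.nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        x = self.conv_in(x)
        # Round-trip through the frequency domain; .real drops the
        # (numerically zero) imaginary residue of the inverse transform.
        x = torch.fft.ifft2(torch.fft.fft2(x)).real
        return self.conv_out(x)

model = FFTBlock().eval()
dummy = torch.randn(1, 3, 64, 64)

try:
    torch.onnx.export(model, dummy, "fft_block.onnx", opset_version=17)
    print("export succeeded")
except Exception as e:
    # On the reported setup (torch 2.7) this raises
    # torch.onnx.errors.UnsupportedOperatorError for aten::fft_fft2.
    print(type(e).__name__)
```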
I understand your concern about converting your PyTorch model to TensorRT for high-throughput inference on the Jetson AGX Orin. The torch.onnx.errors.UnsupportedOperatorError you’re encountering means that PyTorch’s ONNX exporter has no mapping for the aten::fft_fft2 operator at opset version 17.
To resolve this issue, you have a few options:
- Create a custom plugin: As you mentioned, creating a custom plugin is one possible solution. This would involve implementing a TensorRT plugin (typically wrapping cuFFT) for the FFT and IFFT operations. However, as you noted, this can be a challenging task, especially if you’re not familiar with the TensorRT plugin API.
- Use a different exporter or opset: ONNX opset 17 actually introduced a DFT operator, but PyTorch’s default TorchScript-based exporter has no mapping from aten::fft_fft2 to it. The newer dynamo-based exporter (torch.onnx.export(..., dynamo=True) in torch 2.x) has broader operator coverage and may be able to export the FFT as a DFT node. Note, however, that even if the export succeeds, TensorRT’s ONNX parser does not (to my knowledge) support DFT, so the engine build can still fail.
- Use a different model conversion tool: Instead of going through ONNX, you can try torch2trt, which converts PyTorch modules to TensorRT layer by layer. Keep in mind that such tools are still limited by what TensorRT itself supports, so the FFT will likely hit the same wall without a plugin (and onnx-tensorrt parses the same ONNX graph, so it won’t help if the export itself fails).
- Modify your model: Note that torch.fft.fft2 is the aten::fft_fft2 operator, so switching between torch.fft functions won’t help. What can help is rewriting the transform in terms of operators ONNX does support: for fixed, moderate sizes, a 2-D DFT can be expressed as two matrix multiplications with precomputed DFT matrices, keeping the real and imaginary parts in separate tensors.
- Split the model at the FFT: TensorRT does not ship a built-in FFT layer, so a pragmatic alternative is to partition the network into two engines, run the parts before and after the FFT in TensorRT, and execute the FFT itself in between with torch.fft (or cuFFT) on the GPU.
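To illustrate the “modify your model” option, here is a minimal sketch (function names are mine, not a library API) of expressing fft2 of a real input as plain matrix multiplications, which export to ordinary MatMul nodes:

```python
import torch

def dft_matrices(n: int, dtype=torch.float32):
    # Real and imaginary parts of the N x N DFT matrix
    # W[j, k] = exp(-2*pi*i*j*k / N) = cos(...) - i*sin(...).
    k = torch.arange(n, dtype=dtype)
    angles = 2 * torch.pi * torch.outer(k, k) / n
    return torch.cos(angles), torch.sin(angles)

def fft2_via_matmul(x: torch.Tensor):
    # 2-D DFT of a real input x of shape (..., H, W), written as two
    # matmuls: FFT2(x) = W_H @ x @ W_W. Real and imaginary parts are
    # returned as separate tensors, since ONNX has no complex dtype.
    h, w = x.shape[-2], x.shape[-1]
    ch, sh = dft_matrices(h, x.dtype)
    cw, sw = dft_matrices(w, x.dtype)
    real = ch @ x @ cw - sh @ x @ sw
    imag = -(ch @ x @ sw + sh @ x @ cw)
    return real, imag
```

This is O(N^2) per axis instead of O(N log N), so it only makes sense for small, fixed spatial sizes, but the resulting graph contains nothing the ONNX exporter or TensorRT would object to.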