I am trying to convert a PyTorch model to TensorRT via ONNX.
I need to convert the 'grid_sampler' function. I am approaching the problem in two ways, and I have a question about each case.
The first approach relies on ATen operator support.
I defined grid_sampler in ONNX symbolic_opset10.py and returned 'at::grid_sampler'. The ONNX model was exported normally, but when building the engine, TensorRT failed with an 'UNSUPPORTED_NODE' error:
ERROR: Failed to parse the ONNX file. In node 28 (parseGraph): UNSUPPORTED_NODE: No importer registered for op: grid_sampler
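For reference, here is a minimal sketch of what such a symbolic typically looks like when it falls back to a raw ATen node (the exact signature and registration below are my assumptions, not necessarily the original code). The exported node survives `torch.onnx.export`, but the TensorRT ONNX parser has no importer for it, which matches the UNSUPPORTED_NODE error above:

```python
import torch
from torch.onnx import register_custom_op_symbolic

def grid_sampler(g, input, grid, mode, padding_mode, align_corners):
    # ATen fallback pattern: emit an "ATen" node carrying the original
    # operator name. The ONNX exporter accepts it, but onnx-tensorrt has
    # no importer registered for it, so engine building fails.
    return g.op("ATen", input, grid, mode, padding_mode, align_corners,
                operator_s="grid_sampler")

# Register the symbolic for aten::grid_sampler starting at opset 10
# (opset number assumed to match the export above).
register_custom_op_symbolic("aten::grid_sampler", grid_sampler, 10)
```

Note that such a model usually only exports cleanly when the ONNX checker is bypassed (e.g. with an ATen-fallback export mode), since the node is not a standard ONNX op.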
In the second approach, I treated the ATen op as a custom op and tried to handle it with a TensorRT plugin. I defined the grid_sampler symbolic so that the returned node is named 'GridSampler', but the ONNX export failed with the checker error below. Perhaps this is an ONNX parser issue, but since the goal is conversion to TensorRT, I am asking here; if you have seen a similar case, please advise.
RuntimeError: No Op registered for GridSampler with domain_version of 10
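That error comes from the ONNX checker, which rejects an unknown op in the default (ai.onnx) domain at opset 10. One way around it is to emit the node into a custom domain instead. The sketch below is an assumption of how this could look; the domain name "trt_plugins" and the attribute names are placeholders and must match whatever your TensorRT plugin expects:

```python
import torch
from torch.onnx import register_custom_op_symbolic
from torch.onnx.symbolic_helper import parse_args

@parse_args("v", "v", "i", "i", "b")
def grid_sampler(g, input, grid, mode, padding_mode, align_corners):
    # Emitting into a custom domain ("trt_plugins::") avoids
    # "No Op registered for GridSampler with domain_version of 10",
    # which is raised when the node sits in the default domain.
    return g.op("trt_plugins::GridSampler", input, grid,
                mode_i=mode, padding_mode_i=padding_mode,
                align_corners_i=int(align_corners))

register_custom_op_symbolic("aten::grid_sampler", grid_sampler, 10)
```

You would then export with something like `torch.onnx.export(..., opset_version=10, custom_opsets={"trt_plugins": 1})`. On the TensorRT side, a plugin creator whose name matches the node ("GridSampler") has to be registered (e.g. via REGISTER_TENSORRT_PLUGIN); recent versions of the ONNX parser fall back to the plugin registry for nodes they do not recognize.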
TensorRT Version: 22.214.171.124
GPU Type: Quadro P5000
Nvidia Driver Version: 440.64
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.3
PyTorch Version (if applicable): 1.5.0