How to convert 'GridSampler' function from ONNX to TensorRT

Description

I am trying to convert a PyTorch model to TensorRT via ONNX.
The model uses the ‘GridSampler’ function. I am trying to solve the problem with two different approaches, and I have a question about each case.

The first is for ATen operator support.
I defined grid_sampler in ONNX symbolic_opset10.py so that it returns an ‘at::grid_sampler’ node. The ONNX model was created normally, but when building the engine in TensorRT, the parser failed with ‘UNSUPPORTED_NODE’:
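For reference, a symbolic of this kind might look like the sketch below (the parameter list follows aten::grid_sampler in PyTorch 1.5; the exact body the poster used is not shown in the thread, so this is an illustration, not their code):

```python
# Sketch of a symbolic that could be added to torch/onnx/symbolic_opset10.py.
# It emits a node literally named "grid_sampler" in the default ONNX domain;
# TensorRT's ONNX parser has no importer registered for such a node, which is
# exactly what the UNSUPPORTED_NODE error below reports.
def grid_sampler(g, input, grid, interpolation_mode, padding_mode, align_corners):
    return g.op("grid_sampler", input, grid,
                interpolation_mode, padding_mode, align_corners)
```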

ERROR: Failed to parse the ONNX file.
In node 28 (parseGraph): UNSUPPORTED_NODE: No importer registered for op: grid_sampler

Second, I considered the ATen op as a custom op and tried to handle it with a TensorRT plugin. I defined the grid_sampler symbolic so that the node returned to the graph is named ‘GridSampler’, but the ONNX export failed. This may be an ONNX checker issue rather than a TensorRT one, but since the goal is conversion to TensorRT I am asking here; if you have seen a similar case, please advise.

RuntimeError: No Op registered for GridSampler with domain_version of 10

Environment

TensorRT Version: 7.0.0.11
GPU Type: Quadro P5000
Nvidia Driver Version: 440.64
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.3
PyTorch Version (if applicable): 1.5.0

Hi @sungick.kong,

This error states that grid_sampler is not supported by the ONNX parser.
The link below might be useful for understanding the issue.

Thanks!

When I register my own op for GridSampler to export the model from torch to ONNX, it does not work. Any suggestions on how to overcome this issue?

from torch.onnx import register_custom_op_symbolic

# aten::grid_sampler passes the input, the sampling grid, and three flags;
# the flags are dropped here for brevity.
def my_grid_sampler(g, input, grid, interpolation_mode, padding_mode, align_corners):
    return g.op("com.microsoft::GridSample", input, grid)

register_custom_op_symbolic('::grid_sampler', my_grid_sampler, 10)

import torch
import io

class MyModel(torch.nn.Module):
    def forward(self, x, m):
        return torch.grid_sampler(x, m,
            interpolation_mode=0, padding_mode=0, align_corners=True)

x = torch.arange(4*4).view(1, 1, 4, 4).float()

# Create grid to upsample input

d = torch.linspace(-1, 1, 8)
meshx, meshy = torch.meshgrid((d, d))
grid = torch.stack((meshy, meshx), 2)
m = grid.unsqueeze(0) # add batch dim

torch.onnx.export(MyModel(), (x,m), f, opset_version=10, custom_opsets={“com.microsoft”: 1})

Hi, Request you to share the ONNX model and the script so that we can assist you better.

Alongside you can try validating your model with the below snippet

check_model.py

import onnx

filename = "yourONNXmodel"  # path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
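For example (model.onnx is a placeholder path; the --plugins flag, available in recent trtexec builds, loads a plugin library so custom ops can be resolved at parse time):

```shell
# Try parsing and building the model directly; --verbose shows which node fails.
trtexec --onnx=model.onnx --verbose

# If the model contains a custom op backed by a TensorRT plugin, load the
# plugin library first (libgrid_sampler_plugin.so is a hypothetical name).
trtexec --onnx=model.onnx --plugins=libgrid_sampler_plugin.so
```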

Thanks!

Hi,

Sorry, my model is a .pth model and it uses the torch.nn.functional.grid_sample() function, which I can’t convert, so I cannot share an ONNX model.

Thanks.

Same question here. How can this be solved?