Support for gather with dynamic shapes

Description

I want to convert my trained model and optimize inference with TensorRT 8.0.
For this I use the following conversion flow: PyTorch → ONNX → TensorRT

The ONNX model runs successfully with onnxruntime-gpu, but the conversion from ONNX to TensorRT with trtexec fails.

While debugging, I traced the problem to the following original PyTorch code:

import torch

def sample_points(points, idx):
    # points: (N, C) features; idx: indices into the row dimension of points
    idx = idx.view(-1).unsqueeze(1)           # flatten indices to (M, 1)
    index = idx.expand(-1, points.size(-1))   # broadcast to (M, C)
    res = torch.gather(points, 0, index)      # res[i][j] = points[index[i][j]][j]
    return res.reshape(points.size(0), -1, points.size(1))  # (N, M/N, C)

This sample_points function is used as an intermediate operation inside the network. The problem seems to be that I use torch.size() to build the index dimensions, which produces non-static, dynamic-shaped inputs for torch.gather.
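
For reference, here is a minimal standalone sketch of the export (the module wrapper, shapes, and file name below are only illustrative, not my real training code) that reproduces the dynamic-shape situation, since torch.gather is exported as an ONNX GatherElements node (opset 11+):

import torch
import torch.nn as nn

class SamplePoints(nn.Module):
    def forward(self, points, idx):
        index = idx.view(-1).unsqueeze(1).expand(-1, points.size(-1))
        res = torch.gather(points, 0, index)  # becomes GatherElements in the ONNX graph
        return res.reshape(points.size(0), -1, points.size(1))

# Illustrative shapes: (N, C) point features and (N, K) neighbour indices.
points = torch.randn(1024, 3)
idx = torch.randint(0, 1024, (1024, 16))

torch.onnx.export(
    SamplePoints(), (points, idx), "sample_points.onnx",
    input_names=["points", "idx"], output_names=["sampled"],
    dynamic_axes={"points": {0: "num_points"}, "idx": {0: "num_points"}},
    opset_version=11,
)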

Does anyone know how to solve this problem? I really need your help. Many thanks!

Environment

TensorRT Version: 8.0
GPU Type: NVIDIA RTX 2080 Ti
Nvidia Driver Version: 450.51.06
CUDA Version: 11.0
Operating System + Version: Ubuntu 18.04 LTS
Python Version: 3.6.10
PyTorch Version: 1.9.0

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

filename = sys.argv[1]           # path to the ONNX model to validate
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises onnx.checker.ValidationError if the model is invalid
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
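
For example, something along these lines (with the path replaced by your actual model file):

trtexec --onnx=your_model.onnx --verbose
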
Thanks!

Hi, NVES. Thanks for your response!
I checked and validated my model with onnx.checker.check_model and it passes without any warning messages.
However, when I run trtexec, the optimization process fails with the following message:

[W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[E] [TRT] ModelImporter.cpp:720: While parsing node number 81 [GatherElements -> "105"]:
[E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[E] [TRT] ModelImporter.cpp:722: input: "83"
input: "104"
output: "105"
name: "GatherElements_81"
op_type: "GatherElements"
attribute {
  name: "axis"
  i: 0
  type: INT
}
[E] [TRT] ModelImporter.cpp:723: --- End node ---
[E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:1379 In function importGatherElements:
[8] Assertion failed: !isDynamic(daDims) && !isDynamic(idxDims) && "This version of TenosrRT does not support GatherElements on dynamic shapes!"
[E] Failed to parse onnx file
[E] Parsing model failed
[E] Engine creation failed
[E] Engine set up failed

Additionally, I've attached the trtexec --verbose log and the ONNX model.
trt_exec_log.txt (46.8 KB)
model_knn.onnx (155.8 KB)

Hi @mityaginkir,
In the current TRT release, GatherElements does not support dynamic shapes. A fix will be available in a future release.
Please stay tuned for updates.
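
In the meantime, one possible workaround (just a sketch based on your snippet, not an officially validated solution) is to express the row selection with torch.index_select instead of torch.gather; index_select is exported as a plain ONNX Gather node, which, as far as we can tell, does not hit this limitation:

import torch

def sample_points(points, idx):
    # Equivalent to the gather-based version for 2-D points of shape (N, C):
    # each flattened index selects a whole row of points.
    res = torch.index_select(points, 0, idx.view(-1))
    return res.reshape(points.size(0), -1, points.size(1))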

Thanks!

As of 2023, is this feature now supported?
