Importing convolution layers from onnx, with tensor inputs and tensor weights

Hi,

I am trying to import an ONNX model exported from PyTorch that has a conv layer taking tensors for both its input and its weights. TensorRT fails to convert the model:

ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()
failed to parse onnx file

Is this a problem with the ONNX importer or with TensorRT?
If TensorRT does not support such conv layers, is there an example of a custom layer that implements this?
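In case it helps others reproduce this, here is a minimal sketch (module name and shapes are made up for illustration, assuming the standard TorchScript exporter) of the kind of model that produces a Conv node whose second input is a tensor rather than an initializer:

```python
import torch
import torch.nn.functional as F

class DynamicConv(torch.nn.Module):
    """Conv whose filter is a runtime input, not a fixed parameter."""
    def forward(self, x, w):
        # Because w is a graph input (not a registered parameter),
        # the exported ONNX Conv node receives a tensor, not weights,
        # as its second input -- the pattern the parser rejects.
        return F.conv2d(x, w, padding=1)

model = DynamicConv()
x = torch.randn(1, 3, 8, 8)   # NCHW input
w = torch.randn(4, 3, 3, 3)   # (out_ch, in_ch, kH, kW) filter
y = model(x, w)               # shape (1, 4, 8, 8)
```

Exporting this with `torch.onnx.export(model, (x, w), "model.onnx", opset_version=9)` should then hit the `inputs.at(1).is_weights()` assertion in the parser.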

Hello,

TensorRT should be able to parse CNN models.
To help us debug, can you share a small repro containing your conversion source and an ONNX model that demonstrates the error you are seeing?

Can you also provide details on the platform you are using?

Linux distro and version
GPU type
NVIDIA driver version
CUDA version
cuDNN version
Python version [if using Python]
TensorFlow version
TensorRT version

Hi,

Here is an example ONNX model that TensorRT cannot parse.

https://drive.google.com/file/d/1Xq2OEDtVh8ogeWjj-9OHr78io2jYlggM/view?usp=sharing

Here is the output from trtexec:

./trtexec --onnx=model.onnx 
onnx: model.onnx
----------------------------------------------------------------
Input filename:   model.onnx
ONNX IR version:  0.0.3
Opset version:    9
Producer name:    pytorch
Producer version: 0.4
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
While parsing node number 0 [Conv -> "2"]:
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/builtin_op_importers.cpp:517 In function importConv:
[8] Assertion failed: inputs.at(1).is_weights()
failed to parse onnx file
Engine could not be created
Engine could not be created

I have also tried the same with TensorRT 4 but got the same result.
I am using an NVIDIA GTX 970 on Ubuntu 16.04.

Hello,

Per engineering, TensorRT does not currently support convolutions where the weights are tensors. What’s the use case for that?

Hi,

I have a custom network for online detection that requires something like this. Is there any way to create a custom cuDNN layer for TensorRT?

Hello.

Any news from NVIDIA about this feature? I’m also trying to use a tensor as the weights of a convolution. In which release will this feature be implemented?

BR

Hi,

As per the answer above, TensorRT doesn’t currently support this. What I ended up doing is implementing those layers in cuDNN separately. This approach seems to work fine, without much overhead.
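For reference, here is a minimal pure-Python sketch (illustration only, not the actual cuDNN code) of the computation such a separately implemented layer has to perform: a valid (no-padding) 2D cross-correlation where the filter arrives as a runtime tensor rather than a fixed weight.

```python
def conv2d(x, w):
    """Valid 2D cross-correlation.

    x: H x W input (list of lists), w: Kh x Kw filter supplied at
    runtime, exactly like the tensor-weight Conv that TensorRT rejects.
    """
    H, W = len(x), len(x[0])
    Kh, Kw = len(w), len(w[0])
    out = []
    for i in range(H - Kh + 1):
        row = []
        for j in range(W - Kw + 1):
            s = 0.0
            # Accumulate the elementwise product over the filter window.
            for di in range(Kh):
                for dj in range(Kw):
                    s += x[i + di][j + dj] * w[di][dj]
            row.append(s)
        out.append(row)
    return out

result = conv2d([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]],
                [[1, 0],
                 [0, 1]])  # -> [[6.0, 8.0], [12.0, 14.0]]
```

A real implementation would of course do this on the GPU (e.g. `cudnnConvolutionForward` with a filter descriptor pointing at the runtime tensor), per channel and batch element; the loop above only pins down the semantics.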

Hi,

I’m facing a similar issue with the latest TensorRT 6 and the latest converter (GitHub - onnx/onnx-tensorrt: ONNX-TensorRT: TensorRT backend for ONNX) as included in (GitHub - NVIDIA/TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.). So I think such layers still aren’t supported.

Since I don’t have experience with CUDA/cuDNN programming or with custom layers, does anyone (maybe @sergey.hovakimyan) have guidelines to follow?
I’m currently reading the documentation, but some hints in the context of the converter (onnx2trt or trtexec) would be nice.