I use the following code to export an ONNX file, then run trtexec (`trtexec --onnx=tmp.onnx --fp16`) to build a TensorRT engine. At that point a problem arises.
In this code, the conv kernel is a dynamic input, so I cannot replace F.conv2d with nn.Conv2d. It seems that TensorRT only supports fixed (constant) convolution weights. Is there any way to handle a dynamic kernel when converting F.conv2d to TensorRT? I would be very grateful for any help.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv(nn.Module):
    def __init__(self):
        super(Conv, self).__init__()

    def forward(self, x, kernel):
        # kernel is a runtime input, not a fixed parameter
        return F.conv2d(x, kernel, groups=256)

model = Conv()
dummy_input = (torch.randn([1, 256, 21, 21]), torch.randn([256, 1, 4, 4]))
print(model(*dummy_input).size())

model_path = 'tmp.onnx'
torch.onnx.export(model, dummy_input, model_path, verbose=True, export_params=True)
```
The problem is as follows:
```
[09/06/2021-15:10:02] [E] [TRT] Conv_0: kernel weights has count 0 but 4096 was expected
[09/06/2021-15:10:02] [E] [TRT] Conv_0: count of 0 weights in kernel, but kernel dimensions (4,4) with 256 input channels, 256 output channels and 256 groups were specified. Expected Weights count is 256 * 4*4 * 256 / 256 = 4096
```
TensorRT Version: 188.8.131.52
GPU Type: V100
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 8.0.2
Operating System + Version: CentOS
Python Version (if applicable): 3.6
PyTorch Version (if applicable): 1.6
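For reference, one workaround I have been experimenting with is to rewrite the depthwise F.conv2d as unfold + multiply + sum, so the kernel is consumed by elementwise ops rather than a Conv node with non-constant weights. This is only a sketch: it matches F.conv2d numerically in PyTorch, but I have not yet verified the exported ONNX inside TensorRT, and the function name is my own.

```python
import torch
import torch.nn.functional as F

def depthwise_conv_via_unfold(x, kernel):
    # Equivalent to F.conv2d(x, kernel, groups=C) for a depthwise kernel
    # of shape [C, 1, kH, kW], expressed without a Conv node so the
    # kernel can stay a runtime input.
    N, C, H, W = x.shape
    kH, kW = kernel.shape[2], kernel.shape[3]
    # Extract sliding patches: [N, C*kH*kW, L] with L = (H-kH+1)*(W-kW+1)
    patches = F.unfold(x, kernel_size=(kH, kW))
    patches = patches.view(N, C, kH * kW, -1)   # [N, C, kH*kW, L]
    k = kernel.view(1, C, kH * kW, 1)           # broadcastable weights
    out = (patches * k).sum(dim=2)              # [N, C, L]
    return out.view(N, C, H - kH + 1, W - kW + 1)

x = torch.randn(1, 256, 21, 21)
kernel = torch.randn(256, 1, 4, 4)
ref = F.conv2d(x, kernel, groups=256)
alt = depthwise_conv_via_unfold(x, kernel)
print(torch.allclose(ref, alt, atol=1e-4))
```

The trade-off is extra memory for the unfolded patches, but the resulting ONNX graph contains only data-movement and elementwise ops, which do not require constant weights.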