Converting ONNX to engine fails on TensorRT 7.1.3.4


TensorRT Version: 7.1.3.4
GPU Type: Tesla T4
Nvidia Driver Version: 440.33.01
CUDA Version: 10.2
CUDNN Version: 8.0.1
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.7
PyTorch Version (if applicable): 1.3

Steps To Reproduce

1. Generate the ONNX model
import torch
from torchvision import models

shufflenet = models.shufflenet_v2_x1_0(pretrained=False)
shufflenet.fc = torch.nn.Linear(shufflenet._stage_out_channels[-1], 512)
model = shufflenet
dummy_input = torch.randn(1, 3, 224, 224)
input_names = ["input"]
output_names = ["output"]
dynamic_axes = {"input": {0: "batchsize"}, "output": {0: "batchsize"}}
torch.onnx.export(model, dummy_input, "shufflenet.onnx", verbose=False,
                  opset_version=10, input_names=input_names,
                  output_names=output_names, dynamic_axes=dynamic_axes)
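Note: at opset 10, tracing tends to bake the export-time batch size (here 1) into the Reshape ops of ShuffleNetV2's channel shuffle, even with dynamic_axes set. A minimal NumPy sketch of the effect (the function names are illustrative, not from torchvision):

```python
import numpy as np

def channel_shuffle_fixed_batch(x, groups):
    # Mimics what the traced graph records: the batch size seen at export
    # time (1) is a constant in the Reshape, so any other batch size fails.
    n, c, h, w = 1, x.shape[1], x.shape[2], x.shape[3]  # batch hard-coded to 1
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

def channel_shuffle_dynamic(x, groups):
    # Using -1 for the batch dimension keeps the Reshape batch-agnostic.
    c, h, w = x.shape[1], x.shape[2], x.shape[3]
    return (x.reshape(-1, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, c, h, w))

x = np.zeros((4, 8, 2, 2), dtype=np.float32)  # batch size 4
try:
    channel_shuffle_fixed_batch(x, 2)
except ValueError as e:
    print("fixed-batch reshape fails at batch 4:", e)
print(channel_shuffle_dynamic(x, 2).shape)  # (4, 8, 2, 2)
```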

2. Convert to a TensorRT engine
trtexec --onnx=shufflenet.onnx --explicitBatch --minShapes=input:1x3x224x224 --optShapes=input:4x3x224x224 --maxShapes=input:4x3x224x224

Error output
[07/28/2020-10:52:23] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/28/2020-10:52:26] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[07/28/2020-10:52:29] [E] [TRT] …/builder/cudnnBuilderUtils.cpp (427) - Cuda Error in findFastestTactic: 700 (an illegal memory access was encountered)
[07/28/2020-10:52:29] [E] [TRT] …/rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 700 (an illegal memory access was encountered)
terminate called after throwing an instance of 'nvinfer1::CudaError'
what(): std::exception
^CAborted (core dumped)

Hi @taoze_happy,
Your current model works for batch size 1. To make it work for other batch sizes, update all the Reshape ops in the ONNX graph so that they use a correct, batch-agnostic shape value.


Thank you.