Unable to convert ONNX model to TensorRT

Hi, I am trying to convert the EfficientDet model from this repo, which is implemented in PyTorch, to TensorRT so I can deploy it on edge devices such as the Jetson Xavier and in an NVIDIA DeepStream pipeline.
My conversion pipeline is PyTorch => ONNX => TensorRT.

ONNX conversion code:

    # construct dummy data with static batch_size
    x = torch.randn((batch_size, 3, IN_IMAGE_H, IN_IMAGE_W), requires_grad=True).cuda()

    # file name
    onnx_file_name = "EfficientDet{}_{}.onnx".format(coef, batch_size)

    # Export the model
    torch.onnx.export(model,
                      x,
                      onnx_file_name,
                      export_params=True,
                      opset_version=11,
                      do_constant_folding=True,
                      input_names=['input'], output_names=['output'],
                      dynamic_axes=None)

ONNX to TensorRT conversion commands (I tried both):

    onnx2trt EfficientDet0_1.onnx -o efficientdet0.trt -d 16

    trtexec --onnx=EfficientDet0_1.onnx --saveEngine=efficientdet0.trt --fp16

The PyTorch to ONNX conversion went fine, but when I try to convert the ONNX model to TensorRT with either of the commands above, this error occurs:

    ----------------------------------------------------------------
    Input filename:   EfficientDet0_1.onnx
    ONNX IR version:  0.0.6
    Opset version:    11
    Producer name:    pytorch
    Producer version: 1.5
    Domain:           
    Model version:    0
    Doc string:       
    ----------------------------------------------------------------
    Parsing model
    [2020-07-17 11:49:24 WARNING] [TRT]/home/htut/Desktop/onnx-tensorrt/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    [2020-07-17 11:49:24 WARNING] [TRT]/home/usr/Desktop/onnx-tensorrt/onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
    While parsing node number 53 [Pad -> "841"]:
    ERROR: /home/usr/Desktop/onnx-tensorrt/builtin_op_importers.cpp:2219 In function importPad:
    [8] Assertion failed: inputs.at(1).is_weights()

Environment

TensorRT Version: 7.1
GPU Type: GTX 1070
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04
Python Version: 3.6.10
PyTorch Version: 1.5.1

onnx model link : https://drive.google.com/file/d/1ZwzjAqaHj0Z3RenVkbC0qYWuSsCv0gbS/view?usp=sharing

How can I solve this problem? Thanks in advance!

P.S.

I converted the EfficientDet PyTorch pretrained models to ONNX by using this method: link

Hi @Htut,

TRT currently does not support convolutions where the weights are tensors.

Thanks!

Hi @AakankshaS,

Thanks for the reply. Is there any way to bypass this problem? According to the error message, the problem lies in a padding operation, so is there any way I can fix it by modifying the model? I am an amateur, so I am not sure where and how to modify it so that it can be converted to TensorRT.

Hi @Htut,
You could possibly use ONNX GraphSurgeon to replace the padding tensor (if it's coming from a constant node) and instead add it as a param to the conv node.
Thanks!

@AakankshaS
Do you have an example of how to do this with ONNX GraphSurgeon, please?