Convert ONNX model to TRT

Description

I was trying to convert the MDNet model to a TRT engine. I can successfully export the ONNX model. However, whether I use onnx2trt or the ONNX parser to build the TRT engine, I get this error.

[2020-04-16 09:56:51 WARNING] /home/chieh/github/onnx-tensorrt/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 25 [Pad -> "68"]:
ERROR: /home/chieh/github/onnx-tensorrt/builtin_op_importers.cpp:2100 In function importPad:
[8] Assertion failed: onnx_padding.size() == 8 && onnx_padding[0] == 0 && onnx_padding[1] == 0 && onnx_padding[4] == 0 && onnx_padding[5] == 0 && "This version of TensorRT only supports padding on the outer two dimensions on 4D tensors!"

I tested opset versions 10 and 11, but I got the same error.
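To see which dimensions the offending Pad node is asked to pad, the exported graph can be inspected with the onnx Python package. A minimal sketch (assuming the file is named MDNet.onnx as in the export step below; in opset 11 the pads arrive as an input/initializer, while in opset 10 they are a node attribute):

import onnx
from onnx import numpy_helper

model = onnx.load("MDNet.onnx")
initializers = {init.name: numpy_helper.to_array(init)
                for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Pad":
        if len(node.input) > 1 and node.input[1] in initializers:
            # opset 11: pads are passed as the second input (an initializer here)
            pads = initializers[node.input[1]].tolist()
        else:
            # opset <= 10: pads are stored as a node attribute
            pads = [list(attr.ints) for attr in node.attribute if attr.name == "pads"]
        print(node.output[0], pads)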

Here is the relevant part of the MDNet model code.


import os
from collections import OrderedDict

import torch.nn as nn


class MDNet(nn.Module):
    def __init__(self, model_path=None, K=1):
        super(MDNet, self).__init__()
        self.K = K
        self.layers = nn.Sequential(OrderedDict([
                ('conv1', nn.Sequential(nn.Conv2d(3, 96, kernel_size=7, stride=2),
                                        nn.ReLU(inplace=True),
                                        nn.LocalResponseNorm(2),
                                        nn.MaxPool2d(kernel_size=3, stride=2))),
                ('conv2', nn.Sequential(nn.Conv2d(96, 256, kernel_size=5, stride=2),
                                        nn.ReLU(inplace=True),
                                        nn.LocalResponseNorm(2),
                                        nn.MaxPool2d(kernel_size=3, stride=2))),
                ('conv3', nn.Sequential(nn.Conv2d(256, 512, kernel_size=3, stride=1),
                                        nn.ReLU(inplace=True))),
                ('fc4',   nn.Sequential(nn.Linear(512 * 3 * 3, 512),
                                        nn.ReLU(inplace=True))),
                ('fc5',   nn.Sequential(nn.Dropout(0.5),
                                        nn.Linear(512, 512),
                                        nn.ReLU(inplace=True)))]))

        self.branches = nn.ModuleList([nn.Sequential(nn.Dropout(0.5),
                                                     nn.Linear(512, 2)) for _ in range(K)])

        for m in self.layers.modules():
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0.1)
        for m in self.branches.modules():
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)

        if model_path is not None:
            if os.path.splitext(model_path)[1] == '.pth':
                self.load_model(model_path)
            elif os.path.splitext(model_path)[1] == '.mat':
                self.load_mat_model(model_path)
            else:
                raise RuntimeError('Unknown model format: {:s}'.format(model_path))
        self.build_param_dict()
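None of the layers above uses explicit padding, so the Pad node presumably comes from how nn.LocalResponseNorm is decomposed during ONNX export. A quick way to check that assumption is to export the LRN layer on its own (the input shape below is only illustrative):

import torch
import torch.nn as nn

lrn = nn.LocalResponseNorm(2)
dummy = torch.randn(1, 96, 25, 25)  # illustrative shape, matching conv1's output channels
torch.onnx.export(lrn, dummy, "lrn_only.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])
# Inspect lrn_only.onnx (e.g. with the snippet above) to see whether it
# contains the same Pad pattern as the full MDNet export.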

Environment

TensorRT version: 7.0.0.11
CUDA version: 10.0
TensorFlow-gpu version: 1.14.0
cuDNN version: 7.6.5
GPU: GTX 1060
Ubuntu version: 18.04
Python version: 3.6.9
NVIDIA driver version: 440.33.01
PyTorch version: 1.3.1

Steps To Reproduce

  1. git clone https://github.com/hyeonseobnam/py-MDNet
  2. Add these lines to py-MDNet/pretrain/train_mdnet.py before if __name__ == "__main__":
    x = torch.randn(1, 3, 107, 107, requires_grad=True).cuda()  # Input
    torch.onnx.export(model,                     # model being run
                      x,                         # model input (or a tuple for multiple inputs)
                      "MDNet.onnx",              # where to save the model (can be a file or file-like object)
                      export_params=True,        # store the trained parameter weights inside the model file
                      opset_version=11,          # the ONNX version to export the model to
                      do_constant_folding=True,  # whether to execute constant folding for optimization
                      input_names=['input'],     # the model's input names
                      output_names=['output'])   # the model's output names
  3. Download the VOT dataset from here.
  4. Run this command:
python3 pretrain/train_mdnet.py -d vot 

After we get the ONNX model, we can convert it to a TRT engine:

onnx2trt MDNet.onnx -o model.trt
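The ONNX-parser path mentioned above looks roughly like this (a minimal sketch against the TensorRT 7 Python API; the workspace size is an arbitrary placeholder), and it hits the same importPad assertion:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("MDNet.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))  # reports the importPad failure
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28     # 256 MiB, arbitrary for this test
    engine = builder.build_engine(network, config)
    if engine is not None:
        with open("model.trt", "wb") as f:
            f.write(engine.serialize())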

Thank you!

Hi,

Conditions and limitations of the padding layer:

  • The input tensor A must have three or more dimensions.
  • The padding can only be applied along the two innermost dimensions.
  • Only zero-padding is supported.
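Concretely, the assertion in the error log checks the Pad node's ONNX pads vector. For a 4D NCHW tensor it is ordered [N_begin, C_begin, H_begin, W_begin, N_end, C_end, H_end, W_end], so entries 0, 1, 4 and 5 (the batch and channel pads) must be zero. An illustrative sketch:

# Spatial-only padding (H/W, the two innermost dims): accepted.
supported_pads = [0, 0, 1, 1, 0, 0, 1, 1]

# Padding on the channel dimension: entries 1 and 5 are non-zero,
# so importPad rejects it with the assertion quoted above.
unsupported_pads = [0, 1, 0, 0, 0, 1, 0, 0]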

Please refer to the link below:

Thanks