Error converting torchvision Mask RCNN to TensorRT engine

Description

I’m currently facing an issue creating a TensorRT engine from the torchvision Mask R-CNN model: `[8] Assertion failed: inputs.at(1).is_weights()`

I’m running a fresh installation of JetPack 4.4 on a Jetson Xavier. I start the container with `docker run --rm -it --runtime nvidia --volume $(pwd):/mnt/torch nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3 /bin/bash` and follow the instructions in the Mask R-CNN section of https://pytorch.org/docs/stable/torchvision/models.html:

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)

With the ONNX file created, I run trtexec and get this error:

----------------------------------------------------------------
Input filename:   mask_rcnn.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.6
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
ERROR: builtin_op_importers.cpp:2179 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights()
[07/27/2020-16:44:32] [E] Failed to parse onnx file
[07/27/2020-16:44:32] [E] Parsing model failed
[07/27/2020-16:44:32] [E] Engine creation failed
[07/27/2020-16:44:32] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=mask_rcnn.onnx --explicitBatch

Has somebody else fixed this issue before?

Thanks a lot.

Environment

TensorRT Version: 7.1.3.0
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: JetPack 4.4
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): 1.6
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3

Steps To Reproduce

Please follow:

  • Flash the Jetson with JetPack 4.4 and install all the packages (CUDA, cuDNN, etc.);
  • Run docker run --rm -it --runtime nvidia --volume $(pwd):/mnt/torch nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3 /bin/bash;
  • In a Python 3 console, export the ONNX Mask R-CNN example from https://pytorch.org/docs/stable/torchvision/models.html;
  • Finally, run /usr/src/tensorrt/bin/trtexec --onnx=mask_rcnn.onnx --explicitBatch.

Looks like the issue is with the weights: TRT currently does not support convolutions where the weights are tensors.
Please refer to the post below.
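For what it’s worth, the usual workaround for this class of parser error is to constant-fold the exported graph, so that inputs computed at export time (e.g. the `pads` input of Pad) become weights the parser can accept. A minimal sketch using onnx-graphsurgeon (this assumes the `onnx` and `onnx-graphsurgeon` packages are installed; `mask_rcnn_folded.onnx` is just an illustrative output name, and whether folding alone is sufficient for this particular model is untested here):

```python
def fold_onnx_constants(src_path, dst_path):
    """Constant-fold an ONNX graph with onnx-graphsurgeon so that subgraphs
    computable ahead of time (e.g. the `pads` input of Pad) are replaced by
    constants, which the TensorRT ONNX parser treats as weights."""
    # Imports are kept inside the function so this sketch can be loaded
    # even in environments where these packages are not installed.
    import onnx
    import onnx_graphsurgeon as gs

    graph = gs.import_onnx(onnx.load(src_path))
    # fold_constants() evaluates constant subgraphs in place; cleanup()
    # then removes the now-dead producer nodes and tensors.
    graph.fold_constants().cleanup()
    onnx.save(gs.export_onnx(graph), dst_path)
```

One would call `fold_onnx_constants("mask_rcnn.onnx", "mask_rcnn_folded.onnx")` and then retry trtexec on the folded file. Inputs that are genuinely dynamic (data-dependent shapes) cannot be folded this way.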


Thanks!

Thank you! Is there a way to convert it properly? I see you mentioned it would be possible using ONNX GraphSurgeon; if you could provide an example, that would be very valuable to me, as I’ve been dealing with this issue for a couple of days already and I’m quite new to this area.

Perhaps the ONNX optimizer does that? Any feedback?

Thanks a lot!

Hi, additionally, I’ve built TensorRT from master and now I get this error (with the same model):

While parsing node number 64 [Resize -> "373"]:
ERROR: /mnt/torch/TensorRT/parsers/onnx/ModelImporter.cpp:124 In function parseGraph:
[5] Assertion failed: ctx->tensors().count(inputName)

BR,
Sidnei
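For reference, `ctx->tensors().count(inputName)` fires when a node references a tensor name the parser has not registered, i.e. a name that is neither a graph input, an initializer, nor the output of an earlier node. A rough pure-Python sketch of that check (the graph is abstracted to plain tuples here; with a real model one would build the name sets from `model.graph.input`, `model.graph.initializer`, and all node outputs via the `onnx` package, then inspect the failing Resize node):

```python
def find_dangling_inputs(nodes, known_tensors):
    """Given nodes as (node_name, input_names) pairs and the set of tensor
    names that exist in the graph (inputs, initializers, and every node
    output), return (node_name, missing_input) pairs -- the condition
    behind `Assertion failed: ctx->tensors().count(inputName)`."""
    dangling = []
    for node_name, inputs in nodes:
        for name in inputs:
            # Empty input names are legal placeholders for optional inputs.
            if name and name not in known_tensors:
                dangling.append((node_name, name))
    return dangling
```

For example, a Resize node whose `roi` input name never appears among the graph’s tensors would be reported as `("Resize_64", "roi")` (names here are purely illustrative).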

Hi @sidneib,
Could you please share your ONNX model?

I can’t access this page for some reason.
Thanks!

It seems the PyTorch documentation is offline at the moment.

Could you share your ONNX model in the meantime, so that I can check on this?
Thanks!

Hi @AakankshaS, the easiest way to get the model is to run this code:

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)

You can use the latest PyTorch Docker image from the NVIDIA catalog.

Hi @AakankshaS, do you have any update on how I could overcome this issue?

Hi,

I could not reproduce this issue.
However, the link below might address your concern.


Thanks!