Description
I’m currently facing an issue when creating a TensorRT engine from the torchvision MaskRCNN model: `[8] Assertion failed: inputs.at(1).is_weights()`.
I’m running on a fresh installation of JetPack 4.4 on a Jetson Xavier. I start a container with `docker run --rm -it --runtime nvidia --volume $(pwd):/mnt/torch nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3 /bin/bash` and follow the instructions in the MaskRCNN section of https://pytorch.org/docs/stable/torchvision/models.html:
```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=11)
```
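This isn’t part of the torchvision instructions, but as a quick sanity check of the export itself the file can be validated with the `onnx` package before handing it to TensorRT; a minimal sketch, assuming `onnx` is installed in the container (e.g. via `pip3 install onnx`):

```python
import onnx

# Load the exported file and run ONNX's structural validation;
# this only checks graph consistency, not TensorRT compatibility.
model_proto = onnx.load("mask_rcnn.onnx")
onnx.checker.check_model(model_proto)
print("ONNX check passed, opset:", model_proto.opset_import[0].version)
```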
With the ONNX file created, I run trtexec and get this error:
```
----------------------------------------------------------------
Input filename: mask_rcnn.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[07/27/2020-16:44:32] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
ERROR: builtin_op_importers.cpp:2179 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights()
[07/27/2020-16:44:32] [E] Failed to parse onnx file
[07/27/2020-16:44:32] [E] Parsing model failed
[07/27/2020-16:44:32] [E] Engine creation failed
[07/27/2020-16:44:32] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=mask_rcnn.onnx --explicitBatch
```
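The same parse failure can presumably be reproduced through the TensorRT Python API, which at least prints the parser errors programmatically; a minimal sketch, assuming the TensorRT Python bindings that ship with JetPack are available:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(EXPLICIT_BATCH)
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("mask_rcnn.onnx", "rb") as f:
    parsed = parser.parse(f.read())

if not parsed:
    # Dump every parser error; this should point at the failing Pad node.
    for i in range(parser.num_errors):
        print(parser.get_error(i))
```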
Has anyone else run into and fixed this issue before?
Thanks a lot.
Environment
TensorRT Version: 7.1.3.0
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: JetPack 4.4
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): 1.6
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3
Steps To Reproduce
Please follow:
- Flash the Jetson with JetPack 4.4 and install all the packages (CUDA, cuDNN, etc.);
- Run `docker run --rm -it --runtime nvidia --volume $(pwd):/mnt/torch nvcr.io/nvidia/l4t-pytorch:r32.4.3-pth1.6-py3 /bin/bash`;
- In a `python3` console, export the ONNX MaskRCNN example from https://pytorch.org/docs/stable/torchvision/models.html (the snippet above);
- And finally run `/usr/src/tensorrt/bin/trtexec --onnx=mask_rcnn.onnx --explicitBatch`.
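In case it helps with diagnosis, my reading of the assertion (an assumption on my part) is that an opset-11 Pad node's second input, the pad amounts, is a runtime-computed tensor rather than constant weights. A small script like the following can list the Pad nodes in the exported graph and whether their pads input is an initializer; again just a sketch assuming the `onnx` package is installed:

```python
import onnx

model_proto = onnx.load("mask_rcnn.onnx")
initializer_names = {init.name for init in model_proto.graph.initializer}

for node in model_proto.graph.node:
    if node.op_type == "Pad":
        # Opset 11 Pad inputs: [data, pads, constant_value (optional)].
        pads_input = node.input[1] if len(node.input) > 1 else None
        # Note: this only checks graph initializers, so pads coming from
        # Constant nodes would still show up as "is initializer: False".
        print(node.name, "pads from:", pads_input,
              "is initializer:", pads_input in initializer_names)
```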