How to run PyTorch-trained MaskRCNN (detectron2) with TensorRT

I’m looking to deploy a detectron2/MaskRCNN model using TensorRT on a Jetson AGX Xavier. So far I haven’t been successful in importing the model into TensorRT.

By default, detectron2 exports to Caffe2; however, TensorRT doesn’t appear to accept Caffe2 as an input format. I’ve been able to export MaskRCNN to ONNX, but I’m getting errors with the latest onnx2trt:

Parsing model
Unsupported ONNX data type: UINT8 (2)
ERROR: data:192 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)

I’m still exploring a few other options for importing the model into TensorRT, but things don’t look promising. I’m hoping someone here has either done this before or is knowledgeable enough to suggest a better path forward. Currently I’m looking into why the model contains uint8 tensors and how to convert them.

I am aware of https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleUffMaskRCNN, but I'm looking for a solution based on detectron2. If switching to TensorFlow is the simplest option, however, then I’ll go that route.

Thank you in advance for any help!

Hi,

Caffe2 is included in the PyTorch package.
Please see the following topic for the prebuilt PyTorch package for Jetson devices:

Unsupported ONNX data type: UINT8 (2)

The above error indicates that the ONNX model uses the UINT8 data type, which the TensorRT ONNX parser does not support for input tensors.
Note that this is separate from INT8 inference precision, which is supported on the Xavier platform.

We are going to check this issue and will share more information with you later.

Thanks.