Mod operator unsupported in TensorRT 8.4.1 (included w/ Jetpack 5.0.2)

Description

I’m trying to convert an ONNX model (opset 14), exported from a Detectron2 ResNet-50 Mask R-CNN saved model, into a TensorRT 8.4.1 engine for deployment on a Jetson Xavier NX. However, the ONNX model cannot be loaded because node 113 of the graph (a Mod operator) is unsupported in TensorRT. Further details, including the error message and the model.onnx file, can be found below. What can I do to solve this problem? Do I need to write a custom Mod plugin for TensorRT? If so, how can I do that?

Environment

TensorRT Version: 8.4.1 (included w/ Jetpack 5.0.2)
GPU Type: Jetson Xavier NX 16GB eMMC
CUDA Version: 11.4.14
CUDNN Version: 8.4.1
Operating System + Version: Jetson Linux 35.1
Python Version (if applicable): 3.8

Relevant Files

ONNX model: model.onnx - Google Drive

Error Message

Namespace(calib_batch_size=8, calib_cache='./calibration.cache', calib_input=None, calib_num_images=5000, conf_thres=0.4, end2end=False, engine='./trt_models/square_maskrcnn.trt', iou_thres=0.5, max_det=100, onnx='./onnx_models/square_maskrcnn.onnx', precision='fp16', verbose=False, workspace=1)
[12/21/2022-13:25:03] [TRT] [I] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 216, GPU 4032 (MiB)
[12/21/2022-13:25:09] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +131, GPU +139, now: CPU 366, GPU 4188 (MiB)
export.py:109: DeprecationWarning: Use set_memory_pool_limit instead.
  self.config.max_workspace_size = workspace * (2 ** 30)
[12/21/2022-13:25:10] [TRT] [W] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/21/2022-13:25:10] [TRT] [I] No importer registered for op: Mod. Attempting to import as plugin.
[12/21/2022-13:25:10] [TRT] [I] Searching for plugin: Mod, plugin_version: 1, plugin_namespace:
Failed to load ONNX file: /home/shinkeixavier/project/tensorrt/onnx_models/square_maskrcnn.onnx
In node 113 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"

Hi,
Please share the ONNX model and the conversion script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

1. Validate your model with the snippet below.

check_model.py

import onnx

# Load the model and run the ONNX checker; it raises an exception
# if the model is structurally invalid.
filename = "model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2. Try running your model with the trtexec command, as shown below.
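
For example, using the model path from your log (--verbose prints the full parser output, which is what we need for debugging):

trtexec --onnx=./onnx_models/square_maskrcnn.onnx --verbose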

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

I have shared the ONNX model as a Google Drive download. Can you try to replicate the error with the specs I provided?

Hi,

Please try the latest TensorRT version, 8.5.1. With it, we see a different error, shown below.

[12/26/2022-04:49:31] [V] [TRT] Searching for input: onnx::TopK_1890
[12/26/2022-04:49:31] [V] [TRT] TopK_1411 [TopK] inputs: [onnx::Shape_894 -> (1, 338688)[FLOAT]], [onnx::TopK_1890 -> (1)[INT32]],
[12/26/2022-04:49:31] [E] [TRT] parsers/onnx/ModelImporter.cpp:726: While parsing node number 1411 [TopK -> "onnx::Concat_1891"]:
[12/26/2022-04:49:31] [E] [TRT] parsers/onnx/ModelImporter.cpp:727: --- Begin node ---
[12/26/2022-04:49:31] [E] [TRT] parsers/onnx/ModelImporter.cpp:728: input: "onnx::Shape_894"
input: "onnx::TopK_1890"
output: "onnx::Concat_1891"
output: "onnx::Add_1892"
name: "TopK_1411"
op_type: "TopK"
attribute {
  name: "axis"
  i: 1
  type: INT
}
attribute {
  name: "largest"
  i: 1
  type: INT
}
attribute {
  name: "sorted"
  i: 1
  type: INT
}

[12/26/2022-04:49:31] [E] [TRT] parsers/onnx/ModelImporter.cpp:729: --- End node ---
[12/26/2022-04:49:31] [E] [TRT] parsers/onnx/ModelImporter.cpp:731: ERROR: parsers/onnx/ModelImporter.cpp:168 In function parseGraph:
[6] Invalid Node - TopK_1411
This version of TensorRT only supports input K as an initializer. Try applying constant folding on the model using Polygraphy: https://github.com/NVIDIA/TensorRT/tree/master/tools/Polygraphy/examples/cli/surgeon/02_folding_constants
[12/26/2022-04:49:31] [E] Failed to parse onnx file
[12/26/2022-04:49:31] [I] Finish parsing network model
[12/26/2022-04:49:31] [E] Parsing model failed
[12/26/2022-04:49:31] [E] Failed to create engine from model or file.
[12/26/2022-04:49:31] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8501] # trtexec --onnx=model.onnx --verbose

A dynamic K value for TopK is currently not supported; this will be fixed in a future release.

Please try applying constant folding on the model using Polygraphy: TensorRT/tools/Polygraphy/examples/cli/surgeon/02_folding_constants at master · NVIDIA/TensorRT · GitHub
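
For reference, the constant-folding step from that example can be run with the Polygraphy CLI roughly as follows (the file names here are placeholders):

polygraphy surgeon sanitize model.onnx --fold-constants -o model_folded.onnx

If the K input of the TopK node can be computed at fold time, it becomes an initializer, which this version of the parser accepts.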

Thank you.

Is the error shown above produced with TensorRT 8.5.1?

Yes.