I’m trying to convert an ONNX model (opset=14), exported from a Detectron2 ResNet-50 Mask R-CNN saved model, to a TensorRT 8.4.1 engine to be deployed on a Jetson Xavier NX. However, the ONNX model cannot be loaded because node 113 of the graph (a Mod operator) is unsupported in TensorRT. Further details such as the error message and the model.onnx file can be found below. What can I do to solve this problem? Should I write a custom modulo operator (plugin) for TensorRT? If so, how can I do this?
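In case it helps: one workaround I'm considering, instead of a full custom plugin, is rewriting the Mod node in the ONNX graph before building the engine, e.g. with onnx-graphsurgeon. This is only a rough sketch, assuming the Mod operands are non-negative integers (where a % b == a - (a / b) * b holds) and that the rewrite is done on the exported file; the output filename is just an example.

import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("./onnx_models/square_maskrcnn.onnx"))

# Replace every Mod node with the equivalent Sub(a, Mul(Div(a, b), b)).
# NOTE: this matches Python-style modulo only for non-negative operands.
for i, node in enumerate([n for n in graph.nodes if n.op == "Mod"]):
    a, b = node.inputs
    original_outputs = list(node.outputs)
    node.outputs.clear()  # detach the Mod node so cleanup() removes it

    quotient = gs.Variable(name=f"mod_rewrite_{i}_quotient")
    product = gs.Variable(name=f"mod_rewrite_{i}_product")
    graph.nodes.append(gs.Node(op="Div", inputs=[a, b], outputs=[quotient]))
    graph.nodes.append(gs.Node(op="Mul", inputs=[quotient, b], outputs=[product]))
    graph.nodes.append(gs.Node(op="Sub", inputs=[a, product], outputs=original_outputs))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "./onnx_models/square_maskrcnn_no_mod.onnx")

I haven't verified this preserves the model's behavior end to end, so I'd still prefer guidance on the proper plugin route if that is the recommended fix.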
Environment
TensorRT Version: 8.4.1 (included w/ Jetpack 5.0.2)
GPU Type: Jetson Xavier NX 16GB eMMC
CUDA Version: 11.4.14
CUDNN Version: 8.4.1
Operating System + Version: Jetson Linux 35.1
Python Version (if applicable): 3.8
Namespace(calib_batch_size=8, calib_cache='./calibration.cache', calib_input=None, calib_num_images=5000, conf_thres=0.4, end2end=False, engine='./trt_models/square_maskrcnn.trt', iou_thres=0.5, max_det=100, onnx='./onnx_models/square_maskrcnn.onnx', precision='fp16', verbose=False, workspace=1)
[12/21/2022-13:25:03] [TRT] [I] [MemUsageChange] Init CUDA: CPU +181, GPU +0, now: CPU 216, GPU 4032 (MiB)
[12/21/2022-13:25:09] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +131, GPU +139, now: CPU 366, GPU 4188 (MiB)
export.py:109: DeprecationWarning: Use set_memory_pool_limit instead.
self.config.max_workspace_size = workspace * (2 ** 30)
[12/21/2022-13:25:10] [TRT] [W] onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/21/2022-13:25:10] [TRT] [I] No importer registered for op: Mod. Attempting to import as plugin.
[12/21/2022-13:25:10] [TRT] [I] Searching for plugin: Mod, plugin_version: 1, plugin_namespace:
Failed to load ONNX file: /home/shinkeixavier/project/tensorrt/onnx_models/square_maskrcnn.onnx
In node 113 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
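As an aside, the DeprecationWarning from export.py:109 in the log above seems unrelated to this failure; my understanding is that the replacement API in TensorRT 8.4 looks roughly like the sketch below (a generic example, not my actual export.py):

import tensorrt as trt

builder = trt.Builder(trt.Logger(trt.Logger.INFO))
config = builder.create_builder_config()
# Replaces the deprecated `config.max_workspace_size = workspace * (2 ** 30)`
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB, matching workspace=1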
Hi,
Please share the ONNX model and the conversion script, if not shared already, so that we can assist you better.
Alongside that, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import sys
import onnx

# Pass the path to your ONNX model as the first argument.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
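For example (using the path from your log): python check_model.py ./onnx_models/square_maskrcnn.onnx — if the checker reports no error, the ONNX graph itself is structurally valid and the failure is on the TensorRT parser side.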
2) Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
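For example, a minimal build command (paths taken from your log; on JetPack, trtexec is typically installed under /usr/src/tensorrt/bin):

/usr/src/tensorrt/bin/trtexec --onnx=./onnx_models/square_maskrcnn.onnx --saveEngine=./trt_models/square_maskrcnn.trt --fp16 --verbose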
Thanks!