Nvidia Jetson TX2 mrcnn onnx to TRT issue

Title: Issue with Converting Mask R-CNN Model to TensorRT Engine on Jetson TX2

I’m encountering an error while attempting to convert my Mask R-CNN model into a TensorRT engine on a Jetson TX2 device. Here’s a breakdown of the problem:

Error Message:

ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4519 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."

Setup Details:

  • Jetson TX2 device
  • TensorRT version: 8.2.1
  • CUDA version: Cuda compilation tools, release 10.2, V10.2.300

Steps Taken:

  1. Successfully generated the .onnx file for the Mask R-CNN model.
  2. Validated the .onnx file using onnx.checker.check_model() without any issues.
  3. Attempted to convert the .onnx file to a .trt file using the following command:
    ./trtexec --onnx=/path/to/mask_rcnn_model.onnx --saveEngine=model.trt

Issue Details:
The conversion process fails on a “TopK” operation in the ONNX model. The error message indicates that this version of TensorRT only supports the K input of TopK as an initializer, i.e. a constant baked into the graph, but in my model K appears to be computed at runtime by another node instead.

Any insights or suggestions on resolving this issue would be greatly appreciated. Thank you!


What is the original model format: TensorFlow, PyTorch, or another framework?

If the original model is TensorFlow, we do have an example of converting it into TensorRT.
Please follow the steps and let us know if you run into any issues:


Thank you for your reply. I am using mrcnn, which is a TensorFlow model with a Keras engine. Here is the link to the GitHub repository: MRCNN – matterport.

I have already checked out the link you sent but the model I am using is not in the compatibility list.

Additional information

  • I did the export with opset 11.
  • When running the model in training or during export, I saw a few warnings about deprecated methods in the keras and mrcnn libraries. In an attempt to fix my issues, I downloaded the libraries from the GitHub releases and used them as imports instead, then changed all the methods generating warnings to tf.compat.v1.{original_method}.
  • The issue seemed to come from tf.nn.top_k, so after changing the tf.compat.v1 methods I changed tf.nn.top_k to tf.math.top_k, according to this source: top_k tensorflow
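For reference, tf.nn.top_k is an alias of tf.math.top_k in TF2, so the rename alone does not change the exported graph: both map to the ONNX TopK op, which returns the k largest values along the last axis together with their indices. A NumPy mirror of that behaviour (my own sketch, not a library function):

```python
import numpy as np

def top_k(x, k):
    # Mirror of tf.math.top_k / ONNX TopK along the last axis:
    # the k largest values and their indices, sorted descending.
    idx = np.argsort(-x, axis=-1)[..., :k]
    return np.take_along_axis(x, idx, axis=-1), idx

vals, idx = top_k(np.array([1.0, 5.0, 3.0, 2.0]), 2)
# vals -> [5., 3.], idx -> [1, 2]
```

What matters for TensorRT is not which alias is called but whether k is a compile-time constant when the exporter traces the graph.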

onnx export

To export the model to ONNX, after training it I load the model using the intended class:

model = modellib.MaskRCNN(mode="inference", config=ModelConfig(), model_dir=MODEL_LOGS_DIR)

I then get the keras model:

keras_model = model.keras_model

Finally, I use tf2onnx like so:

# tf2onnx expects input_signature to be a sequence of tf.TensorSpec objects, not a NumPy array
input_signature = (tf.TensorSpec((1, ModelConfig.IMAGE_MIN_DIM, ModelConfig.IMAGE_MAX_DIM, 3), tf.float32, name="input_image"),)

# Convert to ONNX using tf2onnx
model_proto, _ = tf2onnx.convert.from_keras(keras_model, input_signature=input_signature, opset=11, output_path=MODEL_SAVE_PATH)

Libraries version

It took me a while to find the correct versions to get the onnx export, here is my pip freeze:
requirements.txt (2.8 KB)

I am using Python 3.7.9.


Could you try folding the constants to see if it helps?
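One way to do this is with Polygraphy's sanitize tool, which runs shape inference and folds every subgraph that evaluates to a constant; if K is computable at build time, this bakes it into the graph as an initializer. A sketch, with placeholder file names:

```shell
# Install Polygraphy with its ONNX dependencies
python3 -m pip install polygraphy onnx onnxruntime

# Fold constants in the exported model; sanitize repeats folding
# passes automatically until the graph stops changing
polygraphy surgeon sanitize /path/to/mask_rcnn_model.onnx \
    --fold-constants \
    -o mask_rcnn_model_folded.onnx
```

After folding, rerun trtexec on the folded model; if the TopK error persists, K is genuinely data-dependent and would need to be made constant in the TensorFlow code before export.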