TensorRT only supports input K as an initializer

Description

I trained a model using the TensorFlow 2 Object Detection API and exported it to ONNX.
I then tried to convert it to TensorRT on an NVIDIA Jetson, which failed because TensorRT did not recognise NonMaxSuppression.
I then installed TensorRT 8.0.1.6, TensorRT OSS and the latest onnx-tensorrt on a separate machine.
After building those successfully, I tried converting the model to TRT again, and since then it consistently fails with the error below.


Input filename: /content/nvidia/face-model2-float-onnxsim.onnx
ONNX IR version: 0.0.7
Opset version: 11
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.6).
[2021-07-26 14:57:16 INFO] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 0, GPU 254 (MiB)
Parsing model
[2021-07-26 14:57:16 WARNING] /content/nvidia/onnx-tensorrt/onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2021-07-26 14:57:16 WARNING] /content/nvidia/onnx-tensorrt/onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
While parsing node number 255 [TopK -> "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"]:
--- Begin node ---
input: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Select:0"
input: "Unsqueeze__600:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0"
output: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:1"
name: "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}
--- End node ---
ERROR: /content/nvidia/onnx-tensorrt/builtin_op_importers.cpp:4293 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."

I have run the model through onnx-simplifier, yet the error persists. I have also tried Polygraphy's surgeon tool, all to no avail.
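
For reference, these are roughly the simplification commands I ran (the pre-simplification file name is illustrative; the Polygraphy step uses surgeon sanitize with constant folding to try to turn the TopK K input into an initializer):

python -m onnxsim face-model2-float.onnx face-model2-float-onnxsim.onnx
polygraphy surgeon sanitize face-model2-float-onnxsim.onnx --fold-constants -o face-model2-float-folded.onnx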

Environment

TensorRT Version: TensorRT 8.0.1.6
GPU Type:
Nvidia Driver Version: Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: CUDNN_MAJOR 7
Operating System + Version: Ubuntu 18.04.5 LTS
Python Version (if applicable): 3.7.11
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

The linked folder contains the notebook I used for the experiments as well as the converted models.
https://drive.google.com/drive/folders/1mqETw7Ltjcwf8wgAsV3n6SP9Kx0v3xg6?usp=sharing

Steps To Reproduce

The linked notebook contains the steps I used for the export and conversion; the full error output is shown in the description above.

Your help will be most appreciated.

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
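
For example, assuming the model path from the log above:

python check_model.py /content/nvidia/face-model2-float-onnxsim.onnx

If the checker raises no error, the model is structurally valid ONNX.
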
  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
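
A sample invocation, assuming the model path above (adjust the engine output path as needed), would be:

trtexec --onnx=/content/nvidia/face-model2-float-onnxsim.onnx --saveEngine=face-model2.trt --verbose
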
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi,

Thanks, the ONNX model and code I used are here: models - Google Drive

Thanks in anticipation

Hi,
I am still waiting for your help, please.
I have attached the model again; it is here:
face-model2-float-onnxsim.onnx (17.9 MB)

Thanks so much

Hi @source821,

Our team is looking into this issue. Please allow us some time to get back to you on this.

Thank you.

@source821,

Based on the information provided above, it looks like you are trying to convert a model from the TensorFlow Object Detection API. When we checked the ONNX file, as far as we can tell it looks to be an SSD MobileNet V2 320x320 model.

Please refer to TensorRT/samples/python/tensorflow_object_detection_api. This sample helps in converting most of the models trained via the TF OD API.
This work is still in progress and will soon be available in the TRT OSS repo. As of now it is a private repo; you can DM me for access.

Regardless of how the network was trained, it is highly recommended to use the TFOD exporter script first. The next step would be to run create_onnx.py from the project shared above; for an SSD MobileNet V2 320x320 model a sample command would be:

python create_onnx.py --pipeline_config /path_to/pipeline.config --saved_model /path_to/saved_model --onnx /dir_to_save/model.onnx --batch_size 1

This should automatically build the ONNX model if the network is supported.
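
If you want to sanity-check the generated graph, a small snippet along the lines of the one below (reusing the output path from the command above) lists the op types in the model; as far as we understand, the sample replaces the TF NMS postprocessing with the EfficientNMS_TRT plugin op, so you should no longer see the data-dependent TopK that caused the original error:

import onnx

# Load the ONNX file written by create_onnx.py (path from the command above)
model = onnx.load("/dir_to_save/model.onnx")

# Print the distinct op types; the NMS postprocessing should appear as a single
# EfficientNMS_TRT plugin node rather than NonMaxSuppression/TopK subgraphs
print(sorted({node.op_type for node in model.graph.node}))
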
The next step is to run build_engine.py; a sample command is:

python build_engine.py --onnx /dir_to_saved/model.onnx --engine /dir_to_save/engine.trt --precision fp32 --verbose
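
Once the engine is built, a minimal sketch like the one below (assuming the TensorRT Python bindings are installed and using the engine path from the command above) can be used to confirm that the engine deserializes correctly and to inspect its bindings:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# EfficientNMS_TRT lives in libnvinfer_plugin, so register the standard plugins first
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Deserialize the engine produced by build_engine.py
with open("/dir_to_save/engine.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List the input/output bindings and their shapes
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))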

Thank you.

Hi @spolisetty

Thanks. Unfortunately the URL is not accessible.

Can you share the URL again please.

Thanks a lot

@source821,

Please DM me your GitHub ID/email and I will provide you access.

Hi @spolisetty
Thanks for the code.
Do you have any idea when we can start using EfficientNMS on Jetson? Or can you suggest a workaround?

Thanks

Hi,

Regarding the above, we recommend posting your query on the Jetson forum to get better help.

Thank you.