Hi,
I’m trying to convert an SSD ONNX model to TensorRT with the onnx2trt executable.
Because the model contains NonMaxSuppression, I wrote a plugin that inherits from IPluginV2DynamicExt to support dynamic shapes.
After NonMaxSuppression, parsing aborts at a TopK layer with the message below:
While parsing node number 498 [TopK → “TopK_717”]:
ERROR: /home/u5393118/TensorRT/parsers/onnx/builtin_op_importers.cpp:3283 In function importTopK:
[8] Assertion failed: inputs.at(1).is_weights()
I’m not sure whether modifying the TopK op would solve this issue, or whether I shouldn’t touch it since it’s a built-in op.
Or is another solution recommended?
Thank you.
Environment
TensorRT Version: 7.0.0-1
GPU Type: Tesla V100
Nvidia Driver Version: 450.51.05
CUDA Version: 11.0
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Hi, I also met this problem when converting a PyTorch model to TensorRT.
The PyTorch model was converted to ONNX format and
checked with onnx.checker.check_model with no errors.
I also updated from TensorRT 7 to TensorRT 8, which seems to support the TopK op, but I still get an error like this:
ModelImporter.cpp:744: ERROR: builtin_op_importers.cpp:4176 In function importTopK:[8] Assertion failed: (inputs.at(1).is_weights()) && “This version of TensorRT only supports input K as an initializer.”
It seems like an initializer problem. Please give me some advice on how to solve it.
This is because the “K” input of the TopK layer in your model is a dynamic value.
For example, you sort something with a rule under which the output size may differ between runs.
The TensorRT runtime only supports a constant K value, so you must make sure K (and therefore the output size) is not dynamic.
Issue here.
Hi, in my model the K of the TopK layer is a constant value; it’s an int. I don’t think it’s dynamic, but I still get the problem:
ERROR: builtin_op_importers.cpp:4176 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && “This version of TensorRT only supports input K as an initializer.”
Have you checked in --verbose mode?
Make sure which layer the failure happens in.
In my case, even though the ONNX graph showed a constant K, it sometimes didn’t behave properly when I converted it.
[05/10/2021-15:02:40] [E] [TRT] ModelImporter.cpp:741: — End node —
[05/10/2021-15:02:40] [E] [TRT] ModelImporter.cpp:744: ERROR: builtin_op_importers.cpp:4176 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && “This version of TensorRT only supports input K as an initializer.”
[05/10/2021-15:02:40] [E] Failed to parse onnx file
[05/10/2021-15:02:40] [E] Parsing model failed
Hi, could you check further back in the logs? Verify that the input and output shapes of this layer match what the model shows in Netron.
Also check in Netron whether K is a constant.
Hello,
I am having the same problem.
How do I go about fixing it?
The error is shown below, thanks.
Input filename: /content/nvidia/updated_model2.onnx
ONNX IR version: 0.0.7
Opset version: 11
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.6).
[2021-07-26 12:40:26 INFO] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 0, GPU 254 (MiB)
Parsing model
[2021-07-26 12:40:26 WARNING] /content/nvidia/onnx-tensorrt/onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2021-07-26 12:40:26 WARNING] /content/nvidia/onnx-tensorrt/onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
While parsing node number 255 [TopK → “StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0”]:
— Begin node —
input: “StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/Select:0”
input: “Unsqueeze__600:0”
output: “StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:0”
output: “StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2:1”
name: “StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/SortByField/TopKV2”
op_type: “TopK”
attribute {
name: “sorted”
i: 1
type: INT
}
— End node —
ERROR: /content/nvidia/onnx-tensorrt/builtin_op_importers.cpp:4293 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && “This version of TensorRT only supports input K as an initializer.”
I’ve been struggling with the same error. Any solution?
Environment
TensorRT Version: 8.0.1.6
GPU Type: Nvidia 2060
Nvidia Driver Version: 470.57.02
CUDA Version: 11.3.1
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.5