nvonnxparser::IParser::parse() fails, and TensorRT reports a parameter check failure


I tried to test BackgroundMattingV2's ONNX model on the TensorRT platform, but the function nvonnxparser::IParser::parse() returns failure, and TensorRT reports the errors below:

TensorRT_ERROR: Parameter check failed at: Layers.cpp::nvinfer1::TopKLayer::TopKLayer::3528, condition: k > 0 && k <= MAX_TOPK_K
TensorRT_INTERNAL_ERROR: Assertion failed: mParams.k > 0

So, how can I fix these errors?


TensorRT Version: v7.2.3.4
GPU Type: RTX 2070
Nvidia Driver Version:
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below:


import onnx

filename = yourONNXmodel  # path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is malformed
  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec `--verbose` log for further debugging.
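For reference, a typical trtexec invocation for this kind of check might look like the following (a sketch, not from the thread; the model filename matches the attachment below, and trtexec ships in the TensorRT `bin` directory):

```shell
# Parse the ONNX model and try to build an engine, logging every step.
# --verbose prints the full parser/builder trace needed for debugging;
# redirecting both streams captures warnings as well as errors.
trtexec --onnx=onnx_mobilenetv2_hd.onnx --verbose > trtexec_log.txt 2>&1
```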

testBackgroundMattingV2.cpp (4.5 KB)
onnx_mobilenetv2_hd.onnx (19.1 MB)
trtexec_log.txt (279.7 KB)

OK, I have uploaded the ONNX model, my C++ code that loads the model, and the trtexec report. Please check them, thanks!

By the way, the TensorRT and cuDNN versions I filled in before were wrong; I have corrected them.


Please refer to the doc below and make sure the K value you're using is greater than 0 and at most 3840 (MAX_TOPK_K).

We also recommend trying the latest TensorRT version. Please let us know if you still face this issue.

Thank you.

I exported their original .pth model to ONNX format on PyTorch v1.9.0 with different configs, but regardless of which exported model I use, the parse() function always reports the errors below:

Input filename: G:\AI\PretrainedModel\BackgroundMattingV2\Onnx\resnet101.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.9
Model version: 0
Doc string:

TensorRT_WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT_WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
TensorRT_ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1
ERROR: builtin_op_importers.cpp:3773 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Assertion failed: false, file G:\VC15\QuickBroadCast\test-app\testBackgroundMattingV2\testBackgroundMattingV2.cpp, line 115

I also downloaded the newest TensorRT v8.0.1 for my test, but the parse() function reports the errors below:

TensorRT_WARNING: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT_ERROR: [graph.cpp::nvinfer1::builder::Node::computeInputExecutionUses::519] Error Code 9: Internal Error (Floor_15: IUnaryLayer cannot be used to compute a shape tensor)
TensorRT_ERROR: ModelImporter.cpp:720: While parsing node number 28 [Resize -> "412"]:
TensorRT_ERROR: ModelImporter.cpp:721: --- Begin node ---
TensorRT_ERROR: ModelImporter.cpp:722: input: "src"
input: "403"
input: "411"
input: "410"
output: "412"
name: "Resize_28"
op_type: "Resize"
attribute {
  name: "coordinate_transformation_mode"
  s: "pytorch_half_pixel"
  type: STRING
}
attribute {
  name: "cubic_coeff_a"
  f: -0.75
  type: FLOAT
}
attribute {
  name: "mode"
  s: "linear"
  type: STRING
}
attribute {
  name: "nearest_mode"
  s: "floor"
  type: STRING
}

TensorRT_ERROR: ModelImporter.cpp:723: --- End node ---
TensorRT_ERROR: ModelImporter.cpp:726: ERROR: ModelImporter.cpp:179 In function parseGraph:


Is this model still possible to run on TensorRT?

Hi @pango99,

Looks like you're using an unsupported op in your model. Please refer to the docs below to check the operators supported by TensorRT.

Please refer to the links below related to custom plugin implementation and a sample: