Description
I trained a model with mmdetection on a Linux server and exported it to ONNX, and now I need to deploy it on Windows 10.
I get this error when I try to build the TensorRT engine:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
How should I fix it?
I would also like to know whether I can build the TensorRT engine on Linux and then deploy it on Windows using C++. Is there any difference between building it on Linux and building it on Windows (with the same CUDA/TensorRT versions)?
Environment
TensorRT Version: 8.0.1.6
GPU Type: RTX3060 laptop
Nvidia Driver Version: 477
CUDA Version: 11.1
CUDNN Version: 8.2
Operating System + Version: win10
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):
Relevant Files
Steps To Reproduce
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
detail:
[09/09/2021-16:21:18] [E] [TRT] ModelImporter.cpp:720: While parsing node number 1309 [TopK -> "2320"]:
[09/09/2021-16:21:18] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[09/09/2021-16:21:18] [E] [TRT] ModelImporter.cpp:722: input: "2301"
input: "2319"
output: "2320"
output: "2321"
name: "TopK_1309"
op_type: "TopK"
attribute {
  name: "axis"
  i: -1
  type: INT
}
attribute {
  name: "largest"
  i: 1
  type: INT
}
attribute {
  name: "sorted"
  i: 1
  type: INT
}
[09/09/2021-16:21:18] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[09/09/2021-16:21:18] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4292 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
&&&& FAILED TensorRT_Onnx.demo [TensorRT v8001] # D:\Release\OnnxTrtDemo.exe