onnx-tensorrt Windows make -j issue

Description

So I’ve configured onnx-tensorrt using:
cmake -G "MinGW Makefiles" … -DTENSORRT_ROOT=C:\TensorRT-8.2.4.2
and the CMake step completed successfully.
I installed protobuf with vcpkg, and also by extracting it from protobuf/releases and adding the protoc.exe bin directory to PATH; both approaches gave me the same error, only with different paths.
However, when I run make -j:
C:\onnx-tensorrt\build>make -j
the build fails. I’ve attached the output of the make -j attempt as error.txt.

Environment

TensorRT Version: 8.2.4.2
GPU Type: NVIDIA GTX 1060 Max-Q
Nvidia Driver Version: 512.59
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: Windows 10
Python Version (if applicable): 11.4
TensorFlow Version (if applicable): 2.8
PyTorch Version (if applicable): not used
Baremetal or Container (if container which image + tag):

Relevant Files

error.txt (71.3 KB)

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Could you please share the ONNX model and the script, if you haven’t already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Load and validate the ONNX model; check_model raises if the graph is invalid.
filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

If you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

model.onnx (11.7 MB)
That’s the ONNX model. One more note: this .onnx was converted from a .pb file built from a MobileNetV2 SSD with TensorFlow 2.5.0, created using the TensorFlow Object Detection API. I used the saved model that sits in the same directory as the variables folder.
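(For reference, exporting an Object Detection API SavedModel to ONNX is usually done with tf2onnx along these lines; the paths and opset here are placeholders, not necessarily the exact command used:
python -m tf2onnx.convert --saved-model saved_model --output model.onnx --opset 13)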
model2.onnx (10.4 MB)
This model2.onnx is the exported version of model.onnx.
check_model.py (110 Bytes)
That’s the check script; it produced no output for either model.
And here is the log of the command: trtexec --onnx=model.onnx --verbose
CommandError.txt (9.7 KB)
The log output for the second model: trtexec --onnx=model2.onnx --verbose
2nderror.txt (9.7 KB)

I made an adjustment following this link: Error converting onnx model to a tensorrt: Unsupported ONNX data type: UINT8 (2) · Issue #1022 · NVIDIA/TensorRT (github.com)
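Roughly, the change that issue describes is switching the graph’s UINT8 input to FLOAT. A minimal sketch of that kind of edit with the onnx package (the exact input handling may differ from what I actually did):

import onnx
from onnx import TensorProto

model = onnx.load("model2.onnx")

# TensorRT's ONNX parser rejects UINT8 graph inputs, so switch them to FLOAT.
for graph_input in model.graph.input:
    if graph_input.type.tensor_type.elem_type == TensorProto.UINT8:
        graph_input.type.tensor_type.elem_type = TensorProto.FLOAT

onnx.checker.check_model(model)
onnx.save(model, "updated_model.onnx")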
So now I have a new model:
updated_model.onnx (10.4 MB)
but a new error comes up while executing: trtexec --onnx=updated_model.onnx --verbose

[05/05/2022-02:49:19] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[05/05/2022-02:49:19] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4519 In function importTopK:
[8] Assertion failed: (inputs.at(1).is_weights()) && "This version of TensorRT only supports input K as an initializer."
[05/05/2022-02:49:19] [E] Failed to parse onnx file
[05/05/2022-02:49:19] [I] Finish parsing network model
[05/05/2022-02:49:19] [E] Parsing model failed
[05/05/2022-02:49:19] [E] Failed to create engine from model.
[05/05/2022-02:49:19] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8204] # trtexec --onnx=updated_model.onnx --verbose
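
The assertion means this TensorRT version only accepts the K input of TopK when it is a constant initializer baked into the graph. A rough, unverified sketch of forcing a constant K with onnx-graphsurgeon (the K value of 100 is an assumption, not taken from this model):

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("updated_model.onnx"))

# Replace the dynamic K input of every TopK node with a constant initializer,
# since this TensorRT version only supports K as an initializer.
for node in graph.nodes:
    if node.op == "TopK":
        node.inputs[1] = gs.Constant("K", values=np.array([100], dtype=np.int64))

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "updated_model_const_k.onnx")

Constant folding (for example with polygraphy surgeon sanitize --fold-constants) is another route that is sometimes suggested for this kind of error.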

Hi,

Please post your concern on Issues · onnx/onnx-tensorrt · GitHub to get better help.

Thank you.