[5] Assertion failed: tensors.count(input_name)

Description

Hello, I’m trying to build a TensorRT engine from an ONNX file with the `trtexec` command. I obtained the ONNX file by converting a Darknet YOLOv4 model to ONNX. I ran into a problem I could not wrap my head around.

Error:
&&&& RUNNING TensorRT.trtexec # /home/tw34/Downloads/TensorRT-5.1.5.0/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx
[I] onnx: yolov4_1_3_416_416_static.onnx

Input filename: yolov4_1_3_416_416_static.onnx
ONNX IR version: 0.0.4
Opset version: 11
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
While parsing node number 0 [Conv]:
ERROR: ModelImporter.cpp:288 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[E] failed to parse onnx file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /home/tw34/Downloads/TensorRT-5.1.5.0/bin/trtexec --onnx=yolov4_1_3_416_416_static.onnx

Any help or explanation would be highly appreciated, thanks!

Environment

TensorRT Version: 5.1.5.0 (per the trtexec path in the log above)
GPU Type: RTX 2070
Nvidia Driver Version: 450.66
CUDA Version: 10.1
CUDNN Version: 7.6.2
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 2.0.0.rc1
PyTorch Version (if applicable): 1.4.0
Baremetal or Container (if container which image + tag):

Relevant Files

I uploaded the ONNX file to Google Drive for anyone who would like to check it out

Steps To Reproduce

The command I used to get this error is “/usr/src/tensorrt/bin/trtexec --onnx=./yolov4_1_3_416_416_static.onnx”

Hi @tate16453,
It looks like you are using an older version of TensorRT.
We request you to upgrade to the latest TRT version.
https://developer.nvidia.com/nvidia-tensorrt-7x-download#trt72

Thanks

Hello AakankshaS,
Thank you for your reply. I’ve tried upgrading TensorRT to 6.0.1.5, since I plan to keep using CUDA 10.1. I reran the command and got a different error instead.
WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 123 [Reshape]:
ERROR: builtin_op_importers.cpp:1616 In function importReshape:
[8] Assertion failed: get_shape_size(new_shape) == get_shape_size(tensor.getDimensions())
[09/14/2020-10:44:47] [E] Failed to parse onnx file
[09/14/2020-10:44:47] [E] Parsing model failed
[09/14/2020-10:44:47] [E] Engine could not be created

Does this mean that I need to use the TensorRT 7 and change my CUDA version?
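
For context on what this second assertion means: TensorRT’s ONNX parser rejects a Reshape whose target shape implies a different total element count than the input tensor has. A minimal sketch of that check (the shapes below are illustrative YOLO-style tensors, not values read from this model):

```python
from functools import reduce
from operator import mul

def get_shape_size(shape):
    # Total number of elements implied by a shape (product of all dims),
    # approximating the check in TensorRT's importReshape.
    return reduce(mul, shape, 1)

# A Reshape is only valid when the element counts match.
old_shape = (1, 255, 13, 13)
good_new  = (1, 3, 85, 13, 13)   # 3 * 85 == 255, so the sizes match
bad_new   = (1, 3, 85, 26, 26)   # four times as many elements -> rejected

print(get_shape_size(old_shape) == get_shape_size(good_new))  # True
print(get_shape_size(old_shape) == get_shape_size(bad_new))   # False
```

In practice this mismatch often comes from an exporter/parser incompatibility rather than a genuinely malformed model, which is why upgrading TensorRT (or re-exporting the ONNX) tends to resolve it.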

Hi @tate16453,
Kindly check the CUDA compatibility for your version of TRT from here.
If you are using TRT 7.2, the release supports CUDA 10.2 for Jetson and CUDA 11.0 update 1 for x86 and PowerPC.

Thanks!
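
The constraint above can be sketched as a simple lookup. Only the pairings mentioned in this thread are listed; the official TensorRT support matrix is the authoritative source:

```python
# TRT version -> platform -> supported CUDA versions.
# Entries cover only the pairings mentioned in this thread;
# consult NVIDIA's support matrix for complete data.
TRT_CUDA_SUPPORT = {
    "7.2": {"x86": ["11.0"], "jetson": ["10.2"], "powerpc": ["11.0"]},
}

def cuda_supported(trt_version, platform, cuda_version):
    return cuda_version in TRT_CUDA_SUPPORT.get(trt_version, {}).get(platform, [])

print(cuda_supported("7.2", "x86", "11.0"))   # True
print(cuda_supported("7.2", "x86", "10.1"))   # False
```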

Hello @AakankshaS,

Thank you very much for your suggestion. I took a similar approach and used the latest TensorRT release from the NGC container. It works, thanks again!
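
For anyone landing here later, the container route looks roughly like this. The image tag below is only an example; pick a current tag from the NGC catalog, and note that the trtexec location inside the image may differ between releases:

```shell
# Pull a TensorRT container from NGC (tag is an example; check the catalog)
docker pull nvcr.io/nvidia/tensorrt:20.09-py3

# Run it with GPU access, mounting the directory that holds the ONNX file
docker run --gpus all -it --rm -v "$(pwd)":/workspace \
    nvcr.io/nvidia/tensorrt:20.09-py3

# Inside the container, run trtexec against the model
# (if trtexec is not on the PATH, look under /usr/src/tensorrt/bin)
trtexec --onnx=/workspace/yolov4_1_3_416_416_static.onnx
```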