TRT 8 error - Error Code 2: Internal Error (Assertion asCopyingLeafNode() ? candidateChoices.empty() : !candidateChoices.empty() failed.)

Description

I have an ONNX model that was generated from PyTorch.
After the parser completes successfully, the following runtime error is raised while the TRT engine starts its optimization:

2: [optimizer.cpp::isPolymorphic::1018] Error Code 2: Internal Error (Assertion asCopyingLeafNode() ? candidateChoices.empty() : !candidateChoices.empty() failed.)

Environment

TensorRT Version: 8.0.1 (C++ API)
GPU Type: GeForce GTX 1080
Nvidia Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: 8.2.1
Operating System + Version: Linux Ubuntu 18.04
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag): Baremetal
ONNX IR version: 0.0.6
Opset version: 13

Steps To Reproduce

I couldn’t load the model; each time I tried, I got an error.
Is there a size limitation on loadable files?
My model size is ~120 MB.

To reproduce, you only need to load the ONNX file into TensorRT and set the following:

  • Input name - input.1

  • Output name - 1651

Then run the parser and call buildEngineWithConfig.
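For reference, the same flow can be sketched with the TensorRT 8 Python API, whose calls mirror the C++ ones described above (parse the ONNX file, then the Python counterpart of buildEngineWithConfig). This is a hedged sketch, not the exact code from this report; the ONNX path is a placeholder.

```python
def build_engine_from_onnx(onnx_path):
    """Sketch of the build flow: parse an ONNX file, then build the engine."""
    import tensorrt as trt  # deferred import so the sketch reads without TRT installed

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    # ONNX models require an explicit-batch network definition
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Print parser errors instead of building a broken network
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB scratch space
    # Python counterpart of the C++ buildEngineWithConfig call
    return builder.build_engine(network, config)
```

The assertion failure reported above occurs inside this final build call, after parsing has already succeeded.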

Attached are all the TRT runtime reports from the run:
LogFile0_Error.txt (707.7 KB)

I saw one strange report:

Setting dynamic range is only allowed when there are no Q/DQ layers in the Network

Could you please clarify this message?
Is it related to the error in this topic?

Regards,

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the below snippet:

check_model.py

import onnx

filename = "yourONNXmodel"  # replace with the path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, we request you to share the trtexec “--verbose” log for further debugging.
Thanks!

Hello,
Thanks for your quick response.
We will share our ONNX model ASAP; we are editing it to minimize its size while still being able to reproduce the problem.

In the meantime, I checked the model with check_model.py and no errors were reported, which means the check passed OK.

Also, I ran trtexec as you asked, and the same error was raised:
trtexecVerbose.txt (383.5 KB)

Thanks,

@orong13,

Could you please check whether your model has the same problem and, if so, correct it.

Also for your reference, Questions about int8 inference procedure · Issue #1271 · NVIDIA/TensorRT · GitHub

Please share with us ONNX model if you still face this issue.

Thank you.

Hello,
Attached is our ONNX model:
trt8_trial_aciq_quant_symmetric_random_weights_reduced_backbone_min.onnx (18.7 MB)

Additionally, we didn’t find the references helpful; we couldn’t confirm that the problems are the same.

Thanks for your help,

@orong13,

We could reproduce the same error. Please allow us some time to get back on this.