Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)

Environment

TensorRT Version: 6.0.0.1
GPU Type: NVIDIA A40
Nvidia Driver Version: 510.47.03
CUDA Version: 10.1
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 18.04

Python Version (if applicable): 3.6.8
TensorFlow Version (if applicable): 2.5.1
PyTorch Version (if applicable):
Baremetal or Container (if container, which image + tag): nvcr.io/nvidia/tensorflow:19.10-py3

trtexec --onnx=albert_best_model_cluener_efficientglobalpointer_tf32.onnx --saveEngine=albert_best_model_cluener_efficientglobalpointer_tf32.trt --workspace=6000 --verbose=True --fp16
albert_best_model_cluener_efficientglobalpointer_tf32.onnx (38.9 MB)

TensorRT Version: 8.0.0.1
GPU Type: NVIDIA A40
Nvidia Driver Version: 510.47.03
CUDA Version: 10.1
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 18.04

Hi,
Please refer to the links below for custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

I wonder if this is due to the two inputs or the dynamic dimensions? Thanks!

Hi

Looks like you’re using an old version of TensorRT. We recommend upgrading to the latest TensorRT version.

Also, please export the ONNX model with opset 13.

If you still face this issue, we recommend running the model through onnx-simplifier and then converting the simplified model using trtexec.
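A typical invocation of that workflow looks like the following; the file names here are placeholders, so substitute your own model:

```shell
# Install onnx-simplifier (assumes pip is available in your environment)
python3 -m pip install onnx-simplifier

# Fold constants and remove redundant nodes from the graph
# (model.onnx / model_sim.onnx are placeholder names)
python3 -m onnxsim model.onnx model_sim.onnx

# Rebuild the TensorRT engine from the simplified model
trtexec --onnx=model_sim.onnx --saveEngine=model_sim.trt --workspace=6000 --fp16 --verbose
```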

Thank you.


Hi, with TensorRT version 8.0.0.1 I still get another error, and I don’t know how to solve it. Thank you!

Hi,

We are unable to reproduce the same issue. We recommend using the latest TensorRT version, 8.4 EA.

[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:749: --- Begin node ---
[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:750: input: "Reshape__3501:0"
input: "Slice__3497:0"
output: "Mod__3503:0"
name: "Mod__3503"
op_type: "Mod"
domain: ""

[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:751: --- End node ---
[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:754: ERROR: builtin_op_importers.cpp:4951 In function importFallbackPluginImporter:
[8] Assertion failed: creator && “Plugin not found, are the plugin name, version, and namespace correct?”
[04/06/2022-04:05:14] [E] Failed to parse onnx file
[04/06/2022-04:05:14] [I] Finish parsing network model
[04/06/2022-04:05:14] [E] Parsing model failed
[04/06/2022-04:05:14] [E] Failed to create engine from model.
[04/06/2022-04:05:14] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8400] # /usr/src/tensorrt/bin/trtexec --onnx=albert_best_model_cluener_efficientglobalpointer_tf32.onnx --verbose

Looks like your model uses the operator Mod, which is not supported. We recommend implementing a custom plugin for it. Please refer to the link below for custom plugin implementation and a sample:

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC
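If writing a plugin is too heavy, note that Mod can often be rewritten with operators the ONNX parser already imports. For ONNX Mod with fmod=0 (floor-mod, sign follows the divisor), a mod b equals a - floor(a/b) * b, i.e. a combination of Div, Floor, Mul, and Sub. A small NumPy sketch of the identity, with illustrative values only:

```python
import numpy as np

def mod_via_supported_ops(a, b):
    # Emulates ONNX Mod (fmod=0 semantics) using only Div, Floor,
    # Mul, and Sub, all of which the TensorRT ONNX parser supports.
    return a - np.floor(a / b) * b

a = np.array([7.0, -7.0, 5.0])
b = np.array([3.0, 3.0, -2.0])
print(mod_via_supported_ops(a, b))  # elementwise floor-mod; agrees with np.mod(a, b)
```

Rewriting the Mod nodes this way in the exported graph (for example with ONNX GraphSurgeon) avoids the plugin entirely; note that this matches ONNX Mod only for the fmod=0 case.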

Thank you.

Hi, I get the error below and I don’t know how to solve it. Can you help me? Thank you!


output5.onnx (39.1 MB)
TensorRT version 8.4 EA

Hi,

Could you please share the complete verbose logs with us, generated with the trtexec --verbose option?
Thank you.

Hi, there is a lot of output. Do you want to see all of it?

Yes. Could you please dump it into a file as shown below? Adjust the command to match the one you are using.

trtexec --onnx=output5.onnx --verbose --workspace=5000 &> logs.txt

logs.txt (2.7 MB)
Thanks

Thank you for sharing the logs and reporting this issue. Our team will work on this.
Please allow us some time.

OK, thank you!

Do you have any update on this? Thanks!