[pluginV2Runner.cpp::execute::265] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )

I upgraded my TensorRT 7 plugin C++ code to be compatible with TensorRT 8, and I keep getting this error while building the engine.

I’m hoping someone can tell me more about this error message, or point me to documentation that explains it. Does it mean that the build process failed while processing a PluginV2?

Any suggestion is highly appreciated. Thanks in advance.


Could you please give us more details:
TensorRT version, GPU, platform, CUDA version, driver version.
Please also share the complete logs and, if possible, a minimal issue-repro ONNX model and scripts.

Thank you.

mainboard.log.INFO.20221017-160406.3087681 (1.9 MB)
Here is the full log. The CUDA environment was installed through JetPack 5.0. There is no repro ONNX model available, since the network is constructed using the TensorRT C++ API and a PyTorch .pt model.

As for the reference, the issue posted on GitHub happens during the inference phase and its build phase was successful, but my error occurred during the build phase. So I don’t think checking the enqueue function will solve my issue.

Let me know if you have any suggestions. Many thanks.

GPU: Orin
CUDA: 11.4.166


Based on the logs, the error’s timestamp looks different from the other log messages.

I1017 16:05:05.494969 3087681 rt_net.cc:37] =============== Computing costs for 
I1017 16:05:05.494993 3087681 rt_net.cc:37] *************** Autotuning format combination: Float(414720,11520,8,1) -> Float(414720,11520,8,1) ***************
I1017 16:05:05.495082 3087681 rt_net.cc:37] --------------- Timing Runner: cls_pred_prob (PluginV2)
I1017 16:05:05.546500 3087681 rt_net.cc:37] Deleting timing cache: 849 entries, served 641 hits since creation.
I1017 13:59:37.946795 1477744 rt_net.cc:37] [mainboard]2: [pluginV2Runner.cpp::execute::265] Error Code 2: Internal Error (Assertion status == kSTATUS_SUCCESS failed. )

Could you please share with us a minimal issue-repro script/model/steps so we can try it on our end for better debugging.

Thank you.