Could you please give us more details? TensorRT version, GPU, platform, CUDA version, and driver version.
Please try the following and share the complete logs with us, and if possible a minimal repro ONNX model and scripts.
mainboard.log.INFO.20221017-160406.3087681 (1.9 MB)
Here is the full log. The CUDA environment is installed through JetPack 5.0. There is no repro ONNX model available, since the network is constructed using the TensorRT C++ API together with a PyTorch .pt model.
As for the referenced issue, the problem posted on GitHub happens during the inference phase, after the building phase completed successfully; my error, however, occurred during the building phase itself. So I don't think checking the enqueue function will solve my issue.
Let me know if you have any suggestions. Many thanks.