AttributeError: 'NoneType' object has no attribute 'execute_v2'

Description

While converting my YOLOv7 ONNX model to TensorRT, I am getting the error below:

AttributeError: 'NoneType' object has no attribute 'execute_v2'

The error occurs at this line: context.execute_v2(list(binding_addrs.values()))
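For reference, this AttributeError means the execution context itself is None, which usually traces back to engine deserialization or context creation silently failing. A minimal defensive sketch (the helper name build_context is ours, not from the shared script) that surfaces the failure at its source:

```python
# Sketch: wrap engine and context creation with explicit None checks so the
# failure is reported where it happens, instead of later as a NoneType
# AttributeError at execute_v2. `runtime` is expected to behave like
# tensorrt.Runtime; the helper name is hypothetical.
def build_context(runtime, serialized_engine):
    engine = runtime.deserialize_cuda_engine(serialized_engine)
    if engine is None:
        # Common causes: the engine was built with a different TensorRT
        # version, or on different hardware than the inference machine.
        raise RuntimeError(
            "Engine deserialization failed: check that the TensorRT version "
            "and GPU match the build environment"
        )
    context = engine.create_execution_context()
    if context is None:
        raise RuntimeError("create_execution_context() returned None")
    return context
```

With a guard like this, a version or platform mismatch fails with a clear message at load time rather than at the execute_v2 call.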

Environment

TensorRT Version: 8.4.3.1
GPU Type: Tesla V100
Nvidia Driver Version: 460.32.0
CUDA Version: 11.2
Operating System + Version: Google Colab
Python Version (if applicable): 3.7.13
PyTorch Version (if applicable): 1.12.1+cu113

Relevant Files

files

Hi,
Please refer to the link below for the sample guide.

Refer to the installation steps in the link in case you are missing anything.

However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency issues.

To run the Python samples inside the NGC container, make sure the TRT Python packages are installed:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

I have already shared the model and the inference script I am using. Please take a look.

Have you checked the shared model and inference script?

Hi,

Please provide access to the repo. We will verify.
Also, please make sure you are running inference with a TensorRT engine built on the same platform you are inferring on.
TensorRT engines are not portable across platforms.
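Since an engine is tied to the environment it was built in, one way to catch a mismatch early is to record the build environment next to the engine file and compare it at load time. A minimal sketch (the helper names and sidecar-file layout are our own, not part of TensorRT):

```python
import json
import pathlib

def save_build_info(engine_path, trt_version, gpu_name):
    # Write the build environment next to the engine,
    # e.g. model.engine -> model.json
    info = {"tensorrt_version": trt_version, "gpu": gpu_name}
    pathlib.Path(engine_path).with_suffix(".json").write_text(json.dumps(info))

def check_build_info(engine_path, trt_version, gpu_name):
    # Return a list of mismatches between the build-time and runtime
    # environments; an empty list means the engine matches this machine.
    info = json.loads(pathlib.Path(engine_path).with_suffix(".json").read_text())
    mismatches = []
    if info["tensorrt_version"] != trt_version:
        mismatches.append(
            f"TensorRT: built with {info['tensorrt_version']}, running {trt_version}"
        )
    if info["gpu"] != gpu_name:
        mismatches.append(f"GPU: built on {info['gpu']}, running on {gpu_name}")
    return mismatches
```

Checking this before deserializing makes a platform mismatch an explicit error instead of a None engine later on.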

Thank you.

I used the same Colab notebook to train the custom YOLOv7 model and to convert the trained weights to ONNX, and that works fine. The only issue I face is with the TensorRT conversion. Please find my notebook link below:
notebook

Are you able to reproduce the issue? The notebook link is Google Colab

And onnx file link is https://drive.google.com/file/d/1-6ncrm71XD5RLDY6h39RiQMN3VcyVh-9/view?usp=sharing

@spolisetty have you got time to see this issue?

Hi,

Sorry for the delay in the update.
I couldn't reproduce the issue you mentioned; I was able to run inference successfully on the generated TensorRT engine using the script you shared (by the way, I used the latest TensorRT version, 8.4.3). Could you please point me to the exact repro steps?

Thank you.

I am also able to run the code successfully now, so there is no need to check this issue further.

I'm getting the same error; how did you solve this problem?