While exporting my YOLOv7 ONNX model to TensorRT, I am getting the below error:
AttributeError: 'NoneType' object has no attribute 'execute_v2'
at this line: context.execute_v2(list(binding_addrs.values()))
TensorRT Version: 126.96.36.199
GPU Type: Tesla V100
Nvidia Driver Version: 460.32.0
CUDA Version: 11.2
Operating System + Version: Google Colab
Python Version (if applicable): 3.7.13
PyTorch Version (if applicable): 1.12.1+cu113
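For what it's worth, this AttributeError usually means the execution context (or the engine it comes from) is None: in the TensorRT Python API, deserialize_cuda_engine can silently return None when the engine file is incompatible with the current TensorRT version or GPU. A minimal defensive-loading sketch (the helper names check_not_none and load_engine are illustrative, not from the notebook):

```python
def check_not_none(obj, step):
    """Raise a descriptive error instead of letting None leak into execute_v2."""
    if obj is None:
        raise RuntimeError(f"TensorRT step failed: {step}")
    return obj

def load_engine(engine_path):
    """Deserialize a TensorRT engine, failing loudly at each step.

    TensorRT is imported inside the function so the guard helper above
    can be exercised on machines without TensorRT installed.
    """
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f:
        data = f.read()
    runtime = check_not_none(trt.Runtime(logger), "creating Runtime")
    # deserialize_cuda_engine returns None if the engine was serialized with
    # an incompatible TensorRT version or on a different GPU/platform.
    engine = check_not_none(runtime.deserialize_cuda_engine(data),
                            "deserializing engine")
    context = check_not_none(engine.create_execution_context(),
                             "creating execution context")
    return engine, context
```

With this guard in place, the failure surfaces at deserialization with a clear message instead of as an opaque AttributeError at execute_v2.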
Please provide access to the repo so we can verify.
Also, please make sure you're using a TensorRT engine built on the same platform you're running the inference on.
TensorRT does not support engine portability across platforms.
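For reference, one way to rebuild the engine on the target machine itself is to run trtexec there against the exported ONNX file (the file names below are illustrative):

```shell
# Rebuild the TensorRT engine on the machine that will run inference;
# an engine serialized on a different GPU or TensorRT version may fail
# to deserialize (returning None) at load time.
trtexec --onnx=yolov7.onnx \
        --saveEngine=yolov7.engine \
        --fp16   # optional: build FP16 kernels if the GPU supports them
```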
I have used the same Colab notebook to train the custom YOLOv7 model, and I used the same notebook to convert the trained YOLOv7 weights to ONNX; that works fine. The only issue I faced is with the TensorRT conversion. Please find my notebook link below: notebook
Sorry for the delay in the update.
I couldn't reproduce the issue you've mentioned; I could successfully run inference on the generated TensorRT engine using the script you shared (by the way, I used the latest TensorRT version, 8.4.3). Could you please point me to the right steps to reproduce the issue?