Hello,
I’m experiencing some issues while working with YOLOv9 and TensorRT on my Jetson Orin Nano and would greatly appreciate any help or guidance.
TensorRT Version: 8.6.2
GPU Type: NVIDIA Jetson Orin Nano (integrated GPU)
Nvidia Driver Version: 540.3.0
CUDA Version: 12.2
CUDNN Version: 8.9.4.25
Operating System + Version: Ubuntu 22.04.4 LTS
Python Version (if applicable): 3.10.12
PyTorch Version: 2.2.0a0+81ea7a4
Issues Encountered:
While working with YOLOv9 and TensorRT, I’m encountering the following errors:
DeprecationWarnings:
/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py:175: DeprecationWarning: Use get_tensor_shape instead.
/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py:176: DeprecationWarning: Use get_tensor_shape instead.
/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py:177: DeprecationWarning: Use get_tensor_dtype instead.
/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py:185: DeprecationWarning: Use get_tensor_shape instead.
/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py:186: DeprecationWarning: Use get_tensor_shape instead.
Solution Attempted: I replaced the deprecated methods with their newer equivalents:
engine.get_binding_shape(binding) → engine.get_tensor_shape(name)
engine.get_binding_dtype(binding) → engine.get_tensor_dtype(name)
(Note that the get_tensor_* methods take the tensor name as a string rather than a binding index, so the name has to be looked up first with engine.get_tensor_name(i).)
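In case it helps anyone hitting the same warnings, the name-based lookup can be wrapped in one small helper. This is only a sketch: the engine argument is duck-typed (anything with get_tensor_name / get_tensor_shape / get_tensor_dtype works), so it is not tied to my actual engine file.

```python
def tensor_info(engine, index):
    """Look up a tensor by its old binding index using the name-based
    API that replaces the deprecated get_binding_* methods."""
    name = engine.get_tensor_name(index)          # index -> tensor name
    shape = tuple(engine.get_tensor_shape(name))  # shapes are keyed by name
    dtype = engine.get_tensor_dtype(name)         # and so is the dtype
    return name, shape, dtype
```

With this in place, the loop in yolov9_trt.py only ever calls the non-deprecated methods.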
AttributeError: 'Yolov9' object has no attribute 'output_dim':
Error in do_infer: 'Yolov9' object has no attribute 'output_dim'
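This error suggests that do_infer reads self.output_dim but __init__ never assigns it. One way to avoid hard-coding the value is to derive the output shapes from the engine itself at construction time. A sketch only (duck-typed so it runs without a live engine; with a real TensorRT engine you would pass trt.TensorIOMode.OUTPUT as output_mode, and collect_output_dims is my own name, not something from the repo):

```python
def collect_output_dims(engine, output_mode):
    """Map every output tensor name to its shape.

    Storing the result as self.output_dims in __init__ means do_infer
    never touches an attribute that was never set.
    """
    dims = {}
    for i in range(engine.num_io_tensors):
        name = engine.get_tensor_name(i)
        if engine.get_tensor_mode(name) == output_mode:
            dims[name] = tuple(engine.get_tensor_shape(name))
    return dims
```

Calling this once in __init__ and reading the stored dict in do_infer should make the AttributeError impossible.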
TypeError: 'NoneType' object is not iterable
Traceback (most recent call last):
File "/home/gokdeniz2004/yolov9-tensorrt/yolov9_trt.py", line 243, in
draw_detect_results(img, detect_results)
File "/home/gokdeniz2004/yolov9-tensorrt/python/draw_AI_results.py", line 24, in draw_detect_results
for r in results:
TypeError: 'NoneType' object is not iterable
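I believe this TypeError is a knock-on effect: do_infer hits the AttributeError above, returns None, and draw_detect_results then iterates over None. Until the root cause is fixed, a small guard keeps the script from crashing on failed frames. This is just a sketch; safe_results is my own helper name, not part of the repo:

```python
def safe_results(results):
    """Treat a failed inference (None) as zero detections, so that
    `for r in safe_results(...)` never raises TypeError."""
    return [] if results is None else list(results)
```

The loop in draw_AI_results.py could then read `for r in safe_results(results):` instead of iterating the raw value.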
PyCUDA ERROR: The context stack was not empty upon module cleanup:
PyCUDA ERROR: The context stack was not empty upon module cleanup.
A context was still active when the context stack was being cleaned up.
At this point in our execution, CUDA may already have been deinitialized,
so there is no way we can finish cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
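As the message says, the fix is to pop the context before the interpreter shuts down. A context manager guarantees the pop even when an exception (like the TypeError above) unwinds the stack mid-inference. Sketch only: the device object is duck-typed here; with PyCUDA you would pass e.g. pycuda.driver.Device(0), or alternatively rely on pycuda.autoinit, which registers the cleanup for you.

```python
class CudaContextGuard:
    """Push a CUDA context on entry and pop it on exit, even if the
    body raises, so the context stack is empty at module cleanup."""

    def __init__(self, device):
        self.device = device
        self.ctx = None

    def __enter__(self):
        self.ctx = self.device.make_context()  # pushes onto the context stack
        return self.ctx

    def __exit__(self, exc_type, exc, tb):
        self.ctx.pop()   # matching pop: stack is clean afterwards
        return False     # do not swallow exceptions
```

Wrapping the whole inference loop in `with CudaContextGuard(device):` should make the "context stack was not empty" abort go away regardless of how the script exits.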
Thank you for your help.