I tried it, but I am getting this error:
/home/nvidia/PycharmProjects/jetson-linuxsphu/venv/bin/python /home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py
Traceback (most recent call last):
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 12, in <module>
    import common
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/common.py", line 25, in <module>
    from cuda import cuda, cudart
ModuleNotFoundError: No module named 'cuda'

Process finished with exit code 1
In the onnx_to_tensorrt.py file, I had specified my .trt file and an image file for inference.
I am attaching an image showing where the error points.
$ python3
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from cuda import cuda, cudart
>>>
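For reference, the script above is launched with the PyCharm venv interpreter (/home/nvidia/PycharmProjects/jetson-linuxsphu/venv/bin/python), not the system python3, so cuda-python also has to be installed for that interpreter. A minimal check, nothing project-specific, it only inspects whichever interpreter is running:

import sys

print(sys.executable)  # the interpreter actually running the script (the venv one when started from PyCharm)
try:
    from cuda import cuda, cudart  # succeeds only if cuda-python is installed for this interpreter
    print("cuda-python is available")
except ModuleNotFoundError:
    print("cuda-python is missing; install it with: " + sys.executable + " -m pip install cuda-python")

If the import fails there, installing cuda-python with the venv's own pip (the command printed above) should clear the ModuleNotFoundError.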
Hi,
I used the same commands and cuda-python was installed successfully, but when I run the code now it throws this error:
/home/nvidia/PycharmProjects/jetson-linuxsphu/venv/bin/python /home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py
Reading engine from file nhb.trt
Running inference on image 4.jpg...
Traceback (most recent call last):
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 160, in <module>
    main()
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 125, in main
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 125, in <listcomp>
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
ValueError: cannot reshape array of size 176400 into shape (1,255,19,19)

Process finished with exit code 1
I think this is because of a model mismatch. The one used in the example was YOLOv3, but I have a custom YOLOv5 trained to detect 2 classes. Is there any way to solve this? Please help.
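That is the likely cause: the output_shapes list in the sample is hard-coded for the COCO YOLOv3 engine (255 = 3 × (80 + 5) channels per grid cell, hence shapes like (1, 255, 19, 19)), while a 2-class YOLOv5 exported at 640×640 typically produces a single output of shape (1, 25200, 7), and 25200 × 7 = 176400 matches the array size in the error. Rather than guessing, you can read the real input/output shapes from the engine itself and set output_shapes accordingly. A minimal sketch, assuming TensorRT 8.5 or newer (where the tensor-name API is available) and using the engine file name from your log:

import tensorrt as trt

ENGINE_PATH = "nhb.trt"  # engine file name taken from the log above

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# List every I/O tensor so output_shapes in pspp.py can be set to match the
# real engine instead of the YOLOv3 shapes copied from the sample.
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    mode = engine.get_tensor_mode(name)   # trt.TensorIOMode.INPUT or .OUTPUT
    shape = engine.get_tensor_shape(name)
    print(mode.name, name, tuple(shape))

Whatever shapes this prints are what the reshape on line 125 has to use. Note that the YOLOv5 output layout (detections × (x, y, w, h, objectness, class scores)) also differs from the YOLOv3 grid outputs, so the sample's post-processing has to be adapted as well.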