TRT model usage code in Python

Hi,

I had a .pt model from a custom YOLOv5. Some days later I converted it to .onnx, and now I have tried to convert it to a .trt engine using this command:

trtexec --onnx=/home/nvidia/PycharmProjects/jetson-linuxsphu/nhb.onnx --saveEngine=/home/nvidia/PycharmProjects/jetson-linuxsphu/nhb.trt --fp16

The conversion was successful. I'm sharing the log file.
trt_convert.txt (31.8 KB)

But I don't know how to use it in Python code for inference. Can someone help me with sample code?

MY DEVICE SPECS:

Thanks

Hi,

Please find a related sample below:
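
For orientation, the overall flow in Python looks roughly like the following. This is a minimal sketch (not the sample itself) using the TensorRT 8.x Python API with pycuda; the engine path, input handling, and preprocessing are placeholders you would adapt:

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by trtexec.
with open("nhb.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding.
inputs, outputs, bindings = [], [], []
stream = cuda.Stream()
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(device_mem))
    (inputs if engine.binding_is_input(binding) else outputs).append((host_mem, device_mem))

# Placeholder input: replace with your preprocessed image, matching the
# input binding's shape and dtype (e.g. 1x3x640x640 float32 for YOLOv5).
img = np.zeros(inputs[0][0].shape, dtype=inputs[0][0].dtype)
np.copyto(inputs[0][0], img.ravel())

# Copy input in, execute, copy outputs back, all on one CUDA stream.
cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for host_mem, device_mem in outputs:
    cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
stream.synchronize()

trt_outputs = [host_mem for host_mem, _ in outputs]  # flat arrays, one per output

The outputs come back as flat 1-D arrays, which is why such scripts reshape them to the model's output shapes afterwards.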

Thanks.

Hi,

I tried it, but I'm getting this error:
/home/nvidia/PycharmProjects/jetson-linuxsphu/venv/bin/python /home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py
Traceback (most recent call last):
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 12, in <module>
    import common
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/common.py", line 25, in <module>
    from cuda import cuda, cudart
ModuleNotFoundError: No module named 'cuda'

Process finished with exit code 1

In the onnx_to_tensorrt.py file, I specified my .trt file and an image file for inference.

Attaching an image of the code the error points to.

The image above shows the relevant part of common.py from the same GitHub repo.

Hi,

Could you try installing it with the command below?

$ pip3 install pycuda --user
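
You can then check the installed version with, for example:

$ python3 -c "import pycuda; print(pycuda.VERSION)"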

Thanks.

nvidia@tegra-ubuntu:~$ pip3 install pycuda --user
Requirement already satisfied: pycuda in ./.local/lib/python3.8/site-packages (2024.1)
Requirement already satisfied: pytools>=2011.2 in ./.local/lib/python3.8/site-packages (from pycuda) (2023.1.1)
Requirement already satisfied: appdirs>=1.4.0 in ./.local/lib/python3.8/site-packages (from pycuda) (1.4.4)
Requirement already satisfied: mako in /usr/lib/python3/dist-packages (from pycuda) (1.1.0)
Requirement already satisfied: platformdirs>=2.2.0 in ./.local/lib/python3.8/site-packages (from pytools>=2011.2->pycuda) (4.1.0)
Requirement already satisfied: typing-extensions>=4.0 in ./.local/lib/python3.8/site-packages (from pytools>=2011.2->pycuda) (4.12.2)

[notice] A new release of pip is available: 23.3.2 -> 24.3.1
[notice] To update, run: python3 -m pip install --upgrade pip

Hi,

Sorry, it should be cuda-python.

$ sudo apt install python3-pip
$ pip3 install cuda-python
$ python3
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from cuda import cuda, cudart
>>>

Thanks.


Hi,
I used the same commands and cuda-python installed successfully, but when I run the code now it throws this error:

/home/nvidia/PycharmProjects/jetson-linuxsphu/venv/bin/python /home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py
Reading engine from file nhb.trt
Running inference on image 4.jpg...
Traceback (most recent call last):
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 160, in <module>
    main()
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 125, in main
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
  File "/home/nvidia/PycharmProjects/jetson-linuxsphu/pspp.py", line 125, in <listcomp>
    trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
ValueError: cannot reshape array of size 176400 into shape (1,255,19,19)

Process finished with exit code 1

I think this is because of a model mismatch: the example was written for YOLOv3, but mine is a custom YOLOv5 trained to detect 2 classes, so the output shapes differ. Is there any way to solve this? Please help.

Thanks

Hi,

The sample is open source, so you can modify the output shapes to match your model.
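
As a starting point, here is a minimal sketch (TensorRT 8.x binding API; the engine filename is an assumption) that prints every binding's shape, so the hard-coded YOLOv3 output_shapes in the script can be replaced with values that actually match your YOLOv5 engine:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Print each binding's direction, name, shape, and dtype.
with open("nhb.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i),
          engine.get_binding_shape(i), engine.get_binding_dtype(i))

For reference, an output of 176400 values would be consistent with YOLOv5's default single flattened detection output of 25200 x (5 + 2) for a 2-class model at 640x640 input, rather than YOLOv3's three grid-shaped outputs such as (1, 255, 19, 19).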

Thanks.