I wrote some Python code that runs a modified version of the ‘speed_estimation.py’ routine from the ‘Solutions’ folder of Ultralytics YOLOv8.
I’ve been running this on a Windows machine, an i5 with integrated Intel HD Graphics. The code runs fine, but slowly. So I bought a reComputer J1020, hoping that the GPU cores would give an improvement. (The J1020 is essentially a packaged Jetson Nano. JetPack is 4.6.4-b39, Python is 3.6.9.)
It is all very confusing. Why would I need DeepStream? And it seems that pip under Python 3.6.9 can’t handle the pyproject.toml-based build that installing YOLOv8 requires.
Put simply, what I want to do is run my YOLOv8 code on the J1020 using a TensorRT model (an .engine model exported from the .pt model), so that inference actually runs on the GPU cores rather than the CPU.
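For what it’s worth, once Ultralytics is installed the export-and-run step itself is short. A minimal sketch, assuming Ultralytics and a Jetson-compatible PyTorch/TensorRT stack are already working (the model and image filenames here are just examples):

```python
# Sketch only: assumes `ultralytics` is installed and TensorRT is
# available via JetPack. File names are placeholders.
from ultralytics import YOLO

# Export the PyTorch weights to a TensorRT engine. Do this ON the
# Jetson itself: .engine files are built for the specific GPU and
# TensorRT version, so one exported on a PC will not load on the Nano.
model = YOLO("yolov8n.pt")
model.export(format="engine", device=0)  # writes yolov8n.engine

# Load the engine and run inference on the GPU.
trt_model = YOLO("yolov8n.engine")
results = trt_model("sample_frame.jpg", device=0)
```

A speed-estimation script would then simply point at the .engine file instead of the .pt file; DeepStream is a separate NVIDIA pipeline framework and is not required for this route.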
Can anybody explain (simply) how I can do this successfully?
As the J1020 carrier board has an M.2 slot, I can alternate between booting the original installation from the built-in SD card and any ‘experiment’ from a much larger SSD. Luxury!
I’ll update this thread when I make progress. Thanks again!
Since Ultralytics requires at least Python 3.8, it’s not straightforward to install on the Jetson Nano, whose default Python is 3.6.
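One possible workaround, sketched below under the assumption that Ubuntu 18.04 (the base of JetPack 4.x) still ships a python3.8 package, is to install Python 3.8 alongside the system Python 3.6 and use a virtual environment:

```shell
# Assumption: Ubuntu 18.04 repos provide python3.8 packages.
sudo apt-get update
sudo apt-get install -y python3.8 python3.8-venv python3.8-dev

# Isolated environment, so the system Python 3.6 stays untouched.
python3.8 -m venv ~/yolo-env
source ~/yolo-env/bin/activate
pip install --upgrade pip
pip install ultralytics
```

Be aware that NVIDIA’s prebuilt PyTorch wheels for JetPack 4 target Python 3.6, so a GPU-enabled PyTorch for Python 3.8 on the Nano may need to be built from source or obtained from a community wheel; that is often the hardest part of this route.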
If you get it to work, it would be great if you could share the steps with us, since that will help other Nano users as well.