YOLOv8 on Jetson Nano with CUDA Support

I am trying to run YOLOv8 on a Jetson Nano with CUDA acceleration.
To do this I have to switch from Python 3.6 to Python 3.8, which means I can no longer use the prebuilt torch library that comes with CUDA support.
Has anyone made an effort to build PyTorch on the Jetson Nano with Python 3.8?

Hi,

You can find the instructions for building the PyTorch library in the below topic:

> Instructions
>> Build from Source
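For reference, the "Build from Source" path generally looks like the sketch below. This is an illustrative outline only, assuming a Jetson Nano (compute capability 5.3) and the environment variables commonly used when building PyTorch on Jetson; the exact flags and version numbers should be taken from the linked instructions.

```shell
# Illustrative sketch of building PyTorch from source on a Jetson Nano.
# Flags below are assumptions based on common Jetson build setups --
# follow the official topic for the authoritative list.

# Disable features the Nano doesn't need/support
export USE_NCCL=0
export USE_DISTRIBUTED=0
# Build CUDA kernels only for the Nano's GPU (Maxwell, sm_53)
export TORCH_CUDA_ARCH_LIST="5.3"
# Limit parallel compile jobs so the 4 GB Nano doesn't run out of memory
export MAX_JOBS=1

git clone --recursive --branch v1.10.1 https://github.com/pytorch/pytorch
cd pytorch
pip3 install -r requirements.txt
python3 setup.py bdist_wheel   # wheel ends up in dist/
```

`MAX_JOBS` is the standard PyTorch build knob for compile parallelism; on a 4 GB board it is usually the difference between finishing and being OOM-killed.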

Thanks.

Thanks for the reply.
Actually, I am following the “Build from Source” instructions to build torch 1.10.1 on my Jetson Nano. I have also increased my swap area to 4 GB, and I work in a virtualenv with Python 3.8 (to be used with YOLOv8).
But now the build fails while “Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/autograd/generated/VariableType_1.cpp.o”:
it shows “c++: internal compiler error: Killed (program cc1plus)” and does not progress any further.

Any thoughts on this would be really appreciated.
Should I try another version of PyTorch?
Could it be a memory problem, or something else?
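One way to check whether memory is the culprit is to look for OOM-killer messages in the kernel log and at the current RAM/swap headroom. A diagnostic sketch (read-only commands, exact log wording varies by kernel):

```shell
# Did the kernel's OOM killer terminate the compiler (cc1plus)?
dmesg | grep -i -E 'killed process|out of memory'

# How much RAM and swap is actually available during the build?
free -h
```

If `dmesg` shows cc1plus being killed, the "internal compiler error: Killed" is almost certainly an out-of-memory kill rather than a real compiler bug.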

Hi,

Yes, a “Killed” error during compilation usually indicates a memory issue: the kernel’s OOM killer terminated the compiler.
Could you increase the swap memory size and try again?

Thanks.
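Increasing swap can be sketched as below. This assumes a swap-file approach; the file path and 8 GB size are illustrative, not prescribed by the thread. Limiting build parallelism helps for the same reason.

```shell
# Create and enable an 8 GB swap file (path and size are examples)
sudo fallocate -l 8G /mnt/8GB.swap
sudo chmod 600 /mnt/8GB.swap
sudo mkswap /mnt/8GB.swap
sudo swapon /mnt/8GB.swap

# Keep it across reboots
echo '/mnt/8GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab

# Also cap compile parallelism so one heavy cc1plus at a time fits in memory
export MAX_JOBS=1
```

Note that swap on an SD card is slow, so the build will take longer, but it should at least complete instead of being killed.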

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.