The Nano doesn’t have much RAM, and its GPU is very weak compared to desktop GPUs and higher-end devices like the Xavier, so you will probably not succeed in training on the Nano.
You should do your training on Windows, if you have “exe” builds of those tools.
Once you have a trained network, you may be able to load that model and run inference on the Nano – the only way to find out whether that will work is to try it!
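If you want a quick way to check, something like the following might serve as a smoke test (just a sketch, assuming OpenCV 4.x with the DNN module is installed; the yolov3.cfg/yolov3.weights filenames are placeholders for your own trained files):

```python
# Minimal smoke test: load a trained Darknet model with OpenCV's DNN
# module and run one forward pass to see whether inference fits in the
# Nano's memory. The .cfg/.weights paths are placeholders for your files.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

# A dummy 416x416 input is enough to exercise the full network once.
blob = cv2.dnn.blobFromImage(
    np.zeros((416, 416, 3), dtype=np.uint8),
    scalefactor=1 / 255.0, size=(416, 416), swapRB=True)
net.setInput(blob)

outs = net.forward(net.getUnconnectedOutLayersNames())
print("Inference ran; output shapes:", [o.shape for o in outs])
```

If that single forward pass completes without an out-of-memory error, full inference on real images should at least load.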
Thanks, and I mean that. Someone told me he had run the original YOLO v3 inference model on the Jetson Nano at 5 fps, so I tried it, but with no success: it said “ran out of memory”, as in the following picture. https://github.com/vxgu86/configSet/blob/master/imgs/nano-memory.jpg.
So now I am trying YOLO with TensorRT. My question is: if I run the original Darknet model without the TensorRT conversion mentioned above, would it be accelerated at all?
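The conversion step I’m attempting looks roughly like this (a sketch only, assuming an ONNX export of the model named yolov3.onnx; the calls follow the TensorRT 7-style Python API and may differ on the TensorRT release bundled with JetPack 4.2):

```python
# Rough sketch: build a TensorRT engine from an ONNX export of the YOLO
# model. "yolov3.onnx" is a placeholder name; the API shown is
# TensorRT 7-style and may differ on older JetPack releases.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("yolov3.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28    # 256 MB workspace; keep small on the Nano
config.set_flag(trt.BuilderFlag.FP16)  # FP16 halves memory and suits the Nano

engine = builder.build_engine(network, config)
with open("yolov3.engine", "wb") as f:
    f.write(engine.serialize())
```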
Thanks guys.
Honestly speaking, I ran into another problem with the Jetson Nano and JetPack 4.2.
It’s here; could you guys help solve it?