I recently purchased the Jetson Nano, and this is my first real use of a GPU system. I am trying to train a VGG16 image-recognition model and I'm getting an out-of-memory error on the board. From googling, it does appear that the Nano has very limited memory and (maybe?) is really meant for inference rather than training. Does that match how people here use it? Should I be training on my laptop (CPU) or an AWS instance, and just use the Nano for inference?
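For reference, here's my rough back-of-envelope for why training doesn't fit (assuming float32 training with Adam, and not even counting activation memory) — please correct me if I've got this wrong:

```python
# Rough estimate of why full VGG16 training overflows the Nano's 4 GiB
# of shared CPU/GPU memory. ASSUMES float32 training with Adam (weight +
# gradient + two optimizer moments per parameter); activations are extra.

# VGG16 parameter counts, layer by layer: 3x3 convs with biases, then FC layers.
conv_cfg = [(3, 64), (64, 64), (64, 128), (128, 128),
            (128, 256), (256, 256), (256, 256),
            (256, 512), (512, 512), (512, 512),
            (512, 512), (512, 512), (512, 512)]
params = sum(3 * 3 * cin + 1 for cin, _ in conv_cfg)  # placeholder, fixed below
params = sum((3 * 3 * cin + 1) * cout for cin, cout in conv_cfg)
fc_cfg = [(512 * 7 * 7, 4096), (4096, 4096), (4096, 1000)]
params += sum((fin + 1) * fout for fin, fout in fc_cfg)
print(f"parameters: {params:,}")  # 138,357,544 -- the standard VGG16 count

bytes_per_param = 4 * (1 + 1 + 2)  # float32: weight + grad + Adam m and v
train_state_gib = params * bytes_per_param / 2**30
print(f"training state alone: {train_state_gib:.2f} GiB of the Nano's 4 GiB")
```

So before a single activation is stored, the training state already eats roughly half of the 4 GiB, which the OS also shares.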
Alternatively, what are the best ways (links to tutorials welcome) to optimize code to make use of the limited on-board memory, maybe by doing frequent transfers from the SD card into the device's shared DRAM? Would the transfer rate be so slow that this generates more overhead than just using a CPU?
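To put rough numbers on the transfer question (the ~80 MB/s sequential-read figure is just my assumption for a decent UHS-I card — actual cards, and especially random-access patterns, can be much slower):

```python
# Rough cost of re-streaming VGG16 weights from the SD card.
# ASSUMPTION: ~80 MB/s sequential read (typical decent UHS-I card).

vgg16_params = 138_357_544        # standard VGG16 parameter count
weight_bytes = vgg16_params * 4   # float32 weights only
sd_read_bps = 80 * 10**6          # assumed sequential read throughput

print(f"weights on disk: {weight_bytes / 2**20:.0f} MiB")
print(f"one full reload from SD: {weight_bytes / sd_read_bps:.1f} s")
```

If each swap of the full weight set costs on the order of seconds, it seems like that overhead could easily dominate a training step — is that the right way to think about it?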