How can I train a custom model on a server then use the model on a Jetson Nano

I am very new to the world of deep learning. I have recently bought a Jetson Nano to train custom models, and I have come to the conclusion that this takes way too long. I would like to use my computer to speed things up. What is the best way to go about doing this?

Hi @justin60, on your PC you can install the framework used to train your models (for example PyTorch, TensorFlow, etc.). For a PC with a GPU, I recommend Ubuntu and the framework’s corresponding NGC container.
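For example, getting the PyTorch NGC container running on an Ubuntu PC with an NVIDIA GPU might look something like this (the container tag below is just an example; check the NGC catalog for current tags, and make sure Docker and the NVIDIA Container Toolkit are installed first):

```shell
# Pull a PyTorch container from NGC (tag is an example; see the NGC
# catalog at catalog.ngc.nvidia.com for the latest releases)
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Run it with GPU access, mounting a local workspace so your datasets
# and training scripts are visible inside the container
docker run --gpus all -it --rm \
    -v $HOME/workspace:/workspace \
    nvcr.io/nvidia/pytorch:24.01-py3
```

The advantage of the container is that CUDA, cuDNN, and the framework versions are already matched, so you avoid a lot of the driver/library compatibility headaches.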

Hi thanks for the reply! Is there anywhere I can find a walk-through or any sort of structure to follow? Again, I am very new to this whole scene. Thanks

Hi @justin60, you can train a custom model on your PC, which has access to a GPU.
Please refer to GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson. This will help you gain an understanding of how to run your model on a Jetson Nano.

Hi, thanks for the reply! I got something kind of working for training on my computer. I’m going to try to transfer the trained model to the Jetson soon. I will post any questions or progress here.
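For the transfer step, one common approach is to copy the exported model over the network and run it through jetson-inference’s imagenet example. A rough sketch (the IP address, paths, and filenames are placeholders; the blob names must match what you used at export time, and the exact flags depend on your model type, so check the jetson-inference docs):

```shell
# Copy the exported model and class labels from the PC to the Jetson
# (IP address and paths are placeholders for your setup)
scp model.onnx labels.txt user@192.168.1.50:~/models/

# On the Jetson, run it through jetson-inference's imagenet example.
# The first run builds a TensorRT engine, which can take several minutes.
imagenet.py --model=~/models/model.onnx \
            --labels=~/models/labels.txt \
            --input_blob=input_0 --output_blob=output_0 \
            test_image.jpg
```

The built engine is cached next to the model file, so subsequent runs start much faster.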