Training jetson-inference MobileNet on an ARM AWS instance?

Hello - has anyone had success configuring an ARM-based AWS instance with an NVIDIA GPU to train MobileNet using the jetson-inference training tools?

Hi @liellplane, I haven’t used AWS, but I have run train.py and train_ssd.py from jetson-inference on an x86 Linux system with a GPU. You can run the training on your PC/server, export the model to ONNX, and then copy the exported model over to your Jetson and run it there.
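For reference, the end-to-end flow looks roughly like this (a sketch of the detection workflow, with flags taken from the Hello AI World tutorial; the dataset path, model directory, Jetson hostname, and video source are all placeholders):

```
# on the x86 training machine, inside the pytorch-ssd directory:
python3 train_ssd.py --dataset-type=voc --data=data/my_dataset \
    --model-dir=models/my_model --batch-size=4 --epochs=30

# export the best checkpoint to ONNX
python3 onnx_export.py --model-dir=models/my_model

# copy the exported model and labels over to the Jetson
scp -r models/my_model user@jetson:/path/to/jetson-inference/python/training/detection/ssd/models/

# on the Jetson, run it with detectnet (TensorRT builds an engine on first run)
detectnet --model=models/my_model/ssd-mobilenet.onnx \
    --labels=models/my_model/labels.txt \
    --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
    /dev/video0
```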

You can clone these repos directly to your system instead of the entire jetson-inference project (they are submodules of it):

- https://github.com/dusty-nv/pytorch-classification (train.py)
- https://github.com/dusty-nv/pytorch-ssd (train_ssd.py)
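Setting one of them up on the training machine looks something like this (a sketch; the pip step assumes the requirements.txt mentioned in the next paragraph):

```
# clone just the training submodule, not all of jetson-inference
git clone https://github.com/dusty-nv/pytorch-ssd
cd pytorch-ssd

# install the training prerequisites
pip3 install -r requirements.txt
```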

On your PC/server instance, you’ll want CUDA/cuDNN, PyTorch, and torchvision installed (along with some other prerequisites from the requirements.txt). I find it most convenient to just run the NGC PyTorch container on my laptop, which already has all of that installed.
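Launching that container looks something like this (the image tag is just an example, so pick a current one from the NGC catalog; this also assumes Docker with the NVIDIA Container Toolkit installed so `--gpus all` works):

```
# run the NGC PyTorch container with GPU access,
# mounting the current directory (e.g. your cloned repo) into the container
docker run --gpus all -it --rm \
    -v $(pwd):/workspace -w /workspace \
    nvcr.io/nvidia/pytorch:23.10-py3
```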

@dusty_nv great answer, much appreciated - wish I had asked sooner!
