I want to retrain a detection model on my laptop and then transfer that model, the labels, etc. to the Jetson Nano. I'm following the instructions at https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md. I can successfully follow these instructions on my Nano, but it takes a really long time; it's been 4 hours and I'm only on epoch 2 of 30. Can I do the training on my Windows laptop in some way, possibly using WSL2?
Thank you for your time!
Hi @crose72, I have definitely run train.py/train_ssd.py on an Ubuntu PC (one with an NVIDIA GPU in it). Typically I just start the NGC PyTorch container (for x86), mount the PyTorch scripts into it, and run them inside the container. pytorch-ssd is a submodule of jetson-inference, and you can find the standalone repo here:
In theory, yes; however, I'm unsure of all the steps needed. In WSL2 you would need to install Ubuntu, the NVIDIA drivers, the NVIDIA Container Runtime, etc., and then run the NGC PyTorch container that way.
If your PC doesn't have a GPU, I believe train_ssd.py assumes one is available, so CPU-only mode may need additional changes and would be slow anyway.
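(For what it's worth, the usual PyTorch pattern for falling back to CPU looks like the sketch below. This is not a patch to train_ssd.py itself; the tiny linear model is just a stand-in, and similar edits would be needed wherever the script assumes CUDA.)

```python
import torch

# Pick CUDA when available, otherwise fall back to CPU.
# train_ssd.py assumes a GPU in places, so edits along these
# lines would be needed wherever it moves tensors to CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)      # stand-in for the SSD model
inputs = torch.randn(1, 4, device=device)     # stand-in for a training batch
outputs = model(inputs)
print(outputs.shape)                          # torch.Size([1, 2])
```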
My WSL2 install uses Ubuntu, but how would I get the NVIDIA drivers, the NVIDIA Container Runtime, and so on? Also, I'm actually using the jetson-inference library without Docker containers, only because we ran into difficulty building those containers on WSL2 (due to an NVIDIA runtime incompatibility).
@crose72 here are my notes on installing CUDA/Docker/etc. in WSL2 Ubuntu (it has probably been a year since I last did this):
(reboot if needed)
(wsl --shutdown if needed)
$ wsl   # or `ubuntu`, to start Ubuntu
$ sudo apt-get install docker.io
$ distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit

(open a new WSL2 terminal)

# verify that containers can see the GPU
$ sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
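Once that verification works, the NGC PyTorch workflow mentioned earlier might look roughly like this. Note the paths, the image tag, and the dataset flags below are assumptions for illustration; pick a current tag from NGC and check the pytorch-ssd README for the actual train_ssd.py arguments:

```shell
# Hypothetical paths -- adjust to where you cloned pytorch-ssd
# and where your dataset lives.
SSD_DIR=$HOME/pytorch-ssd        # assumption: local clone of the standalone repo
DATA_DIR=$HOME/datasets/mydata   # assumption: your exported dataset

# Start the NGC PyTorch container (x86) with the scripts and data
# mounted in, and run training inside it.
sudo docker run --rm --gpus all \
    -v $SSD_DIR:/workspace/pytorch-ssd \
    -v $DATA_DIR:/workspace/data \
    nvcr.io/nvidia/pytorch:22.12-py3 \
    python3 /workspace/pytorch-ssd/train_ssd.py \
        --dataset-type=voc --data=/workspace/data \
        --model-dir=/workspace/data/models \
        --batch-size=4 --epochs=30
```

The resulting checkpoints under the mounted model directory land on the host, so they can then be exported to ONNX and copied over to the Nano.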
@dusty_nv thank you for the notes! I'm going to see if I can get these to work.