kchen
November 11, 2020, 5:19am
I am walking through the Jetson Nano AI course, and was using the nvdli-nano container to run the CNN on the Jetson Nano. I went through the code in the Jupyter notebook and don't find a line that specifies that training should be performed on the GPU. Is that inferred somewhere, or set by default? If I have both a CPU and a GPU, how should I allocate the computational power of each to the task?
Hi,
Please note that Jetson is designed mainly for inference.
For training on Jetson, you can check whether this page meets your requirements:
# Transfer Learning with PyTorch
Transfer learning is a technique for re-training a DNN model on a new dataset, which takes less time than training a network from scratch. With transfer learning, the weights of a pre-trained model are fine-tuned to classify a customized dataset. In these examples, we'll be using the <a href="https://arxiv.org/abs/1512.03385">ResNet-18</a> and [SSD-Mobilenet](pytorch-ssd.md) networks, although you can experiment with other networks too.
<p align="center"><a href="https://arxiv.org/abs/1512.03385"><img src="https://github.com/dusty-nv/jetson-inference/raw/python/docs/images/pytorch-resnet-18.png" width="600"></a></p>
Although training is typically performed on a PC, server, or cloud instance with discrete GPU(s) due to the often large datasets used and the associated computational demands, by using transfer learning we're able to re-train various networks onboard Jetson to get started with training and deploying our own DNN models.
<a href="https://pytorch.org/">PyTorch</a> is the machine learning framework that we'll be using, and example datasets along with training scripts are provided below, in addition to a camera-based tool for collecting and labeling your own training datasets.
## Installing PyTorch
If you are [Running the Docker Container](aux-docker.md) or optionally chose to install PyTorch back when you [Built the Project](building-repo-2.md#installing-pytorch), it should already be installed on your Jetson to use. Otherwise, if you aren't using the container and want to proceed with transfer learning, you can install it now:
*(bash install commands truncated in the forum quote — see the original page)*
To check whether a framework is running on the GPU, you can use an API call like this:

```python
import torch
print(torch.cuda.is_available())
```
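When CUDA is available, `torch.cuda` also reports how many devices are visible and their names — a small sketch (assuming a PyTorch build with CUDA support, e.g. the Jetson wheel):

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.device_count())      # number of visible CUDA devices
    print(torch.cuda.get_device_name(0))  # e.g. the Nano's integrated Maxwell GPU
else:
    print("CUDA not available; training would run on the CPU")
```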
Thanks.
kchen:

> I am walking through the Jetson Nano AI course, and was using the nvdli-nano container to run the CNN on the Jetson Nano. I went through the code in the Jupyter notebook and don't find a line that specifies that training should be performed on the GPU.
In the nvdli-nano notebooks, if you look at where the model is initially created, there are these lines of code:

```python
device = torch.device('cuda')
# model is created...
model = model.to(device)
```

This tells PyTorch to run the model on the CUDA device, and hence both training and inference will be done on the GPU.
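On the CPU-vs-GPU part of the question: PyTorch doesn't split a single forward pass between the CPU and GPU; you choose one device per model and tensor. A common idiom falls back to the CPU when no CUDA device is present (the `Linear` model below is just a hypothetical stand-in for the notebook's CNN):

```python
import torch

# Pick the GPU when present, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(4, 2)     # hypothetical stand-in for the notebook's CNN
model = model.to(device)          # moves all parameters to the chosen device

x = torch.randn(1, 4).to(device)  # inputs must live on the same device as the model
y = model(x)
print(y.device)
```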