How to port a pytorch model to jetson nano ?

I have a faster-rcnn.pytorch model. The repository for this project is: https://github.com/jwyang/faster-rcnn.pytorch. I want to port this model to the Jetson Nano. Is there a tutorial for reference?

The Jetson Nano I bought came with Python 3.6 and JetPack 4.2 already installed, so I didn't build any wheels from the "Build Instructions"; I only created the swap partition by following them.
I downloaded "torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl" offline and installed PyTorch 1.1 with "pip3 install torch-1.1.0a0+b457266-cp36-cp36m-linux_aarch64.whl". I then installed torchvision with the following commands:
$ git clone https://github.com/pytorch/vision

$ cd vision
$ sudo python setup.py install

But when I try to install SciPy with "pip3 install scipy", it always fails.

Any help will be greatly appreciated.

Hi,

The simplest way is to install PyTorch on the Jetson Nano and run the model directly:
https://devtalk.nvidia.com/default/topic/1049071/pytorch-for-jetson-nano/

Another approach is to convert the model to TensorRT, which will increase inference speed and reduce memory usage.
Here is a tutorial for doing so: https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#network_api_pytorch_mnist

Thanks.

1 Like

The provided sample: https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#network_api_pytorch_mnist

does not use the GPU for training! (My platform is a Jetson Nano.)

Could you please provide a fix or solution for this, as training on the CPU is really time-consuming!

Here is a tutorial that uses PyTorch GPU for training: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md

If you prefer to modify the existing example above to use GPU for training, you need to call .cuda() on the PyTorch model and tensors.
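A minimal sketch of one training step on the GPU (with a made-up model, not the sample's exact code; `.to(device)` is equivalent to `.cuda()` when CUDA is available, and falls back to CPU otherwise):

```python
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for the MNIST network.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# On the Jetson Nano this selects the integrated GPU when CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Inputs and targets must live on the same device as the model.
x = torch.randn(4, 784).to(device)
target = torch.randint(0, 10, (4,)).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step; all computation now runs on the GPU.
optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()
optimizer.step()
```

The key point is that the model and every tensor it touches must be on the same device, otherwise PyTorch raises a device-mismatch error.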

Thanks! It was resolved.

Training time dropped drastically and GPU utilization began.