Torch slowness on Jetson Nano

Hello everyone, I’ve been trying for months to get a repository I’m interested in running on the Jetson Nano.

At first it seemed like a problem with torch not detecting CUDA, but I managed to install a version that fixes that.
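For reference, this is the standard PyTorch sanity check I run to confirm the installed wheel actually sees the GPU (generic API, nothing specific to this repository):

```python
import torch

# Report the installed version and whether the CUDA runtime is visible.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    # On a Jetson Nano this should report the integrated Maxwell GPU.
    print("device:", torch.cuda.get_device_name(0))
```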

However, after many tests, performance is quite poor: about 1 frame every 7 seconds.
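For what it’s worth, here is the generic timing sketch I use to get that frames-per-second number (plain Python; for GPU inference you would call `torch.cuda.synchronize()` before reading the clock, since CUDA kernel launches are asynchronous):

```python
import time

def measure_fps(run_frame, n_frames=10):
    """Average wall-clock FPS over n_frames calls of run_frame().

    For CUDA inference, synchronize inside run_frame (e.g. call
    torch.cuda.synchronize() after the forward pass) so that queued
    kernels are not counted as finished before they complete.
    """
    start = time.perf_counter()
    for _ in range(n_frames):
        run_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Example with a dummy 10 ms "frame":
fps = measure_fps(lambda: time.sleep(0.01), n_frames=5)
print(f"{fps:.1f} FPS")  # roughly 100 FPS for a 10 ms frame
```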

The repository in question is:

It is a semantic segmentation system for autonomous driving in maritime environments. It takes a sequence of 5 images and compares them to detect reflections on the water and avoid false positives.

The package requirements are:


But to make it work with CUDA, these are the versions I managed to install on the Jetson Nano:

  • torch @ file:///home/jettson/install-torch/torch-1.8.0-cp36-cp36m-linux_aarch64.whl

The albumentations package depends on scikit-learn==0.19.1, and it has caused me quite a few problems because of OpenCV version conflicts.

I have tried to optimize the model from Torch to TensorRT, but from what I can see I can’t do it on the Jetson Nano, and I couldn’t find a step-by-step tutorial for this either.

The model weights are over 400 MB, so that may be part of the problem.
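As a rough sanity check on that size: the Nano shares its ~4 GB of memory between CPU and GPU, and FP32 weights take 4 bytes per parameter, so a 400 MB checkpoint implies on the order of 100 M parameters. Back-of-the-envelope arithmetic (plain Python, no framework needed):

```python
# FP32 weights: 4 bytes per parameter.
checkpoint_mb = 400
params_millions = checkpoint_mb * 1024 * 1024 / 4 / 1e6
print(f"~{params_millions:.0f}M parameters")  # ~105M parameters

# Halving precision (FP16) halves the weight memory:
fp16_mb = checkpoint_mb / 2
print(f"FP16 weights: ~{fp16_mb:.0f} MB")  # ~200 MB
```

That is large for a 4 GB board once activations and the framework itself are loaded, which is one reason FP16 inference is worth trying.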

I have created a Google Colab notebook that runs the model step by step in case you want to see it. It shows how to download the model weights, where to place them, and the exact command to run inference. The versions shown in Colab are ones I cannot reproduce with CUDA support on the Jetson Nano.

I’ve only been working with the Jetson Nano for about 3 months and I’m not familiar with many of the concepts and alternatives. What do you recommend I do?

Regards, Irvin


Do you have a performance report for the model on a desktop GPU?
Since the Nano has relatively limited resources, it is expected to be slower than a dGPU system.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.