Nano PyTorch 1.2.0 wheel

Hi there,

I’m wondering if anyone has built PyTorch 1.2.0 for the Jetson Nano. Following here (https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/), I was able to install 1.1.0, but quickly ran into the issues described here (https://devtalk.nvidia.com/default/topic/1048889/jetson-nano/pytorch-support/2).

The exact error was as follows:

RuntimeError: cuda runtime error (7) : too many resources requested for launch at /media/nvidia/WD_BLUE_2.5_1TB/pytorch-v1.1.0/aten/src/THCUNN/generic/SpatialUpSamplingBilinear.cu:67
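For context, the kernel in that message backs bilinear upsampling, so a call along these lines is enough to hit it on the GPU (just a minimal sketch, not my actual model or input sizes):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 224, 224).cuda()  # any 4D tensor on the GPU
y = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)  # bilinear upsample, hits SpatialUpSamplingBilinear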

It seems that there’s a fix on GitHub (RuntimeError: cuda runtime error (7) : too many resources requested for launch at · Issue #8103 · pytorch/pytorch · GitHub) that requires modifying and recompiling torch from the 1.1.0 source. However, a comment on the second devtalk post above suggested that simply building torch from the GitHub master source without modifications fixes the issue, and I’m wondering whether this is patched in 1.2.0.

If no one has already built a wheel for 1.2.0, I’ll do so and check whether these issues are fixed in that release; maybe others would be able to use it.

Hi, here are links to wheels for PyTorch v1.1.0 with the patch applied:

Python 2.7: https://nvidia.box.com/s/n9p7u0tem1hqe0kyhjspzz78xpka7f5e

Python 3.6: https://nvidia.box.com/s/n9p7u0tem1hqe0kyhjspzz78xpka7f5e

I don’t think PyTorch v1.2.0 has this patch, because the patch changes the CUDA block dims at compile time (so applying it would change them for all GPUs). I will try building v1.2.0 tomorrow and applying the patch.

Thanks Dusty! I had just commented on the GitHub issue where you posted the original fix. The wheels posted on that thread did not work for me, but I’ll try this for 3.6 tomorrow.

I actually started building 1.2.0 on the Jetson Nano, which should (fingers crossed that no errors occur) be done tomorrow morning; I can let you know whether vanilla 1.2.0 fixes the issue, if that would be helpful.

Hi Gerard, those wheels from the GitHub issue are the same ones that I posted here. They work on my Nano though.

Sure, that would be great - let me know. I think 1.2.0 might fix just that particular function, so I will probably still apply my patch, which aims to address a wider group of functions (though not all of them, as noted in the GitHub issue) and was not included in 1.2.0.

OK, the new PyTorch v1.2.0 wheels built and are posted here. They have the resources patch from that issue applied.

I also found out what went wrong on external systems with the previous patched v1.1.0 wheel for Python 3.6, so I am re-building that too.

Sorry I didn’t get back to you about v1.2.0 - the build failed because I ran out of drive space. Note to other Nano users: get a micro-SD card larger than 16 GB :).

PyTorch v1.2.0 works for me, with the resource patches fixing the errors I had been hitting! Thank you so much for providing these wheels.

print(torch.__version__)
1.2.0a0+8554416

On Jetson Nano: after updating to PyTorch v1.2.0 and setting the device with:

device = torch.device("cuda:0")

I get the runtime error:

RuntimeError: cuda runtime error (59) : device-side assert triggered at /media/nvidia/WD_BLUE_2.5_1TB/pytorch/20190820/pytorch-v1.2.0/aten/src/THC/generic/THCTensorMath.cu:26

Additional question:
What is this path: /media/nvidia/WD_BLUE_2.5_1TB/pytorch/20190820 ?

Hmm, I am able to run that code without issue on my Nano, with the same PyTorch v1.2.0 wheel. Did you try fully uninstalling the previous PyTorch version before installing the new wheel? "pip uninstall torch". Also, are you on JetPack 4.2.1?
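A quick sanity check that CUDA ops work at all with the new wheel, independent of your model, would be something like this:

import torch

print(torch.__version__)
print(torch.cuda.is_available())        # should print True
print(torch.cuda.get_device_name(0))    # should name the Nano's GPU

device = torch.device("cuda:0")
a = torch.ones(4, 4, device=device)
print((a + a).sum().item())             # should print 32.0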

That is the path on the system where the PyTorch code was built (a different system from the Nano that I tested the above on). It has no effect at runtime, except that debugging messages include the full path of the original source files. If you want to look at the code referenced by the error, check the v1.2.0 branch of PyTorch on GitHub.
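One more note, since device-side asserts are reported asynchronously: the file/line in that message is where the failure was noticed, not necessarily where it originated. Re-running with CUDA_LAUNCH_BLOCKING=1 set usually gives a more accurate location, e.g. (a sketch; set it before torch initializes CUDA):

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # make kernel launches synchronous for debugging

import torch
# ... then run the code that triggers the assert to get a more precise error location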