TX2 GPU obsolete for PyTorch?

I’m trying to run inference with a pretrained ResNet model on a TX2 with PyTorch 1.7, but during the forward pass it returns this error: RuntimeError: no kernel image is available for execution on the device at src/convolution.cu:283. I also tried changing the PyTorch version to 1.6, but the same issue recurs. I’ve installed the correct version of CUDA, and I can check the version via nvcc -V. Upon some research I found this issue [Cuda error: no kernel image is available for execution on the device · Issue #31285 · pytorch/pytorch · GitHub], which explains that the GPU is not supported anymore in PyTorch. Then why do I find compiled wheel files that were released recently for the TX2? Is my board’s GPU obsolete?

It is unlikely the GPU is too out of date, but it is likely the software has to be recompiled for that CUDA architecture.

In the same way that a PC’s x86_64/amd64 CPU and a Jetson’s ARM CPU cannot execute each other’s code, code compiled for one GPU architecture must also be recompiled for the correct architecture of another GPU.

The Pascal-series GPU (and the TX2 is Pascal) is the 6.2/sm_62 architecture. Somewhere you have some compiled code (probably Python is calling it…sorry, I’m not a Python guy), and it needs nvcc to rebuild it with the right architecture. Note that multiple architectures can be specified (this makes the binary larger, but it is rather powerful for portability). An nvcc option example is “-gencode arch=compute_62,code=sm_62”.

If you run “nvcc --help | less” and then search for “gencode”, followed by “arch”, you’ll see that it takes an entire list of comma-delimited options (this is how you could specify multiple architectures; if it is going on a TX2 only, then I’d just set it for the TX2…if it is some sort of distributed setup, then you might use multiple architectures).
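To make the flag format concrete, here is a small sketch (a hypothetical helper, not part of any toolchain) that turns compute-capability strings into the corresponding nvcc “-gencode” flags; “6.2” is the TX2’s Pascal architecture, and the extra entries just illustrate a multi-architecture build:

```python
# Hypothetical helper: map CUDA compute capabilities (e.g. "6.2" for
# the TX2) to nvcc -gencode flags. Each extra architecture makes the
# resulting binary larger but more portable across GPUs.
def gencode_flags(capabilities):
    flags = []
    for cap in capabilities:
        sm = cap.replace(".", "")  # "6.2" -> "62"
        flags.append(f"-gencode arch=compute_{sm},code=sm_{sm}")
    return flags

# TX2-only build:
print(gencode_flags(["6.2"]))
# Multi-architecture build (e.g. TX1 + TX2 + Xavier):
print(gencode_flags(["5.3", "6.2", "7.2"]))
```

Passing the resulting flags to nvcc is what embeds a kernel image for each listed architecture into the binary.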

Hi @Sathya, which PyTorch wheel did you install?

These ones are built for TX1/TX2/Nano/Xavier: https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-7-0-now-available

Yes, I used the same wheel files as you quoted.

I just realized that the model I was trying to train or run inference on uses an unconventional data structure: https://github.com/NVIDIA/MinkowskiEngine. On the other hand, the board was able to train and infer conventional models. Any guidance on how to run MinkowskiEngine on the TX2?

Sorry, I’m not the right person for training/inference questions. :(


“No kernel image is available” indicates that you are not building the library with the correct GPU architecture.
The TX2 GPU architecture is 62 (sm_62); please add it to the Makefile before compiling.
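As a rough sketch (MinkowskiEngine’s actual build options may differ, so treat the exact steps as assumptions): PyTorch-extension builds generally honor the TORCH_CUDA_ARCH_LIST environment variable, while a plain Makefile would instead need the -gencode pair added to its NVCC flags.

```shell
# Sketch: target the TX2's 6.2 architecture when building from source.
# TORCH_CUDA_ARCH_LIST is read by PyTorch's extension build machinery;
# whether this particular project's Makefile uses it is an assumption.
export TORCH_CUDA_ARCH_LIST="6.2"
python setup.py install

# Equivalent flag for a hand-maintained Makefile's nvcc invocation:
#   -gencode arch=compute_62,code=sm_62
```

After rebuilding, the “no kernel image is available” error should go away if the architecture was the only issue.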