PyTorch Torchvision models give NaN output

Hello,

The models provided in the Torchvision library of PyTorch give NaN output when performing inference with CUDA on the Jetson Nano (JetPack 4.2). Code to reproduce is below:

import torch
import torchvision
from torchvision.models import resnet18

net = resnet18(pretrained=True).eval().cuda()
input = torch.ones([1, 3, 48, 48]).cuda()
with torch.no_grad():
    output = net(input)
    print(output)

I’ve installed PyTorch and Torchvision using the instructions found here: https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/

Running the same code on the CPU gives a proper output.

Any help would be greatly appreciated!

Hi ristoojala, I believe the torchvision models were trained on an image size of 224x224, except for Inception-v3, which was trained on 299x299. Does the model work if you use:

input = torch.ones([1, 3, 224, 224]).cuda()

Also, the pretrained models expect the input to be normalized, with the ImageNet mean subtracted and divided by the standard deviation; see here: https://discuss.pytorch.org/t/whats-the-range-of-the-input-value-desired-to-use-pretrained-resnet152-and-vgg19/1683

If you can’t get a valid result, you might want to try loading an actual image with this same normalization applied. I’m able to use the resnet18 model OK.
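
For reference, here is a minimal sketch of that preprocessing, assuming the usual ImageNet mean/std values from the torchvision docs; the image path 'test.jpg' is just a placeholder:

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models import resnet18

# Standard ImageNet preprocessing: resize, center-crop to 224x224, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open('test.jpg').convert('RGB')   # placeholder image path
input = preprocess(img).unsqueeze(0).cuda()   # add batch dimension

net = resnet18(pretrained=True).eval().cuda()
with torch.no_grad():
    output = net(input)
    print(output.argmax(dim=1))               # predicted ImageNet class index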

224x224 input does the trick. Thanks a bunch, I really appreciate your work here on the forums.

Weird that I am able to use a 48x48 input on my desktop.