PyTorch to ONNX

I used this guide (jetson-inference/pytorch-collect.md at master · dusty-nv/jetson-inference · GitHub) to convert my PyTorch model to googlenet.onnx, but it wasn't successful. I installed
torch (1.1.0), torchvision (0.3.0), and TensorRT (5.0.6.3). When I used imagenet.py to load the model, it produced the error shown in the picture below.

How can I solve this problem?

I see some warning messages about GPU and Onnx versions. AI would start with resolving these warnings and see if the error still occurs.

Hi,

The default branch is for our latest JetPack, which uses TensorRT 7.1.3.
For TensorRT 5.0, please check out the L4T-R28.2 branch.
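Assuming a fresh clone, switching to that branch looks roughly like this (these are environment-setup commands, so the exact state of your checkout may differ):

```shell
# Clone the repo and switch to the branch targeting TensorRT 5.x / L4T R28.2
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git checkout L4T-R28.2
git submodule update --init
```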

You can also upgrade your Nano to our latest JetPack 4.4.1.
Here are the new features of JetPack 4.4.1 for your reference:

Thanks.

I recommend using the resnet18 or resnet50 networks; those should work and will be more accurate than googlenet. ResNet is what I test with.
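With the tutorial's scripts, selecting resnet18 looks roughly like this (the paths are placeholders and the exact flags may differ between branches, so treat this as a sketch):

```shell
# Retrain with resnet18 instead of googlenet, then export to ONNX
python3 train.py --arch=resnet18 --model-dir=models/mymodel data/mymodel
python3 onnx_export.py --model-dir=models/mymodel
```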

As @AastaLLL suggested, you could also upgrade to the latest JetPack and try using the jetson-inference container, which comes with PyTorch/torchvision pre-installed:
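From an existing jetson-inference checkout, launching the container is roughly (environment-setup commands; requires the upgraded JetPack):

```shell
# docker/run.sh picks a container tag matching the installed JetPack/L4T,
# with PyTorch and torchvision already inside the container
cd jetson-inference
docker/run.sh
```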