Jetson Nano - using an old-version PyTorch model

Hello,

I am currently trying to use a PyTorch model created with PyTorch 1.0. I am writing C++ code that loads the model through TorchScript like this: module = torch::jit::load(net_fn);
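For context, the loading part follows the usual TorchScript C++ pattern; below is only a minimal sketch (not my exact code), where net_fn holds the path to the .pt file and the CUDA move matches the cuda:0 device the model was exported for:

#include <torch/script.h>
#include <iostream>
#include <string>

// Minimal loading sketch; net_fn is the path to the serialized .pt model.
torch::jit::script::Module load_model(const std::string& net_fn) {
    torch::jit::script::Module module;
    try {
        module = torch::jit::load(net_fn);
        module.to(torch::kCUDA);  // the scripted code hard-codes cuda:0
        module.eval();
    } catch (const c10::Error& e) {
        std::cerr << "Error loading the model: " << e.what() << std::endl;
        throw;
    }
    return module;
}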

First I tried JetPack 4.3 with PyTorch 1.1, and the model loaded and ran successfully.

Now I am trying to use JetPack 4.5 with PyTorch 1.8, and I followed this sequence:

  • I loaded my model with PyTorch 1.2 and saved it again, then repeated the same step with PyTorch 1.3 and 1.5, until I had the model saved under PyTorch 1.8:
    gcn2_640x480_conv18.pt (13.9 MB)

  • Then I tried to use this model with PyTorch 1.8 on the Jetson Nano, but I got this error:

terminate called after throwing an instance of 'std::runtime_error'
what(): The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch.py", line 65, in forward
_14 = torch.to(_11, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False, memory_format=None)
_15 = torch.to(_13, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False, memory_format=None)
_16 = torch.unsqueeze(torch.index(det, [_14, _15]), 1)
~~~~~~~~~~~ <--- HERE
pts = torch.cat([_6, _9, _16], 1)
_17 = torch.slice(pts, 0, 0, 9223372036854775807, 1)

Traceback of TorchScript, original code (most recent call last):
File "code/gcn2_640x480_conv11.py", line 42, in forward
_12 = torch.to(_9, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False)
_13 = torch.to(_11, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False, non_blocking=False, copy=False)
_14 = torch.unsqueeze(torch.index(det, [_12, _13]), 1)
~~~~~~~~~~~ <--- HERE
pts = torch.cat([_4, _7, _14], 1)
_15 = torch.slice(pts, 0, 0, 9223372036854775807, 1)
RuntimeError: Tried to cast a List to a List<Tensor?>. Types mismatch.

./run.sh: line 24: 9346 Aborted (core dumped) ./Examples/Monocular/mono_webcam
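The failure happens when forward is called on the loaded module, roughly like the sketch below (the 1x3x480x640 input here is only an assumed placeholder; the real data comes from the webcam frames). Without a try/catch around the call, the std::runtime_error above is left uncaught, which is why the process aborts with a core dump:

#include <torch/script.h>
#include <iostream>
#include <vector>

// Sketch of the inference step; the input shape is an assumed placeholder.
void run_inference(torch::jit::script::Module& module) {
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::ones({1, 3, 480, 640}).to(torch::kCUDA));
    try {
        auto output = module.forward(inputs);  // with PyTorch 1.8 the error is thrown here
        // ... use the output ...
    } catch (const std::exception& e) {
        // Catching here prints the TorchScript traceback instead of aborting.
        std::cerr << e.what() << std::endl;
    }
}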

Could you please give me some pointers to solve this problem?

Regards.

Hi,

Do you mean that you can serialize and de-serialize the model with v1.2, v1.3, and v1.5, but only get the error with v1.8?

If yes, have you tried to load the file serialized with v1.5?
Thanks.

Hello,

I only tried to de-serialize and use the model with PyTorch 1.1 and 1.8.

For PyTorch 1.2, 1.3, and 1.5, I only loaded and saved the model in order to get a more recent .pt file format, because I thought the format was the problem.
To be clear, I loaded and saved the model with this script (where gcn2_640x480_conv15.pt is the model saved with PyTorch 1.5 and gcn2_640x480_conv18.pt is the resulting model saved with PyTorch 1.8):

import torch

# Load the model serialized with the previous PyTorch version
orig_model = torch.jit.load("gcn2_640x480_conv15.pt")
# Re-save it with the PyTorch version currently installed
orig_model.save("gcn2_640x480_conv18.pt")

For PyTorch 1.8 I tried to de-serialize and use the model, and I got the error I described. I actually need to work with PyTorch 1.8 because the camera drivers I have are not compatible with older JetPack versions.

Regards.

Hi,

We tried to reproduce this, but serialization works well in our environment.
Below are our testing details for your reference:

  • JetPack 4.5.1
  • PyTorch 1.8.0 from here
$ python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> orig_model = torch.jit.load("gcn2_640x480_conv18.pt")
>>>

Thanks.