I’m currently trying to load a network model with Torch on an NVIDIA TX1. When I try to load the model
net = torch.load('modelfile.t7','ascii')
I get the following error:
The model loads fine on my Ubuntu 14.04 desktop, so I tried loading it there, converting it to binary, and then loading the converted file on the TX1:
net = torch.load('modelfile.bin')
But I still get a similar error:
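For reference, here is a sketch of the conversion I ran on the desktop. The `net:float()` cast is a guess on my part (it converts any CUDA tensor storage to plain float tensors, which I thought might help portability); the `cudnn` require may or may not be needed depending on which modules the model references:

```lua
-- Run on the desktop, where the ascii model loads fine.
require 'torch'
require 'nn'
-- require 'cudnn'  -- uncomment if deserialization complains about cudnn modules

-- Load the platform-independent ascii serialization.
local net = torch.load('modelfile.t7', 'ascii')

-- Guess: cast to float to strip any CUDA-specific tensor types
-- before re-serializing (the model may already be float).
net = net:float()

-- Re-save in the (faster) binary serialization format.
torch.save('modelfile.bin', net, 'binary')
```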
I’ve noticed that a few people have hit the same errors in the past, but most of them seem to have gotten past it by using an ‘ascii’ version of the model, since that format is supposed to be platform independent. I’ve had no luck with that. The other people who faced this problem were on 32-bit systems, but my NVIDIA TX1 is running Ubuntu 16.04 (64-bit).
For anyone willing or interested in recreating these results:
I installed JetPack (JetPack-L4T-2.3.1-linux-x64.run) and verified that my installations of CUDA 8.0 and OpenCV are functional.
For Torch, I used dusty-nv’s installation script
https://github.com/dusty-nv/jetson-reinforcement
The installation script in particular is https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh
It all looks pretty straightforward.
The specific model file is https://s3.amazonaws.com/mc-cnn/net_kitti_fast_-a_train_all.t7
Any tips on how to fix this problem are greatly appreciated. If anyone has ideas on how I can tweak the model on my desktop machine to make it work here, I’d love to hear them.
Thanks in advance,