Can we port OpenFace to the TK1?

Hi,

We are now porting OpenFace to the TK1 and have run into a lot of problems.

Is it feasible to do this?

And is there any help available for us with it?

Thank you very much.

Our error log is below.
Please help us. Thank you very much.

root@tegra-ubuntu:~/openface/openface# ./demos/compare.py images/examples/{lennon*,clapton*}
/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:370: table index is nil
stack traceback:
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:370: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/nn/Module.lua:158: in function 'read'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
...lib/python2.7/dist-packages/openface/openface_server.lua:46: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x0000cff9
Traceback (most recent call last):
File "./demos/compare.py", line 101, in <module>
d = getRep(img1) - getRep(img2)
File "./demos/compare.py", line 92, in getRep
rep = net.forward(alignedFace)
File "/usr/local/lib/python2.7/dist-packages/openface/torch_neural_net.py", line 156, in forward
rep = self.forwardPath(t)
File "/usr/local/lib/python2.7/dist-packages/openface/torch_neural_net.py", line 113, in forwardPath
""".format(self.cmd, self.p.stdout.read()))
Exception:

OpenFace: openface_server.lua subprocess has died.

Diagnostic information:

cmd: ['/usr/bin/env', 'th', '/usr/local/lib/python2.7/dist-packages/openface/openface_server.lua', '-model', '/home/ubuntu/openface/openface/demos/…/models/openface/nn4.small2.v1.t7', '-imgDim', '96']

============

stdout:
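
For anyone debugging the same failure: the stack trace shows the crash happens inside torch.load() in the openface_server.lua subprocess while it deserializes the model. A minimal way to confirm that, independent of the OpenFace Python wrapper, is to load the model directly with th. This is only a sketch; the model path assumes the default OpenFace layout under the repository root, so adjust it to your install:

cat > /tmp/loadtest.lua <<'EOF'
-- Minimal check: deserialize the OpenFace model the same way openface_server.lua does.
require 'nn'
local net = torch.load(arg[1])
print(net)
EOF
th /tmp/loadtest.lua ~/openface/openface/models/openface/nn4.small2.v1.t7

If this fails with the same "table index is nil" error, the problem is in Torch's deserialization on this platform rather than in the OpenFace demo scripts.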

Do you have a URL describing requirements for openface and torch?

The note about 64-bit makes me think this is for x86_64 only... the JTK1 is ARMv7 32-bit. If that is the case, it is often possible to recompile and have the recompiled binary work (although the number of hoops one must jump through to do so varies to the extreme among different software packages). On the other hand, if it requires something like a 64-bit Python interpreter, then you're probably out of luck.
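
A quick way to confirm what you are running on (a sketch; the luajit path is taken from the log above and may differ on your setup):

uname -m                                    # armv7l on a Jetson TK1, x86_64 on a 64-bit PC
getconf LONG_BIT                            # prints 32 on the TK1
file /home/ubuntu/torch/install/bin/luajit  # shows whether luajit was built as a 32-bit ARM executable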

Thank you for your reply.

But here, someone has done it successfully.

It's interesting that if the R24.1 kernel is cross-compiled with anything newer than the gcc 4.8 version in the "baggage" directory of the documentation, this also results in an illegal instruction error (this is true for the base kernel image, though not necessarily for modules loaded into the kernel). I've tested the Linaro 4.9, 5.2, and 5.3 versions, all of which otherwise seem to produce good kernels and code in other cases (no illegal instruction upon kernel load). I'm wondering if the software giving the illegal instruction would work if built with that 4.8 compiler? If, for example, the code in question were Python, you might need parts of Python compiled with the 4.8 compiler... but I know very little about the software you're using, so I don't have a solid answer.
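
For reference, pointing a kernel build at the 4.8 toolchain is mostly a matter of setting CROSS_COMPILE. This is a rough sketch only; the toolchain install path is an example, and the defconfig name depends on your L4T release (e.g. tegra12_defconfig for the TK1's 3.10 kernel):

export ARCH=arm
export CROSS_COMPILE=/opt/gcc-4.8-arm/bin/arm-linux-gnueabihf-   # example path to the 4.8 cross toolchain
make tegra12_defconfig                                           # pick the defconfig for your L4T release
make -j4 zImage modules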

I'm very happy to get your reply.
I will continue to try; thank you for your answer.

We only compiled Python; we did not compile the kernel.

What I was getting at is that the compiler most people would use is not the 4.8 compiler from the driver documentation. The other compilers have produced illegal instruction errors when trying to run a base kernel built with them, so I'm thinking the 4.8 compiler, which gets around that issue for the kernel build, might also get around it when building interpreters or other software. E.g., the tools that produce the Python byte code were probably themselves compiled from C or C++, and the 4.8 compiler could cause them to change their byte code output... hopefully to something that works.
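
If you want to experiment with that idea, here is a rough sketch of rebuilding the Python interpreter natively on the TK1 with gcc 4.8 (the package names, CPython version, and install prefix are examples, not taken from the OpenFace or L4T docs):

sudo apt-get install gcc-4.8 g++-4.8
cd Python-2.7.12                           # an unpacked CPython source tree (example version)
./configure CC=gcc-4.8 CXX=g++-4.8 --prefix=$HOME/python27-gcc48
make -j4
make install                               # installs under $HOME, so no sudo needed

The OpenFace demos would then need to be run with $HOME/python27-gcc48/bin/python2.7 instead of the system interpreter.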

Did you ever get this issue resolved? I just tried installing OpenFace and am running into the same issue.

Hi,

It's recommended to use the TX1/TX2 for deep learning use cases.
Many deep learning libraries require a 64-bit OS, including the Torch used here,
but the TK1 is still on 32-bit Ubuntu.

Thanks and sorry for the inconvenience.