Hello, I want to write deep learning code that runs on the GPU with TensorFlow and PyTorch on my first TX2. I would like to know whether it is possible to port that code to a brand new TX2 that does not yet have TensorFlow and PyTorch installed. Building the deep learning environment again on the brand new TX2 is complicated, which is why I am asking for help.
Should I compile TensorFlow and PyTorch from source? Is it possible to do this with Docker instead? Or is there another suitable way?
Thanks a lot.
Another question: I bought two TX2s.
After I have finished my code on the first TX2, how can I copy the whole environment of the first TX1, along with the code, to the second TX2 instead of configuring the TensorFlow and PyTorch environment on the second TX2 all over again?
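For reference, by "the environment" I just mean that both frameworks can see the GPU. A rough check like the one below (assuming a TensorFlow 2.x build and PyTorch are already installed; it is only a sketch) is what I would want to pass on the second TX2 without rebuilding everything:

```python
# Quick sanity check for the deep learning environment on a TX2.
# Assumes TensorFlow 2.x and PyTorch are already installed; this only
# verifies that both frameworks can see the GPU.
import tensorflow as tf
import torch

# TensorFlow: list the GPUs the runtime can use.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))

# PyTorch: confirm CUDA is available and show the device name.
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))
```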
You are probably thinking about cloning, but this won't work for your case. In a clone the root partition is copied from one system to another. That clone must be for the same L4T release… a TX1 and TX2 rootfs are incompatible, and even a TX2 clone from an earlier release is very likely incompatible with the next release. You can clone to save a reference copy which is loopback mountable, and then selectively copy parts from it… but if, for example, the two systems have different CUDA versions or different GPU architectures, then the software has to be rebuilt for the other GPU/CUDA version anyway.
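As a rough illustration of why this matters, a quick check like the following (assuming PyTorch is installed on the board where it runs; the prints are just an example) shows the CUDA version a wheel was built against and the GPU architecture of the board. If these differ between the two systems, binaries copied from one to the other generally have to be rebuilt:

```python
# Rough illustration of what to compare between two boards before
# copying CUDA-linked software. Assumes PyTorch is installed on the
# board where this runs.
import torch

print("PyTorch version :", torch.__version__)
print("Built for CUDA  :", torch.version.cuda)
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    # A TX2 reports compute capability 6.2 (sm_62).
    print("GPU architecture: sm_%d%d" % (major, minor))
```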
Oh yes, cloning is what I meant. I accidentally wrote the wrong word above: it is not "TX1" but "TX2". I want to clone the program as well as the environment from one TX2 to another TX2.
Maybe I have found the answer in the following link.