I bought a Jetson TX1 several months ago during the 50%-off promotion, and my expectations were the following:
- I wanted to use it as a powerful GPU machine for neural network training and for GPU-enabled general ML libraries (such as H2O GPU); the main goal was the fastest possible local ML training without the cloud
- I didn’t intend to code on it, but to run Python scripts remotely and let them do their jobs over hours or days
So now, after spending many days updating it to JetPack 3.1 (a challenge in itself, doing it from an Ubuntu VM on a Mac) and configuring lots of different packages/apps/modules, I am pretty much stuck, mainly for the following reasons:
- there is very little precompiled-library support for the ARMv8 processor in the ML space; I can’t even install many Python libraries through “pip” (compilation and build errors)
- there is no real support from Nvidia for preconfigured ML packages and libraries, not even TensorFlow (my firm expectation and requirement was that at least the common libraries would already be compiled/built/configured in JetPack 3.1)
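To make the first point concrete: pip only installs a pre-built wheel if one matches the board's platform tag, and on the TX1 that tag is aarch64, for which most PyPI projects publish no wheels, so pip falls back to compiling from source (and often fails). A minimal check, using nothing beyond the Python standard library:

```python
import platform
import sys

# On a Jetson TX1 this prints "aarch64"; most PyPI projects only publish
# pre-built wheels for x86_64, so pip must compile these packages from source.
arch = platform.machine()
print("architecture:", arch)
print("python:", sys.version_info[:2])

# Rough heuristic for "pip will likely have to build from source here"
needs_source_build = arch in ("aarch64", "arm64")
print("ARMv8 board (expect source builds):", needs_source_build)
```

On my TX1 this reports aarch64, which is exactly why the usual `pip install` one-liners that work on x86_64 fail here.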
So now I am a bit lost and don’t really see a way forward… I would HUGELY appreciate any help, and ideally a comment from official Nvidia representatives on why support for ML libraries is currently so poor and whether this will change in the near future. For example, why not make your already pre-built libraries from https://www.nvidia.com/en-us/gpu-cloud/ available for Jetson as well? You can solve the ARMv8 compatibility issues better than anyone else on the planet, I would guess. I am not asking for much: just the most important ML packages, such as scikit-learn, tensorflow, pytorch, xgboost, keras (and ideally anaconda), as a very minimalistic pre-built set.
Thank you very much in advance!