Hi,
I got a TX2 developer kit recently and plan to do some image processing and deep learning work with it. But I heard that it's recommended to set up a server to work with the TX2 developer kit so the jobs can run better. However, I'm new to image processing and deep learning.
Is there any advice on the server's hardware specification for working with the TX2?
On one hand, you need a PC to flash JetPack from (not necessarily the machine doing deep learning training). That machine should run Ubuntu 16.04 x86_64 with at least 10GB of free disk space, and ideally have a CUDA-capable discrete GPU so you can install the host-side CUDA toolkit and compile the CUDA samples. See here for the JetPack install docs with the system specs.
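If it helps, here's a minimal Python sketch (not part of the JetPack docs) for sanity-checking a host PC against those guidelines. The 10GB free space and x86_64 targets come from the post above; detecting the GPU by looking for nvidia-smi is my own assumption for illustration:

```python
# Quick sanity check of a JetPack flashing host: architecture, free disk space,
# and whether an NVIDIA GPU/driver appears to be present (via nvidia-smi).
import platform
import shutil
import subprocess

def check_jetpack_host(path="/", min_free_gb=10):
    arch_ok = platform.machine() == "x86_64"
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    try:
        # nvidia-smi only exists if an NVIDIA driver is installed, which is a
        # reasonable hint that a CUDA-capable GPU is available on the host.
        subprocess.run(["nvidia-smi", "-L"], check=True, capture_output=True)
        has_nvidia_gpu = True
    except (FileNotFoundError, subprocess.CalledProcessError):
        has_nvidia_gpu = False

    print("x86_64 architecture:", arch_ok)
    print("Free disk space: %.1f GB (need >= %d)" % (free_gb, min_free_gb))
    print("NVIDIA GPU detected:", has_nvidia_gpu)

if __name__ == "__main__":
    check_jetpack_host()
```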
The machine that you use for deep learning training can be the same one you use for flashing JetPack to your Jetson, but it doesn't have to be (for example, many folks train in the cloud via AWS or Azure). Typically, an effective deep learning training system will include a fairly recent GPU (from the Kepler/Maxwell/Pascal/Volta families), ideally with 8GB of GPU memory or more, a decent-sized SSD depending on the size of your datasets (typically 512GB-1TB or more), and 32/64GB of system RAM. Ubuntu 16.04 or 14.04 is also common on the training machine. You can certainly use a lower-end training machine too; the training will just take longer (as long as there is some NV GPU in there - otherwise training networks for tasks like image recognition, object detection, segmentation, etc. will likely take unbearably long).
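To see how an existing box compares with those rough numbers (8GB+ GPU memory, 32/64GB system RAM), something like the sketch below works, assuming the NVIDIA driver is already installed so nvidia-smi is available. It's just an illustration, not an official tool:

```python
# Report GPU name/memory (via nvidia-smi) and total system RAM (from /proc/meminfo)
# to compare against the rough training-machine guidelines above.
import subprocess

def gpu_memory_mib():
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    gpus = []
    for line in out.strip().splitlines():
        name, mem = line.rsplit(",", 1)
        gpus.append((name.strip(), int(mem)))
    return gpus

def system_ram_gib():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 ** 2)  # kB -> GiB
    return 0.0

if __name__ == "__main__":
    for name, mem in gpu_memory_mib():
        print("%s: %d MiB GPU memory (guideline: >= 8192)" % (name, mem))
    print("System RAM: %.1f GiB (guideline: 32-64)" % system_ram_gib())
```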
@dusty_nv:
Thanks for your reply. Are there any particular requirements for the deep learning server's CPU and GPU (series, architecture, frequency, etc.)? I'm afraid the server won't fit the Jetson TX2 development needs.
Moreover, you may use your existing HostOS server, though without an NVIDIA GPU it won't perform well. On the other hand, you may run basic deep learning tasks directly on the Jetson, but do not expect real-time performance.
Thanks for your opinion.
Unfortunately, I have little time to do the 'estimation work' for a few reasons… And the top priority is to choose a server (for flashing JetPack and deep learning). So I want to know the necessary requirements for the server's CPU and GPU (series, architecture, frequency, etc.).
Referring to the reply below, it seems the requirements are:
GPU:
1. must be CUDA-capable
2. an NVIDIA GPU performs better
CPU:
(none)
However, the flash.sh method will work from any Unix device, regardless of its architecture or how powerful it is. I think one could use a Mac desktop and flash the Jetson with flash.sh, though I haven't tried it.
Moreover, the server for deep learning computations is not necessarily the same computer as the HostOS.
For example, given:
1. x86_64 Ubuntu 16.04 Desktop;
2. Jetson;
3. CUDA-capable Deep Learning server
The Jetson will be flashed from the HostOS Desktop, but the deep learning training will be done on the CUDA-capable server. Then you will deliver the trained networks to the Jetson.
The example above illustrates that 1 and 3 could be separate devices. However, 1 and 3 could be the same device as well.
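For the "deliver trained networks to the Jetson" step, a simple copy over the network is usually enough once the trained model has been exported to a file. A rough sketch follows; the hostname, username, paths, and snapshot name are all hypothetical and only for illustration:

```python
# Copy a trained network snapshot from the training server to the Jetson via scp.
import subprocess

def deploy_model(local_model_path, jetson_host="jetson-tx2.local",
                 jetson_user="nvidia", remote_dir="/home/nvidia/models/"):
    # scp the exported model file (e.g. a caffemodel or similar snapshot)
    # from the training server to the Jetson.
    subprocess.run(
        ["scp", local_model_path,
         "%s@%s:%s" % (jetson_user, jetson_host, remote_dir)],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical snapshot produced on the CUDA-capable training server.
    deploy_model("snapshot_iter_10000.caffemodel")
```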
Added note: You'll never want a video card with less than 6GB of VRAM. 3GB models won't do the job. 6GB will work for many jobs, but may still fall short at times.
Sure, a GeForce GTX 1080 or 1080 Ti would be very good too; most variants of the 1080 have at least 8GB of GPU memory, and some up to 11GB. Personally, I use a GeForce 1070M with 8GB of GPU memory in my laptop for deep learning training.
As Andrey1984 suggested, Amazon AWS, Microsoft Azure N-series, or Google Cloud are good options for avoiding a large up-front investment. NVIDIA GPU Cloud (NGC) can help you deploy deep learning frameworks to these services with the click of a button (NGC can also be used locally).