I am new to embedded technology. I have a TX2 kit, but I have been having trouble setting it up to do meaningful work.
First, my host PC has no NVIDIA card. Is it a prerequisite that the host also have an NVIDIA card installed in order to complete flashing the TX2? I ask because whenever I flash the kit using the host, some new drivers get installed on the host, and afterwards I am unable to boot into it.
Secondly, is there a way to attach the kit to my host machine so that I can take advantage of the TX2's GPU while working on my host machine?
Lastly, I find the fast.ai course interesting but am limited on resources. Can I adapt the TX2 so that I can install most of the modules being used for deep learning in the class?
Hi abolade_babawale, you do not need an NVIDIA GPU installed on the host to flash with JetPack. By default, JetPack will attempt to install the CUDA toolkit on the host; this is for cross-compiling the CUDA samples for the Jetson. It isn't required, however, and you can disable it under the Host section if it is causing problems. You'll still want JetPack to install the CUDA toolkit on the target Jetson, which is also done by default.
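If the host-side installs keep breaking your machine, one alternative is to flash just the base OS from the command line using the flash.sh script that ships in the L4T driver package. This is only a sketch, assuming you have already downloaded and unpacked the driver package and sample root filesystem from NVIDIA and put the TX2 into USB recovery mode; the Linux_for_Tegra path below is an example:

```
# Run from the Linux_for_Tegra directory of the L4T release (example path).
cd ~/nvidia/Linux_for_Tegra

# Flash the TX2's internal eMMC; the board must be in USB recovery mode
# (hold the REC button while tapping RESET, with the micro-USB cable attached).
sudo ./flash.sh jetson-tx2 mmcblk0p1
```

Note that flash.sh only writes the base L4T image; CUDA and the other JetPack components still have to be installed onto the Jetson separately.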
You won't be able to directly control the TX2's integrated GPU from the host (as you could if you had a discrete GPU attached to the host over PCIe). Typically what I do is SSH into my Jetson from the host and mount the Jetson's disk on the host using SSHFS. The applications still run on the Jetson, but it makes cross-development easier.
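As a minimal sketch of that workflow (the nvidia username and the 192.168.1.100 address are placeholders; substitute your Jetson's actual login and IP):

```
# On the host: install sshfs and mount the Jetson's home directory locally.
sudo apt-get install sshfs
mkdir -p ~/jetson
sshfs nvidia@192.168.1.100:/home/nvidia ~/jetson

# Edit the sources under ~/jetson with your host-side tools, then build and
# run on the Jetson itself over SSH:
ssh nvidia@192.168.1.100

# When you are done, unmount the remote filesystem:
fusermount -u ~/jetson
```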
I'm not familiar with the fast.ai course or which ML frameworks it uses, but you can find a list of install guides on the wiki here: https://elinux.org/Jetson_TX2#Deep_Learning
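Once you have a framework installed, a quick sanity check will confirm it can see the TX2's integrated GPU. PyTorch is shown here purely as an example; the actual install steps for each framework are in the wiki guides:

```
# On the Jetson: verify the framework detects the GPU through CUDA.
python3 -c "import torch; print(torch.cuda.is_available())"
# Expected output on a working install: True
```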
Thank you very much, moderator; your responses cleared up most of the misconceptions I had. I will update the forum soon with the outcome of my next attempt.