Version Inconsistency

I recently purchased an NVIDIA Jetson Nano developer kit (4GB LPDDR4 memory) and am trying to train a model on my host machine running Ubuntu 20.04, to later run it on my Nano.

The host machine has the following specifications:
CPU: Intel Core i7-7700HQ CPU @ 2.80GHz x 8
Memory: 32GB RAM
GPU: NVIDIA GeForce GTX 1050 w/ 4GB GDDR5 Graphics memory

When I begin training on my host machine, tensorflow-gpu does not recognize my GPU and trains the model on my CPU instead. I attempted to fix this by following the instructions on the TensorFlow website.
I downgraded my OS to Ubuntu 18.04 and followed the instructions to install nvidia-driver-450 and CUDA 10.1. Upon doing so, CUDA was automatically upgraded to version 11, which is incompatible with my model.

Is there a way to version-lock CUDA to 10.1? Or will I have to compile from source to solve this issue?


Jetsons can only use the CUDA release that ships with the flash software. Your PC, however, can have multiple versions installed simultaneously. Even if CUDA 11 was installed and CUDA 10.1 is now missing, you can add 10.1 yourself and keep both. Take a look at “/usr/local/” and see what CUDA directories are there. There should be a versioned directory named after each particular CUDA version, plus a non-versioned symbolic link. The version the symbolic link points at tends to be the one used by default when no version is specified. Perhaps both are still there? Check:
ls -ld /usr/local/cuda*
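If both versioned directories are present, you can pick which one is the default. A minimal sketch, assuming the standard “/usr/local/cuda-&lt;version&gt;” layout (the exact directory names on your system may differ):

```shell
# List installed toolkits; typically versioned dirs plus one symlink:
ls -ld /usr/local/cuda*

# Repoint the unversioned symlink at 10.1, which most builds pick up
# when no explicit version is configured:
sudo ln -sfn /usr/local/cuda-10.1 /usr/local/cuda

# Or select a version per shell session without touching the symlink:
export PATH=/usr/local/cuda-10.1/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH
nvcc --version   # should now report release 10.1
```

The per-session export approach is handy if another project on the same machine still needs CUDA 11.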

If it isn’t there, then check this out:

This may not be the particular version you want, but if you go here and set up options, then it will list some different package types you can use:

The “.run” files are just bash scripts; because the package manager does not track them, those installs won’t be automatically upgraded or replaced. The “.deb” packages can instead have a hold placed on them. Basically, if you know the name of a package, then something like:
sudo apt-mark hold <package name>
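For example, assuming the CUDA 10.1 metapackage on your system is named “cuda-10-1” (check the actual name with “dpkg -l | grep cuda” first), the hold workflow looks like this:

```shell
# Pin the package so apt upgrade/dist-upgrade leaves it alone.
# "cuda-10-1" is an assumed name; substitute what dpkg -l reports.
sudo apt-mark hold cuda-10-1

# Verify which packages are currently held:
apt-mark showhold

# Release the hold later if you decide to upgrade:
sudo apt-mark unhold cuda-10-1
```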