"CUDA driver version is insufficient for CUDA runtime version" error after installing bazel

My CUDA environment on the Jetson Nano worked fine, and I haven't changed anything related to CUDA itself.
After installing some dependencies for compiling TensorFlow, my CUDA driver broke.


./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
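
In case it helps narrow things down, here is a minimal sketch of a test program (not from the CUDA samples, just the standard cudaGetDeviceCount / cudaDriverGetVersion / cudaRuntimeGetVersion runtime calls) that prints the error string plus both version numbers; a driver number lower than the runtime number would match the message above:

// version_check.cu -- minimal sketch to see which versions disagree.
// Build (assuming nvcc is on the PATH): nvcc -o version_check version_check.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        // Error 35 is the "CUDA driver version is insufficient for CUDA runtime version" case.
        std::printf("cudaGetDeviceCount returned %d: %s\n", (int)err, cudaGetErrorString(err));
    } else {
        std::printf("cudaGetDeviceCount found %d device(s)\n", deviceCount);
    }

    int driverVersion = 0, runtimeVersion = 0;
    // These queries usually still report something even when device enumeration fails
    // (driverVersion stays 0 if the driver cannot be loaded at all).
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);
    // Versions are encoded as 1000*major + 10*minor, e.g. 10020 means 10.2.
    std::printf("Driver API version:  %d.%d\n", driverVersion / 1000, (driverVersion % 100) / 10);
    std::printf("Runtime API version: %d.%d\n", runtimeVersion / 1000, (runtimeVersion % 100) / 10);
    return 0;
}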


Can anyone help?
Is there any way to update the driver without flashing the whole system?
This has happened to me several times; many installations through apt-get can break the CUDA driver. Until now I have flashed the whole system every time.

Thanks!

The CUDA version on the Nano is special to the Jetson line (and extra special to the Nano, as it's a fairly old GPU flavor).
If your updates touch the CUDA scripts/tools/versions, they will likely break the working pre-installed tools/versions.

This happens when some tool says "I require CUDA version one bazillion," the maintainers just use the build scripts to build the packages, the packages dutifully pull version one bazillion, and then it doesn't actually work with the JetPack or Nano hardware. This is largely a packaging problem: if everyone did their job perfectly, you wouldn't be able to select things that won't work on the Nano in the package manager or in config scripts. (They don't. It's open source.)
