I’ve been trying to get this running for almost two months, so now I’m asking here.
Environment details:
Jetson Nano, JetPack 4.6.3, L4T 32.7.3, CUDA 10.2.3, cuDNN 8.2.1.32, Python 3.8
Until yesterday, I was locked in a state where TensorFlow was installed and ‘working’. Every time I asked TensorFlow whether it could see a GPU (as in this thread: Tensorflow not using GPU of Jetson nano - #2 by ALEEF02), it reported none, so all my processing times were incredibly slow. Following that thread, I added the following lines to my .bashrc:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH
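For context, the check I run after editing .bashrc is sketched below (the helper name is mine). Note that `ctypes.util.find_library` only consults the ldconfig cache, not LD_LIBRARY_PATH directly, so it’s a rough sanity check on the loader state; the actual TensorFlow query is left as a comment since it only works when TensorFlow imports cleanly:

```python
# Sanity checks after editing .bashrc (sketch; helper name is mine).
import ctypes.util
import os

def cudart_in_ldconfig_cache():
    # find_library parses the ldconfig cache, so this shows whether the
    # loader knows about any libcudart at all (regardless of version).
    return ctypes.util.find_library("cudart")

print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
print("libcudart in ldconfig cache:", cudart_in_ldconfig_cache())

# The actual GPU query (requires TensorFlow to import without errors):
#   import tensorflow as tf
#   print(tf.config.list_physical_devices('GPU'))
```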
Now, TensorFlow cannot find CUDA at all. It tries to open libcudart.so.11.0 but fails, since JetPack 4.6.3 ships CUDA 10.2. I’ve tried removing those lines from my .bashrc, re-sourcing it, and rebooting, but it’s still failing. I’ve also tried reinstalling TensorFlow, but I get the same error.
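In case it helps with diagnosis, this is how I’m confirming which CUDA runtime libraries actually exist on disk (a sketch; the paths assume the stock JetPack 4.x install locations and aren’t exhaustive):

```python
# List any CUDA runtime libraries on disk, to confirm only the 10.2 runtime
# is present (so any attempt to load libcudart.so.11.0 can only fail).
import glob

patterns = [
    "/usr/local/cuda/lib64/libcudart.so*",       # JetPack CUDA toolkit path
    "/usr/lib/aarch64-linux-gnu/libcudart.so*",  # distro-packaged location
]
for pattern in patterns:
    print(pattern, "->", glob.glob(pattern) or "nothing found")
```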
Two interesting notes on the reinstall:
A) I had originally been using tensorflow-2.11.0+nv23.1 from https://developer.download.nvidia.com/compute/redist/jp/v51/tensorflow without even noticing. How was that working at all, given that I’m on JetPack 4.6.3?? I did get TensorFlow to run with that JP5.1 build, but only on the CPU; regardless, it still ran and interpreted.
B) I tried installing the JetPack 4.6 version (tensorflow-2.7.0+nv22.1) from https://developer.download.nvidia.com/compute/redist/jp/v46, but pip refuses it with “...is not a supported wheel on this platform”. What?? Am I not on JetPack 4.6.3? I swear I am, especially since the Nano doesn’t support JetPack 5(?)
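To dig into why pip rejects that wheel, I printed the interpreter/platform properties pip compares against the tags in a wheel’s filename. My hedged suspicion: if I’m reading the filenames right, the v46 wheels are built for Python 3.6 (JetPack 4.x ships Ubuntu 18.04), and I’m on Python 3.8, so the cp-tag can never match regardless of the JetPack version:

```python
# Print the interpreter/platform properties that pip matches against a
# wheel filename's tags, e.g. ...-cp36-cp36m-linux_aarch64.whl (illustrative).
import platform
import sys

print("python tag :", "cp%d%d" % sys.version_info[:2])  # e.g. cp38
print("machine    :", platform.machine())               # expect aarch64 on a Nano
print("system     :", platform.system().lower())        # expect linux
```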
What should I do in this situation? I really need TensorFlow running on the Jetson Nano’s integrated GPU, and soon. I’m willing to upload a log of everything I’ve tried so far, since I’ve been noting it all down. Any advice is appreciated, even if that means reflashing.
Regards,
Anthony