Installing CUDA in a Python virtual environment: environment variables and more

Hello. I installed Ubuntu 22, and it installed the video driver by itself. Then I installed Python 3.11 and created a virtual environment:
python3 -m venv tensor
I installed CUDA and TensorFlow in this virtual environment, tensor:
python3 -m pip install tensorflow[and-cuda]
Then I ran this command to check:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
And result was:

(tenzor) student@student-jhg-87hg:~/tensor1$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-03-22 13:35:01.194335: I tensorflow/core/platform/] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-22 13:35:01.606221: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Could not find TensorRT
2024-03-22 13:35:01.869427: I external/local_xla/xla/stream_executor/cuda/] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See Documentation/ABI/testing/sysfs-bus-pci in the Linux kernel tree (v6.0).
2024-03-22 13:35:01.869738: W tensorflow/core/common_runtime/gpu/] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at "Install TensorFlow with pip" for how to download and set up the required libraries for your platform.
Skipping registering GPU devices…

I read in various sources that you need to set environment variables like this:
export CUDNN_PATH="$HOME/.local/lib/python3.10/site-packages/nvidia/cudnn"
export LD_LIBRARY_PATH="$CUDNN_PATH/lib":"/usr/local/cuda/lib64"

export PATH="$PATH":"/usr/local/cuda/bin"
But these variables apply when CUDA is installed in the base system, not in a Python virtual environment. I need CUDA to be able to "see" our NVIDIA RTX 4060 GPU. Can you help me figure out what these variables should look like in my case, or tell me what to do?
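Since `tensorflow[and-cuda]` installs the NVIDIA wheels into the virtual environment's own site-packages, the paths can be derived rather than hardcoded to `~/.local`. A minimal sketch, assuming the `nvidia/cudnn` package landed in the venv (run it inside the activated venv to print the export lines):

```python
# Print export lines pointing at the venv's own copy of cuDNN.
# Assumption: tensorflow[and-cuda] installed an "nvidia/cudnn"
# package into this environment's site-packages.
import os
import site

# In an activated venv, getsitepackages()[0] is the venv's site-packages.
site_packages = site.getsitepackages()[0]
cudnn_dir = os.path.join(site_packages, "nvidia", "cudnn")

print(f'export CUDNN_PATH="{cudnn_dir}"')
print(f'export LD_LIBRARY_PATH="{cudnn_dir}/lib:${{LD_LIBRARY_PATH}}"')
```

The `/usr/local/cuda` entries in the snippets above are only relevant if a system-wide CUDA toolkit also exists; with a pure pip install they can be dropped.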

Hello, I'm trying to figure out for myself how to set up tensorflow[and-cuda] correctly. I have the same question: how to set the paths for CUDA and cuDNN correctly.

  • System: Windows 11 > WSL2 > Ubuntu 22.04
  • CUDA and cuDNN (installed locally on WSL2)
  • tensorflow[and-cuda] (installed at )
  • Currently TF 2.16 is running, but to be honest I don’t know why. This was a trial-and-error approach, and I want to find out how to install it properly.

Setting the paths in .bashrc didn’t work for me, but when I set them at runtime, TensorFlow works. That’s what I did:

import os
from pathlib import Path
import nvidia.cudnn

# >>> CUDA <<<
os.environ['PATH'] = '/usr/local/cuda/bin:' + os.environ['PATH']

# >>> cuDNN <<<
# Get the directory containing the nvidia.cudnn module
cudnn_path = Path(nvidia.cudnn.__file__).parent

# Ensure LD_LIBRARY_PATH is set and append the necessary directories
os.environ['LD_LIBRARY_PATH'] = os.getenv('LD_LIBRARY_PATH', '')

# Append the conda and cudnn paths if they aren't already in LD_LIBRARY_PATH
# (guard against CONDA_PREFIX being unset, which would otherwise raise a TypeError)
conda_prefix = os.getenv('CONDA_PREFIX')
if conda_prefix and conda_prefix + '/lib' not in os.environ['LD_LIBRARY_PATH']:
    os.environ['LD_LIBRARY_PATH'] += f':{conda_prefix}/lib'
if str(cudnn_path / 'lib') not in os.environ['LD_LIBRARY_PATH']:
    os.environ['LD_LIBRARY_PATH'] += f':{cudnn_path}/lib'

My Questions:

  1. If I’m installing the tensorflow[and-cuda] package, is it still necessary to install CUDA and cuDNN locally?

In my understanding, tensorflow[and-cuda] should install all the necessary CUDA and cuDNN components in the environment. Is that correct?
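One way to see what tensorflow[and-cuda] actually pulled into the environment is to list the installed nvidia-* distributions; a small sketch (for TF 2.16 these are typically the cu12 wheels, but the exact set is an assumption):

```python
# List every NVIDIA wheel installed in the current environment,
# e.g. nvidia-cudnn-cu12, nvidia-cublas-cu12, ... if any are present.
import importlib.metadata as md

nvidia_pkgs = sorted(
    dist.metadata["Name"]
    for dist in md.distributions()
    if (dist.metadata["Name"] or "").startswith("nvidia-")
)
for name in nvidia_pkgs:
    print(name, md.version(name))
```

If this prints the cuDNN/cuBLAS/cuFFT wheels, the runtime libraries are already in the environment and only the loader needs to find them.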

  2. Path to CUDA and cuDNN

Setting the path to cuDNN in the environment as described above works (for me).

But for CUDA, only /usr/local/cuda/bin: from the locally installed CUDA works (for me). Is there a way, and if so how, to set the path to the CUDA installed in the environment, similar to what I did for cuDNN?
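If the nvidia-cuda-nvcc wheel is present in the environment, the same trick used for cuDNN might work for the CUDA compiler tools. This is a sketch based on the wheel's observed layout (an `nvidia.cuda_nvcc` package with a `bin/` directory), not a documented API:

```python
# Prepend the in-environment CUDA compiler tools (nvcc/ptxas) to PATH,
# mirroring the nvidia.cudnn approach above. The package name and its
# bin/ layout are assumptions about the nvidia-cuda-nvcc wheel.
import os
from pathlib import Path

try:
    import nvidia.cuda_nvcc
    nvcc_bin = Path(nvidia.cuda_nvcc.__file__).parent / "bin"
except ImportError:
    nvcc_bin = None

if nvcc_bin is not None and nvcc_bin.is_dir():
    os.environ["PATH"] = f"{nvcc_bin}:{os.environ.get('PATH', '')}"
    print(f"prepended {nvcc_bin} to PATH")
else:
    print("no in-environment CUDA tools found; keep /usr/local/cuda/bin on PATH")
```

If the environment only contains the runtime library wheels (no nvcc), falling back to the system /usr/local/cuda/bin, as you observed, may simply be the expected behaviour.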

Any ideas or links?