Hello. I installed Ubuntu 22, and it installed the video driver by itself. Then I installed Python 3.11 and created a virtual environment:
python3 -m venv tensor
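I activated it with something like this (the exact path depends on where I created the venv, so the path here is just an example):
source tensor/bin/activate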
I installed CUDA and TensorFlow into this tensor virtual environment:
python3 -m pip install tensorflow[and-cuda]
Then I ran a command to check for the GPU:
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
And the result was:
(tenzor) student@student-jhg-87hg:~/tensor1$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2024-03-22 13:35:01.194335: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-22 13:35:01.606221: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-03-22 13:35:01.869427: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci
2024-03-22 13:35:01.869738: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
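As far as I understand, the CUDA libraries from tensorflow[and-cuda] end up inside the venv's site-packages. These are just diagnostic commands I came up with to see where they landed ($VIRTUAL_ENV is the variable set by the venv's activate script):
python3 -m pip list | grep -i nvidia
find "$VIRTUAL_ENV" -name "libcudnn.so*"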
I read in various sources that I need to set environment variables like this:
export CUDNN_PATH="$HOME/.local/lib/python3.10/site-packages/nvidia/cudnn"
export LD_LIBRARY_PATH="$CUDNN_PATH/lib":"/usr/local/cuda/lib64"
…
export PATH="$PATH":"/usr/local/cuda/bin"
But these variables are for when CUDA is installed in the base system, not in a Python virtual environment. I need CUDA/TensorFlow to "see" my NVIDIA RTX 4060 GPU. Can you help me with what these variables should look like in my case, or tell me what else to do?
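My guess is that in my case they should point into the venv's site-packages instead, something like this (the python3.11 path segment and the use of $VIRTUAL_ENV are my assumptions, and I have not confirmed that this actually works):
export CUDNN_PATH="$VIRTUAL_ENV/lib/python3.11/site-packages/nvidia/cudnn"
export LD_LIBRARY_PATH="$CUDNN_PATH/lib:$LD_LIBRARY_PATH"
Is this the right direction for a venv install, or is something else needed?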