TensorFlow is not recognising the GPU

Good day! The TensorFlow library is not detecting my GPU.
I'm using an NVIDIA GeForce RTX 4090 GPU. Here are the TensorFlow and CUDA details:
(base) sri@sri:~$ nvidia-smi
Fri Apr 5 21:50:14 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.73.01 Driver Version: 552.12 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4090 … On | 00000000:01:00.0 Off | N/A |
| N/A 45C P8 6W / 60W | 478MiB / 16376MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
(base) sri@sri:~$ pip show tensorflow
Name: tensorflow
Version: 2.16.1
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
Location: /home/sri/miniconda3/lib/python3.11/site-packages
Requires: absl-py, astunparse, flatbuffers, gast, google-pasta, grpcio, h5py, keras, libclang, ml-dtypes, numpy, opt-einsum, packaging, protobuf, requests, setuptools, six, tensorboard, tensorflow-io-gcs-filesystem, termcolor, typing-extensions, wrapt
Required-by:
When I run python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
I'm getting the output below:
2024-04-05 21:55:29.786047: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-04-05 21:55:29.835334: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-05 21:55:30.706071: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-04-05 21:55:31.661712: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-04-05 21:55:31.669885: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at GPU support | TensorFlow for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…

Can you please let me know what is missing here?
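For anyone hitting the same message: the key line in the log is "Cannot dlopen some GPU libraries", which means TensorFlow sees the driver but cannot load the CUDA runtime and cuDNN shared libraries. On Linux, TensorFlow 2.16 expects these to come from the pip-packaged NVIDIA wheels (installed via `pip install --upgrade "tensorflow[and-cuda]"`), not from a system toolkit. A minimal sketch to check whether the dynamic loader can find them; the soname versions are assumptions based on TF 2.16 being a CUDA 12 / cuDNN 8 build:

```python
import ctypes

# TensorFlow 2.16 is built against CUDA 12 and cuDNN 8; these library
# sonames are assumptions based on that. If they fail to load, TF falls
# back to CPU with the "Cannot dlopen some GPU libraries" warning above.
for lib in ("libcudart.so.12", "libcudnn.so.8"):
    try:
        ctypes.CDLL(lib)
        print(lib, "-> loadable")
    except OSError:
        print(lib, "-> not found on the loader path")
```

If these report "not found", running `pip install --upgrade "tensorflow[and-cuda]"` inside the same conda environment pulls in the matching `nvidia-*` wheels, after which `tf.config.list_physical_devices('GPU')` should list the card.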

Did you install CUDA with TensorFlow or separately?

Thanks for the response. I tried several things. I'm not really sure whether I installed CUDA separately or whether it came with my laptop. If it isn't built in, I might have installed it separately. One thing I do remember is that I downloaded the NVIDIA drivers and installed them. I'm not sure whether CUDA is part of that NVIDIA driver package.

I would really appreciate it if anyone could suggest a solution for this.
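On the "did CUDA come with the driver?" question: the driver and the CUDA toolkit are separate installs, and the "CUDA Version: 12.4" shown by nvidia-smi only states the highest version the driver supports, not that a toolkit is actually installed. A quick sketch to check for a system-wide toolkit:

```python
import shutil
import subprocess

# The NVIDIA driver (what nvidia-smi reports) does not include the CUDA
# toolkit. nvcc only exists if the toolkit was installed separately.
nvcc = shutil.which("nvcc")
print("nvcc on PATH:", nvcc)  # None means no system toolkit was found
if nvcc:
    print(subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout)
```

Either way, the TensorFlow 2.16 pip wheel does not pick up a system toolkit by default; the `tensorflow[and-cuda]` pip extra is the supported path for GPU support on Linux.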

Hi,

I have the same problem. @sreekanth1984, did you find a solution?

Thanks a lot,

Not really… I started using PyTorch.