Dear All,
I have CUDA 12.1 Update 1
TensorFlow 2.16
TensorRT for CUDA 12.0-12.1
Ubuntu 22.04
cuDNN for CUDA 12
GTX 1650 Ti
LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/lib/python3.10/dist-packages/tensorrt:/usr/local/cuda/targets/x86_64-linux/lib/:/usr/local/cuda/extras/CUPTI/lib64:
PATH=/usr/local/cuda/bin:/home/luisgo/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin
TensorFlow is not detecting the GPU; see the log output below.
Could you please help me?
2024-03-16 10:00:43.582660: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-16 10:00:44.252382: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-03-16 10:00:46.298881: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci
2024-03-16 10:00:46.299337: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 14397214412490094656
xla_global_id: -1
]
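For reference, the device list above comes from a quick check roughly like the following (a minimal sketch of what I run; tf.config.list_physical_devices and device_lib.list_local_devices are the standard calls):

import tensorflow as tf
from tensorflow.python.client import device_lib

# Returns an empty list here, even though the CUDA libraries are on LD_LIBRARY_PATH
print(tf.config.list_physical_devices("GPU"))

# Prints only the CPU device shown above
print(device_lib.list_local_devices())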
Thanks,
Luís Gonçalves