Description
TensorFlow 2.16 cannot find TensorRT at import time: the startup log prints "TF-TRT Warning: Could not find TensorRT", TensorFlow fails to dlopen the GPU libraries, and `Num GPUs Available` reports 0, even though TensorRT 10.0, driver 535, and CUDA 12.4 are installed on the host.
Environment
TensorRT Version: 10.0
GPU Type: 4
Nvidia Driver Version: 535
CUDA Version: 12.4
CUDNN Version: 12.4
Operating System + Version: Ubuntu Server 22.04
Python Version (if applicable): python3
TensorFlow Version (if applicable): 2.16
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
Exact steps/commands to build your repro
Exact steps/commands to run your repro
Full traceback of errors encountered
ure_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-04-22 08:12:25.606617: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-04-22 08:12:26.270155: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at linux/Documentation/ABI/testing/sysfs-bus-pci (torvalds/linux, v6.0)
2024-04-22 08:12:26.322857: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2251] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at "Install TensorFlow with pip" for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
Num GPUs Available: 0
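The `Num GPUs Available: 0` line above matches TensorFlow's standard device-count check. A minimal reproduction sketch (an assumption about what was run, not the reporter's exact script; the import guard lets it run even where the `tensorflow` package is absent):

```python
def count_visible_gpus():
    """Return the number of GPUs TensorFlow can see, or None if TF is not installed."""
    try:
        import tensorflow as tf  # assumes the tensorflow 2.16 pip package
    except ImportError:
        return None
    # list_physical_devices("GPU") is empty when the GPU libraries fail to dlopen
    return len(tf.config.list_physical_devices("GPU"))


if __name__ == "__main__":
    n = count_visible_gpus()
    if n is None:
        print("TensorFlow is not installed")
    else:
        print("Num GPUs Available:", n)
```

On the affected machine this prints `Num GPUs Available: 0` together with the warnings shown in the log above.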
root@server1:/opt# cat tensorflow/compiler/tf2tensorrt/utils/py_utils.cc |less
cat: tensorflow/compiler/tf2tensorrt/utils/py_utils.cc: No such file or directory
root@server1:/opt#
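The `cat` above fails because `py_utils.cc` is a path inside the TensorFlow source tree, not a file that exists in an installed environment. A more direct diagnostic (a sketch added here, not part of the original report) is to ask the dynamic loader and Python whether TensorRT is visible at all, since the warning comes from TF-TRT failing to locate `libnvinfer`:

```python
import ctypes.util


def tensorrt_visible():
    """Report which TensorRT components the current environment can find."""
    report = {
        # libnvinfer is the core TensorRT runtime library that TF-TRT dlopens;
        # find_library returns None when the loader cannot locate it
        "libnvinfer": ctypes.util.find_library("nvinfer"),
    }
    try:
        import tensorrt  # the TensorRT Python wheel, if installed
        report["python_bindings"] = tensorrt.__version__
    except ImportError:
        report["python_bindings"] = None
    return report


if __name__ == "__main__":
    print(tensorrt_visible())
```

If both entries come back `None`, the TensorRT libraries are not on the loader path (e.g. missing from `LD_LIBRARY_PATH` or `ldconfig`), which would explain the `Could not find TensorRT` warning despite TensorRT 10.0 being installed.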