Hi. I have tried everything I found on the internet for two days (installing, uninstalling, rebooting), but I am stuck with this message when importing TensorFlow in Python 3.9:
2022-10-22 14:18:13.735020: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-22 14:18:14.196931: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 14:18:15.108880: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-10-22 14:18:15.108960: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-10-22 14:18:15.108968: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
I am on Ubuntu 22.04, with an MX150 NVIDIA GPU. I've installed CUDA 11.8, toolkit 11.8, cuDNN 8.6, and TensorRT 8.
Exact same issue here with Ubuntu 22.04, different graphics board. Can't get anything to use the GPU. Tried at least a dozen recommendations on various threads.
Hi, first install TensorRT (download here), then copy the file libnvinfer_plugin.so.8 (or libnvinfer_plugin.so) from the TensorRT lib path to libnvinfer_plugin.so.7. It works well for me.
@tingbopku you may be able to use the workaround posted by another user: create a symbolic link under the (incorrect) version name, and if you're on WSL2 Linux like me, also export LD_LIBRARY_PATH. I did this:
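A minimal sketch of that symlink-plus-export workaround, assuming the v8 libraries live in /usr/lib/x86_64-linux-gnu (that path is an assumption; find yours with `ldconfig -p | grep nvinfer`):

```shell
# Give the v8 libraries the .so.7 names TF 2.10 tries to dlopen.
# Linking into a private directory avoids touching system paths with sudo.
mkdir -p "$HOME/trt-compat"
ln -sf /usr/lib/x86_64-linux-gnu/libnvinfer.so.8        "$HOME/trt-compat/libnvinfer.so.7"
ln -sf /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.8 "$HOME/trt-compat/libnvinfer_plugin.so.7"
# Make the dynamic loader search that directory (add to ~/.bashrc to persist).
export LD_LIBRARY_PATH="$HOME/trt-compat:$LD_LIBRARY_PATH"
```

Note this only papers over the version mismatch; installing the actual v7 libraries (as suggested below) is the cleaner route.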
This confused me for a while as well, so let me try to share what worked for me.
Downgrade tensorrt from v8 to v7 (because your TF is looking for v7).
Run pip install --upgrade "nvidia-tensorrt<8.0" to make sure that you're installing the latest v7.x
TF reads the env variable LD_LIBRARY_PATH to look for the file libnvinfer_plugin.so.7, so we need to update that variable so TF knows where to look for it.
Make sure to replace {PATH_TO_YOUR_CONDA} with your actual path. For my system it is /home/tony/miniconda3.
Tips:
nvidia-tensorrt and tensorrt are the same. It is very confusing… Kind of like how if you're installing the Scikit-Learn library you use conda install scikit-learn, but in Python you use import sklearn. They are aliases referring to the same thing.
Be very careful about which pip you're using. There is a pip that will install to the site-packages of your system's Python, and there is another pip that will install to your conda's Python.
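One way to sidestep that ambiguity (assuming `python3` on your PATH resolves to the interpreter you mean to use, e.g. your conda env's): invoke pip through the interpreter instead of as a bare command.

```shell
# `python3 -m pip` always runs the pip belonging to this exact interpreter,
# so packages land in that interpreter's site-packages, not the system's.
# e.g.: python3 -m pip install --upgrade "nvidia-tensorrt<8.0"
python3 -m pip --version
```

The `--version` output shows the site-packages path pip will write to, which is a quick sanity check before installing anything.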
Assuming that you installed tensorrt (or nvidia-tensorrt) using pip from your conda's Python instead of the system's Python, it will be in the following directory:
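The exact directory depends on your Python version, but you can ask the interpreter itself where its site-packages lives; the `tensorrt/` folder with the bundled `libnvinfer*.so` files sits under it. A sketch:

```shell
# Print this interpreter's package directory, then look for tensorrt/ inside it.
python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])'
```

Run it with the same `python3` as your conda env to be sure you are inspecting the right installation.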