Could not load dynamic library ''

Hi. For two days I have tried everything I found on the internet (installing, uninstalling, rebooting), but I am still stuck with this message when importing TensorFlow in Python 3.9:

2022-10-22 14:18:13.735020: I tensorflow/core/platform/] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-22 14:18:14.196931: E tensorflow/stream_executor/cuda/] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-10-22 14:18:15.108880: W tensorflow/stream_executor/platform/default/] Could not load dynamic library ''; dlerror: cannot open shared object file: No such file or directory
2022-10-22 14:18:15.108960: W tensorflow/stream_executor/platform/default/] Could not load dynamic library ''; dlerror: cannot open shared object file: No such file or directory
2022-10-22 14:18:15.108968: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

I am on Ubuntu 22.04, with an MX150 NVIDIA GPU. I’ve installed CUDA 11.8, the CUDA toolkit 11.8, cuDNN 8.6, and TensorRT 8.

What can I do please?



I am having the same issue. Did you find any solution? I tried to install the libnvinfer-related packages, but they cannot be located.


Exact same issue here with Ubuntu 22.04, different graphics board. Can’t get anything to use GPU. Tried at least a dozen recommendations on various threads.


Hi, first install TensorRT (download here), then copy the missing library file from the TensorRT lib path; that worked for me.


Could you tell me where it is?


I mostly have the same issue.
Is it possible to get some support?

@tingbopku you may be able to use the workaround posted by another user: create a symbolic link to the (incorrectly versioned) library, and if you’re on WSL2 Linux like me, add the directory to LD_LIBRARY_PATH and export it. I did this:

(kohya) nano@DESKTOP-73RPGPM:~/kohya_ss$ find / -name

sudo ln -s /home/nano/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt/ /home/nano/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt/
sudo ln -s /home/nano/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt/ /home/nano/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/nano/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt/

Thanks to xxy1836 for the workaround.
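One caveat with the export line above: it only applies to the current shell session, so the warning can come back in a new terminal. A minimal sketch of making it stick, assuming a bash setup (the tensorrt path is the example one from this post; substitute your own):

```shell
# Append the export to ~/.bashrc so every new shell picks it up.
# TRT_DIR below is the example path from this post -- replace it with
# the site-packages/tensorrt directory of your own environment.
TRT_DIR="$HOME/anaconda3/envs/kohya/lib/python3.10/site-packages/tensorrt"
echo "export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:$TRT_DIR" >> ~/.bashrc
```

Alternatively, a conda activation hook keeps the variable scoped to that one environment instead of every shell.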


This method worked for me, but the next time I ran it the same error popped up without my changing a thing. Can you help with that?

ln: failed to create symbolic link ‘/home/dextercorley/miniconda3/envs/tf/lib/python3.9/site-packages/tensorrt/’: File exists
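The "File exists" message just means a link (or file) with that name is already there, and plain `ln -s` refuses to overwrite it; it does not by itself explain why the original warning returned. A small sketch of replacing an existing link, with placeholder names since the real filenames did not survive in this thread:

```shell
# -f replaces an existing destination; -n stops ln from descending into a
# symlink that points at a directory. target.so / link.so are placeholders.
ln -sfn /path/to/target.so /path/to/link.so
```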

It seems you’re doing something wrong.
I solved it by:

  1. pip install tensorrt
  2. making symlinks:
    ln -s /home/yurimaru/miniconda3/envs/tf/lib/python3.10/site-packages/tensorrt/ /home/yurimaru/miniconda3/envs/tf/lib/
    ln -s /home/yurimaru/miniconda3/envs/tf/lib/python3.10/site-packages/tensorrt/ /home/yurimaru/miniconda3/envs/tf/lib/

(Symlinks work like: ln -s [path where the file actually exists] [where to place the link to it].)
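To make the argument order concrete, here is a throwaway sketch with made-up file names (nothing TensorRT-specific):

```shell
# ln -s <existing file> <name of the new link>
cd "$(mktemp -d)"
echo "hello" > original.txt      # the file that actually exists
ln -s original.txt shortcut.txt  # the link that points at it
cat shortcut.txt                 # reads through the link: prints "hello"
```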

Also check that you did not forget to drop “sudo”; using it here can cause other errors when you later run as a normal user.

This confused me for a while as well, so let me try to share what worked for me.

  • Downgrade tensorrt from v8 to v7 (because your TF is looking for v7).
    • Run pip install --upgrade "nvidia-tensorrt<8.0" to make sure that you’re installing the latest v7.x
  • TF reads the env variable LD_LIBRARY_PATH to look for the file, so we need to update the env variable so TF knows where to look for it.
  • export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:{PATH_TO_YOUR_CONDA}/envs/base/lib/python3.9/site-packages/tensorrt
    • Make sure to replace {PATH_TO_YOUR_CONDA} with your actual path. For my system it is /home/tony/miniconda3.
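The steps above can be sanity-checked with a short sketch; the `{PATH_TO_YOUR_CONDA}` placeholder is the one from this post, and the `pip show` check assumes the package installed cleanly:

```shell
# Confirm the installed TensorRT version (should report 7.x after the downgrade)
pip show nvidia-tensorrt | grep -i '^version'

# Extend the loader search path for this shell, then see whether TF now
# reports the GPU instead of the missing-library warning.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:{PATH_TO_YOUR_CONDA}/envs/base/lib/python3.9/site-packages/tensorrt"
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```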


  • nvidia-tensorrt and tensorrt are the same package. It is very confusing… Kind of like how, if you’re installing the Scikit-Learn library, you use conda install scikit-learn, but in Python you use import sklearn. They are aliases referring to the same thing.
  • Be very careful with which pip you’re using. There is a pip that will install to the site-packages of your system’s Python, and there is another pip that will install to your Conda’s Python.
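To check which pip you are actually running (a generic sketch, nothing TensorRT-specific):

```shell
# Show which pip binary is first on PATH and which Python it belongs to;
# the path in the --version output tells you whose site-packages it targets.
which pip
pip --version

# Safest habit: run pip through the interpreter you intend to use, so the
# package is guaranteed to land in that interpreter's site-packages.
python -m pip --version
```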

Assuming that you installed tensorrt (or nvidia-tensorrt) using pip from your Conda’s Python instead of the system’s Python, it will be in the following directory:


  • Remember to change the two variables to what you have on your system.
  • If you’re using any other version of Python (e.g., 3.7 or 3.6), change the python3.8 part of the path to match what you have on your system.
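If you’d rather not guess the version chunk at all, you can ask Python where the package landed; a sketch that assumes `tensorrt` imports in that interpreter:

```shell
# Print the directory the tensorrt package was installed into --
# that directory is what belongs on LD_LIBRARY_PATH.
python -c "import tensorrt, os; print(os.path.dirname(tensorrt.__file__))"
```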

Welcome to the NVIDIA developer forums @anthony8lee!

Thank you for this clear explanation! I really appreciate the community help.

As an additional tip, we also have a dedicated TensorRT category here on the forum, in case anyone has more questions.



Thank you! Didn’t know that, but now I do!

I have the same problem, but I could not resolve it; I posted my question in Could not load dynamic library ‘’