I have a custom ML model compiled into a .so library, which I load from Python with ctypes. That library in turn needs to load many other .so files from OpenCV and other packages.
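For reference, the loading code boils down to the following (simplified sketch; `try_load` is just an illustration of what line 26 of my app does, not the actual code):

```python
from ctypes import CDLL, RTLD_GLOBAL

def try_load(path):
    """Attempt to dlopen a shared library; return the CDLL handle,
    or the dlopen error string if the library or one of its
    transitive dependencies cannot be resolved."""
    try:
        return CDLL(path, mode=RTLD_GLOBAL)
    except OSError as exc:
        # Missing transitive dependencies (the OpenCV .so files in my case)
        # surface here as OSError raised by dlopen
        return str(exc)

result = try_load("./libyolo_dummy.so")
print(result)
```

The point is that the OSError comes from the dynamic linker resolving dependencies of libyolo_dummy.so, not from Python itself.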
The documentation here: Enabling GPUs in the Container Runtime Ecosystem | NVIDIA Technical Blog says NOT to set the library path in nvidia/cuda containers, but even after I mount /usr/lib/x86_64-linux-gnu (the host directory where the .so files live) I get the same error.
I started copying the needed .so files into my /app directory but stopped at about 85 files, since that is obviously not the solution.
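For anyone wondering how the list grows to ~85 files: the transitive dependencies can be enumerated with ldd (shown here against /bin/ls as a stand-in, since I can't share the real libyolo_dummy.so):

```shell
# ldd lists each shared-object dependency and where it resolves;
# unresolved entries show up as "not found"
ldd /bin/ls
```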
How can I include the libraries I need when using the nvidia/cuda base container?
This is my run command:
docker run -v /usr/lib/x86_64-linux-gnu/ --rm --runtime=nvidia --name v1 -p 5000:5000 yolov3-2
And here is the error stack for clarity:
Traceback (most recent call last):
  File "./yolov3_app.py", line 26, in <module>
    lib = CDLL("./libyolo_dummy.so", RTLD_GLOBAL)
  File "/usr/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: libopencv_highgui.so.3.2: cannot open shared object file: No such file or directory
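One direction I can think of (a sketch only; the CUDA base tag and package name are guesses on my part, not something I have verified) is installing the OpenCV runtime inside the image instead of mounting it from the host:

```dockerfile
# Sketch only: base image tag and package names are assumptions
FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04

# Ubuntu 18.04 ships OpenCV 3.2, which would match libopencv_highgui.so.3.2
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 libopencv-dev && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY . /app
EXPOSE 5000
CMD ["python3", "yolov3_app.py"]
```

I am also not sure whether my `-v /usr/lib/x86_64-linux-gnu/` flag (a single path, no colon) actually bind-mounts the host directory, or just creates an anonymous volume at that path inside the container; the bind-mount form would be `-v /host/path:/container/path`. Is baking the libraries into the image the right approach here?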