Libcublas file size is 0 in Jetson docker image


I am trying to build OpenCV 4.4.0 for the Jetson platform (CUDA 10.2), and let me tell you, it is a real pain to find a way to do this. Not the right way, just a way…
After a long search, I finally found your Docker image for Jetson, but the difficulties continued.
Why?
Because libcublas is not installed in your base image, and there is no documentation explaining how to get it.
The library is available on a fresh JetPack install on a real Jetson board. Why is it not in the base image, and why is it impossible to find a way to install it?
After a long time, I tried the ML image and discovered that libcublas is present there.
I was so happy… until I discovered that the file size is… 0.

I get the following error while building OpenCV 4.4.0 from source:

    6876 /usr/lib/aarch64-linux-gnu/ file not recognized: File truncated
    6877 clang: error: linker command failed with exit code 1 (use -v to see invocation)
    6878 modules/cudev/CMakeFiles/opencv_cudev.dir/build.make:97: recipe for target 'lib/' failed
    6879 make[2]: *** [lib/] Error 1
    6880 CMakeFiles/Makefile2:3854: recipe for target 'modules/cudev/CMakeFiles/opencv_cudev.dir/all' failed
    6881 make[1]: *** [modules/cudev/CMakeFiles/opencv_cudev.dir/all] Error 2

And to add to the pain, the libraries inside the /usr/local/cuda/lib64/stubs/ directory are not picked up, and I have to add their paths to the cmake command as follows, even though this works on real hardware:

    cmake \
    -D CUDA_cuda_LIBRARY=/usr/local/cuda/lib64/stubs/ \
    -D CUDA_cufft_LIBRARY=/usr/local/cuda/lib64/stubs/ \
    -D CUDA_cufftw_LIBRARY=/usr/local/cuda/lib64/stubs/ \
    -D CUDA_curand_LIBRARY=/usr/local/cuda/lib64/stubs/ \
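As an aside, CMake's FindCUDA variables such as CUDA_cuda_LIBRARY normally expect the full path to a library file rather than a directory. A sketch of the equivalent flags pointing at the individual stub libraries (the file names are assumed from a standard CUDA 10.2 layout and should be checked against the actual contents of the stubs directory):

    cmake \
    -D CUDA_cuda_LIBRARY=/usr/local/cuda/lib64/stubs/libcuda.so \
    -D CUDA_cufft_LIBRARY=/usr/local/cuda/lib64/stubs/libcufft.so \
    -D CUDA_cufftw_LIBRARY=/usr/local/cuda/lib64/stubs/libcufftw.so \
    -D CUDA_curand_LIBRARY=/usr/local/cuda/lib64/stubs/libcurand.so \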

So please, could you tell me how I can solve the problem with the libcublas file, without mounting a volume from my Jetson into the Docker container to share this file?

Edit: cuDNN is also not installed, and the available packages are only for CUDA 11.x, while on real hardware we have a version for CUDA 10.2.

Hi @daveau1, are you starting the container with --runtime nvidia? This will mount CUDA/cuDNN/TensorRT/etc. into the container at runtime. For more info, see the run commands on these pages about the containers:
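For instance, a sketch of such a run command (the image tag r32.4.3 is an assumption matching JetPack 4.4 / CUDA 10.2; substitute the tag for your JetPack version):

    # Start an L4T base container with the NVIDIA runtime, which mounts the
    # host JetPack's CUDA/cuDNN/TensorRT libraries into the container.
    sudo docker run -it --rm --runtime nvidia nvcr.io/nvidia/l4t-base:r32.4.3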


I am not running a container.
I want to create a new image, based on yours, with OpenCV 4.4.0 already compiled.
So I can't rely on the runtime mounts. At build time I need the libraries to already be present.

OK, you need to set your Docker daemon's default-runtime to nvidia as shown here:

Then reboot your system or restart the Docker service. The nvidia runtime will then also be used during docker build operations, so CUDA is available while building containers.
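A sketch of the corresponding /etc/docker/daemon.json (the "runtimes" entry is normally already present once the NVIDIA container runtime is installed; the key addition is the "default-runtime" line):

    {
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia"
    }

After editing the file, restart the daemon (for example with sudo systemctl restart docker) before running docker build again.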