Hi, I’m trying to use hardware acceleration for video encoding and decoding on Ubuntu 18.04 with CUDA 10.2, NVIDIA driver 470 (recommended), and OpenCV 4.5.5, and I successfully compiled OpenCV with nvcuvid support. I can run example_gpu_hog smoothly; however, when I try to read a video via example_gpu_video_reader, or use the official example to read a video, it gives me the following error, and I couldn’t find any solution with Google:
what(): OpenCV(4.5.5) /home/sway/opencvcuda/opencv-4.5.5/modules/core/src/cuda/gpu_mat.cu:121: error: (-217:Gpu API call) all CUDA-capable devices are busy or unavailable in function ‘allocate’
Is there anything I can do to handle this error?
In my experience, this error is occasionally the result of having an OpenGL context established on a GPU that is different than the one where the CUDA context is established. This could only be possible if you have more than one GPU in your system (including non-NVIDIA GPUs).
If that is the situation, then there are various articles discussing how to fix this. One typical approach is to use Optimus profiles to force the behavior you desire (everything on the same GPU).
This recent thread also covers a similar idea.
Another possible reason is that you have exceeded the number of encode or decode streams that your GPU supports.
Thank you so much for the reply!
I tried the following line:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia ./test_out
And it shows another exception:
terminate called after throwing an instance of ‘cv::Exception’
what(): OpenCV(4.5.5) /home/sway/opencvcuda/opencv-4.5.5/modules/core/src/matrix_wrap.cpp:111: error: (-213:The function/feature is not implemented) You should explicitly call download method for cuda::GpuMat object in function ‘getMat_’
At first I was using the wrong FFmpeg (the one installed with apt-get); however, after I switched to the right one (compiled with CUDA and nvcuvid support), I still got the same error.
I tried debugging my program with Qt; it got stuck at the following line:
cv::Ptr<cv::cudacodec::VideoReader> d_reader = cv::cudacodec::createVideoReader(fname);
My GPU is a GTX 1060. Video encode/decode with FFmpeg + GPU works fine. Are there any other possibilities?
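For context, here is a minimal sketch of how the decode loop is typically written (assuming an OpenCV build with cudacodec support and a valid video path in `fname`; the explicit `download` call is what the matrix_wrap.cpp error above is complaining about — a `GpuMat` cannot be used where a host `cv::Mat` is expected):

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudacodec.hpp>
#include <string>

int main(int argc, char* argv[]) {
    if (argc < 2) return 1;
    const std::string fname = argv[1];  // path to an input video (assumed)

    // Decode on the GPU; frames stay in device memory as GpuMat.
    cv::Ptr<cv::cudacodec::VideoReader> d_reader =
        cv::cudacodec::createVideoReader(fname);

    cv::cuda::GpuMat d_frame;
    cv::Mat h_frame;
    while (d_reader->nextFrame(d_frame)) {
        // Transfer the frame to host memory before passing it to any
        // API that expects a cv::Mat (e.g. cv::imshow).
        d_frame.download(h_frame);
        cv::imshow("frame", h_frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```

This requires a GPU and an OpenCV CUDA build to run, so it is offered only as a sketch of the expected usage pattern.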
Sorry, I won’t be able to help you with OpenCV. That is not a product that is developed, maintained, or supported by NVIDIA. You are facing a different issue now, and the error message seems plain enough.
It appears that the first problem was the actual problem. I have only one NVIDIA GPU in my laptop, and in my code there’s one line:
This function has a default parameter, int device = 0, which determines which GPU is used.
After commenting out this line, the problem was solved. Thank you again for your help!
My guess is that the default parameter was selecting your integrated (Intel) GPU. That would be consistent with the error you reported.
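If device selection turns out to be the culprit, one way to rule it out is to enumerate the CUDA devices OpenCV can see and bind the context explicitly before creating the reader. A sketch using OpenCV's cuda module (`getCudaEnabledDeviceCount`, `setDevice`, and `printShortCudaDeviceInfo` are all part of that module); the choice of device index 0 here is an assumption for a single-NVIDIA-GPU system:

```cpp
#include <opencv2/core/cuda.hpp>
#include <iostream>

int main() {
    int n = cv::cuda::getCudaEnabledDeviceCount();
    std::cout << "CUDA-capable devices visible to OpenCV: " << n << std::endl;
    if (n < 1) return 1;

    // Bind the CUDA context to device 0 explicitly, rather than relying
    // on a default parameter buried elsewhere in the code.
    cv::cuda::setDevice(0);
    cv::cuda::printShortCudaDeviceInfo(cv::cuda::getDevice());
    return 0;
}
```

On a laptop with an integrated GPU plus a discrete NVIDIA GPU, only NVIDIA devices are counted here, so this also confirms whether the CUDA context is landing where you expect. Requires an OpenCV CUDA build to compile and run.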