Our application runs on both Windows and Linux: on Windows as a regular application (where it still works), and on Linux inside Docker (where it’s now broken).
This worked fine for many years: our Docker container used the NVIDIA base image nvidia/cudagl:11.1-base-ubuntu20.04 and everything just worked.
Recent changes to our build requirements for other, unrelated products required us to upgrade the container’s OS to Ubuntu 22.04.
Seeing as the cudagl images are no longer maintained, it looked like we could just use a basic Ubuntu image as the base of our container, and let the Nvidia Container Toolkit manage everything for us.
So that’s what I tried: FROM ubuntu:22.04
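For context, the new Dockerfile boils down to something like this. The ENV lines are my attempt to reproduce what the old cudagl image set for the NVIDIA Container Toolkit; the capability list is an assumption on my part and may be incomplete:

```dockerfile
# Minimal sketch of the new setup. The old nvidia/cudagl base image set the
# NVIDIA_* variables for us; with a plain Ubuntu base I set them myself.
FROM ubuntu:22.04

# Let the NVIDIA Container Toolkit expose all GPUs to the container...
ENV NVIDIA_VISIBLE_DEVICES=all

# ...and request driver capabilities. I'm not certain this list is right:
# "video" is my guess at what the encode/decode libraries need on top of
# compute, utility and graphics.
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,video
```

We run the container with `docker run --gpus all …`, the same way as before.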
Our application can see the host GPU just fine and has access to CUDA. OpenGL rendering works. Video decoding with nvDec also works.
However, calling nvEncOpenEncodeSessionEx fails, for reasons I don’t understand.
I tried using nvidia/cuda:12.0.0-base-ubuntu22.04 as the base for our Docker container instead, but that made no difference.
So I’m a little lost as to what I’m doing wrong.
Should nvEnc just work out of the box like OpenGL and nvDec do? Or should I be doing something extra in my Dockerfile (or in how we run the container) to make it work as before?
And if it should just work, how can I figure out what the problem is? The device shouldn’t be invalid: it’s the same device that worked before and still works for rendering and video decoding. (Nothing else in that system has changed for at least two years.)
(I should also point out that everything still works fine outside of Docker.)
Cheers,
James