If I have some test video, I expect that opening it in OpenCV will return True in this code:
import cv2
# Define the video stream
cap = cv2.VideoCapture('test_video.mp4')
ret, frame = cap.read()
print(ret)
Outside of Docker this works, but inside the container it doesn't. The CUDA drivers are available, since I run the container with --runtime nvidia
I've tried compiling a custom OpenCV and installing GStreamer by following this guide.
A couple of other folks have had similar issues (here, here), but there doesn't seem to be a solution yet.
Any thoughts on how I can get OpenCV working here?
Hi @windy_hinger, when you compile OpenCV, you want to build it with the -D WITH_GSTREAMER=ON option to enable GStreamer. You can check out @mdegans' OpenCV build script, which enables GStreamer and CUDA/cuDNN.
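As a rough sketch (not a complete configure line; the exact flags depend on your JetPack version and which modules you need), the CMake step would include something like:

```shell
# Minimal OpenCV configure sketch: run from an opencv/build directory.
# WITH_GSTREAMER=ON is the key flag here; the others are common companions.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_GSTREAMER=ON \
      -D WITH_CUDA=ON \
      -D OPENCV_GENERATE_PKGCONFIG=ON \
      ..
```

After building, `cv2.getBuildInformation()` should report "GStreamer: YES".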
Alternatively, you can install into your container the version of OpenCV that comes with JetPack (which already has GStreamer enabled) by adapting this example from this Dockerfile:
It was a tricky one, but yes: I needed to ensure that all of the FFmpeg dependencies were installed, that GStreamer was installed as described above, and that OpenCV was built with those flags.
The final doozy was that opencv-python had been installed earlier in the container build process, and it superseded the version I had built. Uninstalling it got everything working.
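For anyone hitting the same shadowing problem: a quick way to see which cv2 Python will actually import is to check the module's resolved path before importing it (`cv2_location` is just a hypothetical helper name here):

```python
import importlib.util

def cv2_location():
    """Return the filesystem path of the cv2 module Python would import,
    or None if no cv2 is installed."""
    spec = importlib.util.find_spec("cv2")
    return spec.origin if spec else None

# If this points into a pip site-packages wheel (opencv-python) rather than
# your custom build's install prefix, the wheel is shadowing your build:
#   pip3 uninstall opencv-python
print(cv2_location())
```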
Additional note: I haven't tried this from Docker, but your OpenCV VideoCapture using FFmpeg (CPU only) may be slow on Jetson. You may consider using a pipeline that leverages the HW decoder, such as:
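One hedged sketch of such a pipeline (this assumes an H.264 stream in an MP4 container and that the JetPack GStreamer plugins nvv4l2decoder/nvvidconv are visible inside the container; the helper name is made up for illustration):

```python
# Hypothetical helper: build a Jetson HW-decode GStreamer pipeline string
# suitable for cv2.VideoCapture with the GStreamer backend.
def jetson_decode_pipeline(path):
    return (
        f"filesrc location={path} ! qtdemux ! h264parse ! nvv4l2decoder "
        "! nvvidconv ! video/x-raw,format=BGRx "
        "! videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
    )

pipeline = jetson_decode_pipeline("test_video.mp4")
print(pipeline)
# With an OpenCV build that has GStreamer enabled, you would then open it as:
#   import cv2
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
#   ret, frame = cap.read()
```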
Unfortunately, for JPG encoding the accelerated plugins from NVIDIA use a different jpeg library version and are not compatible with the jpeg library used by OpenCV.
If you really need JPG encoding, you may try a cv::VideoWriter providing BGR frames to shmsink, and then have another process (perhaps gst-launch-1.0) read from shmsrc and encode using nvjpegenc, though I haven't tried that.
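As a rough sketch of that idea (the socket path, caps, and output location are all made-up placeholders, and like the suggestion above this is untested), the two pipeline strings might look like:

```python
# Writer side: pipeline string you would hand to cv2.VideoWriter with the
# GStreamer backend; OpenCV pushes BGR frames into appsrc, which are
# converted and written to a shared-memory socket.
writer_pipeline = (
    "appsrc ! videoconvert ! video/x-raw,format=I420 "
    "! shmsink socket-path=/tmp/cam_shm wait-for-connection=false"
)

# Reader side: a separate process reads the raw frames back from shmsrc
# (shmsrc needs the caps restated explicitly) and encodes with nvjpegenc.
reader_cmd = (
    "gst-launch-1.0 shmsrc socket-path=/tmp/cam_shm "
    "! video/x-raw,format=I420,width=1280,height=720,framerate=30/1 "
    "! nvjpegenc ! multifilesink location=frame_%05d.jpg"
)

print(writer_pipeline)
print(reader_cmd)
```

The point of the shared-memory hop is that nvjpegenc runs in a process that never loads OpenCV's jpeg library, sidestepping the version clash.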