L4T Docker Container OpenCV issue

I’m having some issues with OpenCV inside a Docker container on my board.

I’m using the Jetpack enabled base image: https://ngc.nvidia.com/catalog/containers/nvidia:l4t-tensorflow

If I have a test video, I expect that reading a frame from it in OpenCV will return ret = True in this code:

import cv2
# Define the video stream
cap = cv2.VideoCapture('test_video.mp4')
ret, frame = cap.read()
print(ret)  # True if a frame was read successfully

Outside of Docker, this works. However, inside the container it doesn’t. The CUDA drivers are available, because I run the container with --runtime nvidia.

I’ve tried compiling a custom OpenCV build and installing GStreamer by following this guide.

A couple of other folks have had similar issues (here, here), but there doesn’t seem to be a solution yet.

Any thoughts on how I can get OpenCV working here?

Hi @windy_hinger, when you compile OpenCV, you want to build it with the -D WITH_GSTREAMER=ON option to enable GStreamer. You can check out @mdegans’ OpenCV build script, which enables GStreamer and CUDA/cuDNN.
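For reference, a minimal configure sketch with that flag (the paths and the other flags here are illustrative assumptions, not the exact build script mentioned above; adjust for your own source checkout):

```shell
# Run from a build directory alongside the opencv source tree.
# WITH_GSTREAMER=ON is the key flag for GStreamer support;
# WITH_CUDA=ON enables the CUDA modules.
cmake \
  -D CMAKE_BUILD_TYPE=Release \
  -D WITH_GSTREAMER=ON \
  -D WITH_CUDA=ON \
  -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules \
  ../opencv
```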

Alternatively, you can install into your container the version of OpenCV that comes with JetPack (which already has GStreamer enabled) by adapting the example in this Dockerfile.

Hi there dusty_nv.

It was a tricky one, but yes: I needed to ensure that all of the FFmpeg dependencies were installed, that GStreamer was installed as described above, and that OpenCV was built with those flags.

The final doozy was that opencv-python was installed earlier in the container build process, and it superseded the one I had built. Uninstalling it got everything working.
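A quick way to verify which build you are actually importing is to inspect cv2.getBuildInformation(), which lists whether GStreamer was enabled. A small sketch (the helper function is my own, not part of OpenCV):

```python
def has_gstreamer(build_info: str) -> bool:
    """Return True if an OpenCV build-information dump reports GStreamer support.

    The dump contains a line like '  GStreamer:  YES (1.14.5)' or '... NO'.
    """
    for line in build_info.splitlines():
        if "GStreamer" in line:
            return "YES" in line
    return False

# On the Jetson, with the intended cv2 on the path:
#   import cv2
#   print(has_gstreamer(cv2.getBuildInformation()))
```

If this prints False inside the container, you are likely still picking up a pip-installed opencv-python rather than the custom build.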

Additional note: I haven’t tried this from Docker, but your OpenCV VideoCapture using FFmpeg (CPU only) may be slow on Jetson. You may consider a pipeline leveraging the HW decoder, such as:

cap = cv2.VideoCapture('filesrc location=test_video.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! queue ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)

Thanks @Honey_Patouceul. Do you happen to have a pipeline for using the hardware encoder, for a case where we were using something like this?

    fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
    fps = 30
    out = cv2.VideoWriter(output_video_name, fourcc,
                          fps, (input_size, input_size))

Unfortunately, for JPG encoding the accelerated plugins from NVIDIA use a different libjpeg version and are not compatible with the JPEG library used by OpenCV.

For H.264 encoding, you may just use:

VideoWriter gst_omxh264_writer("appsrc ! queue ! videoconvert ! video/x-raw,format=I420 ! queue ! omxh264enc ! video/x-h264,format=byte-stream ! matroskamux ! filesink location=test-omxh264-writer.mkv ", cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));
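For anyone writing from Python instead of C++, the same idea can be passed to cv2.VideoWriter; the helper name is my own, and note that the fourcc argument is 0 when a GStreamer pipeline string is supplied (I've used the stream-format caps field here):

```python
def h264_writer_pipeline(path: str) -> str:
    """GStreamer pipeline string using the Jetson omxh264enc HW encoder,
    suitable for cv2.VideoWriter with the CAP_GSTREAMER backend."""
    return ("appsrc ! queue ! videoconvert ! video/x-raw,format=I420 ! queue ! "
            "omxh264enc ! video/x-h264,stream-format=byte-stream ! "
            f"matroskamux ! filesink location={path}")

# On the Jetson:
#   out = cv2.VideoWriter(h264_writer_pipeline("test.mkv"),
#                         cv2.CAP_GSTREAMER, 0, fps, (width, height))
#   out.write(frame)  # frame is a BGR numpy array of shape (height, width, 3)
```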

If you really need JPG encoding, you may try a cv::VideoWriter providing BGR to shmsink, and then have another process (maybe gst-launch-1.0) that reads from shmsrc and encodes using nvjpegenc, though I haven’t tried that.
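An untested sketch of the reader side of that two-process idea (the socket path, frame size, framerate, and output filename pattern are all assumptions, and the caps must match exactly what the OpenCV process pushes into shmsink):

```shell
# Process 2: read raw BGR frames from shared memory and JPEG-encode
# with the Jetson HW encoder (nvjpegenc). Process 1 is the cv::VideoWriter
# whose pipeline ends in: ... ! shmsink socket-path=/tmp/cv_shm
gst-launch-1.0 shmsrc socket-path=/tmp/cv_shm \
  ! video/x-raw,format=BGR,width=640,height=480,framerate=30/1 \
  ! videoconvert ! nvjpegenc ! multifilesink location=frame_%05d.jpg
```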