Trying to get OpenCV (built with CUDA) working with FFMPEG

Current situation with Jetpack 4.5.1 (after initial installation):

  • OpenCV (Python) is outdated (4.1.1), without CUDA acceleration and without FFMPEG support
  • FFMPEG: no hardware-accelerated decoding/encoding
    Meanwhile, there is an option to get FFMPEG with hardware-accelerated decoding, as described here.

There is also an easy way to get an up-to-date OpenCV (currently 4.5.3) with FFMPEG support using:
python3 -m pip install opencv-python
But this has no hardware acceleration at all (neither in OpenCV itself nor in the FFMPEG integration).
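
You can see both facts from Python itself; a minimal check (the exact wording in the build information varies between OpenCV versions):

import cv2

# The pip wheel reports FFMPEG support in its build information ...
for line in cv2.getBuildInformation().splitlines():
    if "FFMPEG" in line:
        print(line.strip())

# ... but has no CUDA support: this returns 0 for a build without CUDA.
print(cv2.cuda.getCudaEnabledDeviceCount())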

There is also the option to build OpenCV with CUDA support yourself. You can find several scripts for this; I used an adaptation of this one. But usually all of these builds end up without FFMPEG support, even though all required libraries are found. The reason is that the small FFMPEG test program which CMake compiles during configuration fails to build (probably due to some static libraries).

I succeeded in getting a CUDA-accelerated build of OpenCV with FFMPEG after first uninstalling all existing OpenCV and FFMPEG installations:
python3 -m pip uninstall opencv-python
sudo apt-get --purge autoremove python-opencv
sudo apt --purge autoremove ffmpeg

Afterwards, I downloaded the latest FFMPEG sources from https://ffmpeg.org/download.html#releases and built them with the following configuration:
./configure --enable-shared --disable-static --prefix=/usr/local
make
sudo make install

Afterwards, you should add the library paths to the ld configuration:
sudo gedit /etc/ld.so.conf.d/ffmpeg.conf

In this file, add the paths to the FFMPEG libraries (assuming you built ffmpeg-4.4 under the folder /home/username/):
/home/username/ffmpeg-4.4/libavdevice
/home/username/ffmpeg-4.4/libavfilter
/home/username/ffmpeg-4.4/libavformat
/home/username/ffmpeg-4.4/libavcodec
/home/username/ffmpeg-4.4/libswresample
/home/username/ffmpeg-4.4/libswscale
/home/username/ffmpeg-4.4/libavutil

Afterwards, update the linker cache:
sudo ldconfig

You should also set LD_LIBRARY_PATH, e.g. by:
export LD_LIBRARY_PATH=/home/username/ffmpeg-4.4/libavdevice:/home/username/ffmpeg-4.4/libavfilter:/home/username/ffmpeg-4.4/libavformat:/home/username/ffmpeg-4.4/libavcodec:/home/username/ffmpeg-4.4/libswresample:/home/username/ffmpeg-4.4/libswscale:/home/username/ffmpeg-4.4/libavutil

With these preparations, it should be possible to build OpenCV including CUDA support and FFMPEG. You can check this after the build and installation have finished:
python3
>>> import cv2
>>> print(cv2.getBuildInformation())

In my case, it now showed:
FFMPEG: YES
and CUDA support was activated as well.
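
Besides the build information, a quick functional check that the CUDA bindings are really usable (just a dummy round trip through GPU memory, nothing specific to my setup):

import cv2
import numpy as np

print(cv2.cuda.getCudaEnabledDeviceCount())      # should now be >= 1 on the Jetson

# Upload a dummy image to the GPU and download it again.
gpu = cv2.cuda_GpuMat()
gpu.upload(np.zeros((480, 640, 3), dtype=np.uint8))
print(gpu.download().shape)                      # (480, 640, 3)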

I had previously also tried this with the hardware-accelerated FFMPEG 4.2.2 build mentioned earlier (adding --enable-shared to its configure command line), but in that case the FFMPEG test build fails during the CMake run, so you end up with OpenCV without FFMPEG integration.

Now, after all this hard work and getting confirmation of the FFMPEG integration, I felt pretty lucky!
But the enthusiasm did not last long. The problem was that this OpenCV build did not work: as soon as you use cv2.VideoCapture(…), the program terminates with a segmentation fault (core dumped).
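
For reference, it takes nothing more than this to trigger the crash (the RTSP URL is just a placeholder; in my build any source opened through the FFMPEG backend was affected):

import cv2

# Placeholder URL - replace with your own stream. The VideoCapture call itself
# already ends the process with "Segmentation fault (core dumped)".
cap = cv2.VideoCapture("rtsp://user:pass@192.168.0.10:554/stream", cv2.CAP_FFMPEG)
ok, frame = cap.read()
cap.release()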

Building a hardware-accelerated OpenCV without FFMPEG support, on the other hand, is quite easy. You typically end up with such a build when you use an FFMPEG installation that differs from the one described above for the plain (unaccelerated) version. That build also worked with hardware acceleration as expected - just without FFMPEG support.

In my case, I want to read from an RTSP stream. Using GStreamer instead of FFMPEG is possible with code similar to this example: Doesn't work nvv4l2decoder for decoding RTSP in gstreamer + opencv - #3 by DaneLLL
The problem is that the resulting display looks much worse and distorted compared to a version using FFMPEG. As soon as there is any further frame processing (even when using threading), the display lags behind with continuously increasing latency. None of these problems occur when using an FFMPEG-enabled OpenCV (even without hardware acceleration), so GStreamer is not an option for me.
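
For completeness, the GStreamer route looks roughly like this (a sketch along the lines of the linked example; the URL is a placeholder and the pipeline details depend on your stream - H.264 is assumed here):

import cv2

# RTSP decode pipeline using the Jetson hardware decoder (nvv4l2decoder), H.264 assumed.
pipeline = (
    "rtspsrc location=rtsp://user:pass@192.168.0.10:554/stream latency=200 ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! "
    "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=1"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # further frame processing here - this is where the latency kept growing for me
cap.release()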

If you have followed me up to this point and have made similar experiments yourself, you can imagine how much time I have spent on this - still without success.

So, please NVIDIA:
Give us a Jetpack with hardware-accelerated FFMPEG and a hardware-accelerated OpenCV including FFMPEG support on the Jetson product line! I like GStreamer - but not in combination with OpenCV!

Or - if this is still not officially supported - I hope someone else can provide a solid description of how to bring these worlds together.

Hi,
For hardware acceleration in ffmpeg, please refer to Jetson Nano FAQ
[Q: Is hardware acceleration enabled in ffmpeg?]

The OpenCV package installed through SDKManager does not enable the CUDA filters. Please run this script to enable them manually and re-build OpenCV:
JEP/install_opencv4.5.0_Jetson.sh at master · AastaNV/JEP · GitHub

By default we support hardware acceleration in jetson_multimedia_api and gstreamer. If you require the OpenCV CUDA filters in your use-case, we suggest running gstreamer + OpenCV or jetson_multimedia_api + OpenCV to get optimal performance. Please refer to these samples:
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL
LibArgus EGLStream to nvivafilter - #14 by DaneLLL

Hi @DaneLLL,
Thanks for your answer, but it more or less says the same as my post:

  • Hardware-accelerated FFMPEG is achievable (at least for decoding)
    (I also gave links on how to achieve this)
  • Scripts for generating CUDA-enabled OpenCV are available
    (I also gave links to scripts for that)

The point is that currently there seems to be no way to bring the two together. Even CUDA-enabled OpenCV with non-hardware-accelerated FFMPEG does not work, as anyone following my description can reproduce.

This means that on Jetson you currently have to live either with a completely unaccelerated OpenCV + FFMPEG, or with a CUDA-enabled OpenCV and its deficient GStreamer support.

Yes, I also tried the Jetson Multimedia API + OpenCV, and it works well, but it produces solutions that are not portable to other platforms and do not match the usual way people implement AI and computer vision applications.

For me this is still frustrating, and I see many, many other posts here from people desperately trying to get CUDA-enabled OpenCV with FFMPEG - as you have it on almost any other platform. This does not seem to be an exotic requirement, and hopefully it will be addressed in future Jetpack releases.

Rebuilding opencv to get CUDA support is common with Jetsons.
For FFMPEG support, having the ffmpeg libraries installed should be enough for CMake to configure OpenCV with -D WITH_FFMPEG=ON.
For custom ffmpeg, you may try this (second part of the post, starting with ‘For answering to your question’).

Hi,
NVIDIA is investing in VPI, and that is our go-to computer vision and image processing library on Jetson platforms. VPI is optimized and can bring better performance. We would suggest users check the documentation and give it a try:
VPI - Vision Programming Interface: Installation

Therefore, we don’t enable certain functions in OpenCV by default. If it is required in your use-case, please manually re-build the package. Thanks.

@Honey_Patouceul
Thank you for your hint. Unfortunately, -D WITH_FFMPEG=ON alone does not do the trick. Whether you end up with an OpenCV build including FFMPEG support depends on whether CMake was able to compile a small FFMPEG test program. There can be many reasons why this fails (e.g. static libraries).
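
As a side note, once a build is installed you can also ask OpenCV directly whether the FFMPEG backend made it in, without scanning the whole build information (videoio_registry is available in recent OpenCV 4.x builds, as far as I can tell - treat this as a sketch):

import cv2

# True only if the videoio module was built with the FFMPEG backend.
print(cv2.videoio_registry.hasBackend(cv2.CAP_FFMPEG))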

Nevertheless, your link was very helpful because it (together with many other hints here and there) helped me to figure out some additional prerequisites in order to get a successful OpenCV build including CUDA and FFMPEG.

For those who want to try their luck as well, I have put together everything I learned. Starting from a fresh Jetpack 4.5.1, you should be able to get OpenCV 4.5.3 with CUDA acceleration and FFMPEG 4.2.4 including the hardware acceleration patch from jocover. You can find my description here: Hardware accelerated OpenCV 4.5.3 build with FFMPEG 4.2.4 on NVidia Jetson · GitHub

@DaneLLL
I understand that you want to push Jetson users towards NVidia frameworks like VPI, but unfortunately, it has no Python bindings. As I wrote before, I need a solution based on OpenCV and FFMPEG with good portability to other platforms that - if running on Jetson - utilizes its hardware acceleration.

Now, with the description given in my gist, I was successful.

Hi,
Thanks for sharing. We may not be able to cover some use-cases, and those will rely on community contributions. Thanks for enabling this and providing guidance.
