OpenCV H264 decoder high CPU usage

Hi All,

I recently got a Jetson Nano and am trying to get hardware decoding of an H264 video stream working. I have JetPack installed, and all OpenCV applications compile and run.

The code in C++ is simple: open an RTSP stream and run…

//open the video stream for reading
VideoCapture cap(STREAM);
double fps = cap.get(CAP_PROP_FPS);
cout << "Frames per second : " << fps << endl;

Mat frame;
while (true)
{
    bool bSuccess = cap.read(frame); // read a new frame from the stream

    // Reopen the stream if it ends or drops
    if (bSuccess == false)
    {
        cout << "Found the end of the video" << endl;
        cap.release();
        cap.open(STREAM);
        continue;
    }
}

Looking at top, a high amount of CPU is used, considering decoding is supposed to happen in hardware. It's a 1080p, 25 frames/sec RTSP video stream from a camera. The JetPack/OpenCV documentation states that OpenCV should be hardware accelerated by default.

Is this expected? … I am sure I am missing a trick.

thanks in advance,
Richard

Sorry, I have almost no experience with the OpenCV version provided in JetPack, but it has nothing accelerated (no CUDA support), only GStreamer support, which can give access to HW-accelerated plugins.

Guessing that your STREAM is just a URI, the reason might be that, without any other API specified, OpenCV videoio would use the FFmpeg backend. But the apt version of ffmpeg has no acceleration support for Jetson.

So (assuming your stream is H264 encoded), the simplest option would be to use a GStreamer pipeline with the HW-accelerated H264 decoder as input:

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>


int main (void)
{
  /* Create input pipeline */
  const char *gst_cap =
    "rtspsrc location=rtspt://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_175k.mov"
    " ! application/x-rtp, media=video, encoding-name=H264"
    " ! queue ! rtph264depay ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw,format=BGRx "
    " ! videoconvert ! video/x-raw, format=BGR ! appsink ";
  cv::VideoCapture cap (gst_cap, cv::CAP_GSTREAMER);
  if (!cap.isOpened ()) {
    std::cout << "Error: Cv::VideoCapture.open() failed" << std::endl;
    return 1;
  }
  else
    std::cout << "Cam opened  (backend: " << cap.getBackendName () << ")" << std::endl;

  unsigned int width = cap.get (cv::CAP_PROP_FRAME_WIDTH);
  unsigned int height = cap.get (cv::CAP_PROP_FRAME_HEIGHT);
  float fps = cap.get (cv::CAP_PROP_FPS);
  std::cout << "Framing : " << width << " x " << height << "@" << fps << " FPS" << std::endl;


  /* Create output pipeline */
  const char *gst_2nvegl = 
    "appsrc ! video/x-raw, format=BGR ! queue"
    " ! videoconvert ! video/x-raw, format=RGBA "
    " ! nvvidconv ! nvegltransform ! nveglglessink ";
  cv::VideoWriter writer (gst_2nvegl, cv::CAP_GSTREAMER, 0, fps, cv::Size (width, height));
  if (!writer.isOpened ()) {
    std::cout << "Error: Cv::VideoWriter.open() failed" << std::endl;
    return 2;
  }
  else
    std::cout << "Writer opened" << std::endl;


  /* Loop */
  cv::Mat frame_in (height, width, CV_8UC3); /* Mat takes rows (height) first */
  for (;;) {
    if (!cap.read (frame_in)) {
      std::cout << "Capture read error" << std::endl;
      break;
    }


    /* Process BGR frame */


    writer.write (frame_in);
  }

  writer.release ();
  cap.release ();
  return 0;
}

Alternatively, though this is a bit more involved, you may try to build your own ffmpeg with jocover’s patch.
I successfully used this script a while ago for building on AGX Xavier (you may just use make -j4 on Nano):

#!/usr/bin/env bash
set -e

cd /home/nvidia/Desktop

# Build and install library for ffmpeg
git clone https://github.com/jocover/jetson-ffmpeg.git
cd jetson-ffmpeg
cp /usr/src/jetson_multimedia_api/include/nvbuf_utils.h include
mkdir build
cd build
cmake ..
make -j8
sudo make install 

sudo ldconfig

# Build and install ffmpeg
git clone git://source.ffmpeg.org/ffmpeg.git -b release/4.2 --depth=1
cd ffmpeg
wget https://github.com/jocover/jetson-ffmpeg/raw/master/ffmpeg_nvmpi.patch
git apply ffmpeg_nvmpi.patch
sudo apt-get update
sudo apt-get install libcanberra-gtk-module

./configure --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --enable-nvmpi --enable-opengl --enable-libdrm --enable-shared 
make -j8
sudo make install

Then build OpenCV from source; it should find the new ffmpeg (check your CMake configure log).
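For reference, a sketch of the OpenCV configure step (paths are assumptions for your setup; the important part is that the CMake summary reports FFMPEG: YES):

```shell
# Assumed layout: OpenCV sources already cloned into ./opencv
cd opencv && mkdir -p build && cd build

# WITH_FFMPEG / WITH_GSTREAMER are standard OpenCV CMake switches;
# PKG_CONFIG_PATH may need adjusting so the patched ffmpeg is found first.
cmake -D CMAKE_BUILD_TYPE=Release \
      -D WITH_FFMPEG=ON \
      -D WITH_GSTREAMER=ON \
      ..
make -j4
sudo make install
```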


Rebuild your application and retry; you should see an improvement with the FFmpeg backend for this case.

Using NVIDIA’s version of ffmpeg for OpenCV would be ideal, but it may raise some issues… I haven’t investigated much further; maybe @DaneLLL or @AastaLLL or someone else has succeeded in this and may share.


Hi,
For reference, please check Jetson Nano FAQ
Q: Is hardware acceleration enabled in ffmpeg?