NVIDIA Jetson Argus: low capture performance

Currently, I’m working on replacing cv::VideoCapture with libArgus and trying to adapt
the 09_camera_jpeg_capture example for my purposes.

Right now I’m getting ~520 frames in Full HD after 40 seconds, which works out to ~13 fps.
This is far from the desired 30 fps in Full HD.

The attached code does nothing with the captured frames except increment a counter in a loop.

Makefile (2.5 KB)
Rules.mk (4.1 KB)
test_3_v2.cpp (12.1 KB)

When the test application exits, a Bus error (core dumped) message appears. I also tried
enabling FIFO mode, hoping it would improve capture performance, but it had no effect.

What am I doing wrong? Please advise.

Used platform:

  • Jetson Nano 4 GB B01
  • JetPack 4.5.1 (L4T 32.5.1)
  • Arducam IMX477 sensor
  • 4.9.201-tegra #1 SMP PREEMPT Sat May 8 01:13:10 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

The commented-out display path from test_3_v2.cpp:

        //NvBufferMemMap(m_dmabuf_2, 0, NvBufferMem_Read, &pdata);
        //NvBufferMemSyncForCpu(m_dmabuf_2, 0, &pdata);

        //cv::Mat imgbuf = cv::Mat(h, w, CV_8UC4, pdata);

        //if (!imgbuf.empty()) {
        //    cv::Mat display_img;
        //    cv::resize(imgbuf, display_img, cv::Size(), 0.25, 0.25, cv::INTER_LINEAR);
        //    cv::imshow("img", display_img);
        //    cv::waitKey(1);
        //}

If that’s uncommented, it could be part of the issue. You might try resizing on the ISP before you map the buffer for the CPU and create a cv::Mat; that’s what I’ve done to maintain 60 fps while still doing some trivial work in OpenCV. You can also reuse these buffers to avoid repeated reallocation. If you’re just looking to display some video, OpenCV is probably not going to be very performant no matter what you do.

I can’t paste what I’ve done since it’s proprietary, but I can tell you I used:

  • NvBufferCreateEx (to create a scratch buffer)
  • NvBufferGetParamsEx (to get some info from that buffer)
  • NvBufferMemMap (to map the scratch buffer)
  • ExtractFdFromNvBuffer (to get an fd from the input buffer)
  • NvBufferGetParams (to get parameters from the input fd)
  • NvBufferTransform (to scale and convert the input buffer into the scratch buffer)

And only at the end do more or less what you did to create a low-res, greyscale cv::Mat, which was exactly what we needed for blob detection. Basically, the only way we were able to make it fast was by doing everything possible on the ISP before using OpenCV as the last step; a rough sketch follows below. We were also able to do all the processing in a separate worker thread so that playback wasn’t blocked (the result of the OpenCV calculation did not have to be synchronized with playback).
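
Here is a rough sketch of that per-frame path, assuming a JetPack 4.x nvbuf_utils build; the function names, the GRAY8 scratch format, and the omitted error handling are illustrative, not our production code:

    // Sketch only: scale + convert to greyscale on the VIC/ISP, then wrap the
    // result in a cv::Mat without copying.
    #include <nvbuf_utils.h>
    #include <opencv2/core.hpp>

    // Create the small GRAY8 scratch buffer once and reuse it for every frame.
    static int create_scratch(int w, int h, int *fd)
    {
        NvBufferCreateParams p = {0};
        p.width = w;
        p.height = h;
        p.layout = NvBufferLayout_Pitch;            // pitch-linear so the CPU can map it
        p.colorFormat = NvBufferColorFormat_GRAY8;  // greyscale straight from the hardware
        p.payloadType = NvBufferPayload_SurfArray;
        p.nvbuf_tag = NvBufferTag_NONE;
        return NvBufferCreateEx(fd, &p);
    }

    // in_fd is the dmabuf fd of the full-res capture, e.g. from
    // ExtractFdFromNvBuffer() or IImageNativeBuffer::createNvBuffer().
    static cv::Mat downscale_to_gray(int in_fd, int scratch_fd)
    {
        NvBufferParams sp;
        NvBufferGetParams(scratch_fd, &sp);         // pitch/size of the scratch plane

        NvBufferTransformParams t = {0};
        t.transform_flag = NVBUFFER_TRANSFORM_FILTER;
        t.transform_filter = NvBufferTransform_Filter_Smart;
        NvBufferTransform(in_fd, scratch_fd, &t);   // scale + colour-convert on the VIC

        void *pdata = NULL;
        NvBufferMemMap(scratch_fd, 0, NvBufferMem_Read, &pdata);
        NvBufferMemSyncForCpu(scratch_fd, 0, &pdata);

        // Wrap without copying; the pitch may be wider than the width, hence the
        // step argument. Call NvBufferMemUnMap() once you are done with the Mat.
        return cv::Mat(sp.height[0], sp.width[0], CV_8UC1, pdata, sp.pitch[0]);
    }

Because the transform runs on the VIC, the CPU only ever touches the small greyscale plane, which is why the preprocessing was essentially free for us.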

Forewarning: the documentation is pretty good but the examples are so-so and the interface is not nearly as nice as OpenCV. Less pure C and more C++ would be nice. You’ll likely have to read and experiment a lot before it works the way you want but once you get it working, I don’t think you can beat the performance. In our case, the preprocessing was basically “free”.


Hi,
Looks like the sensor mode is 4032x3040p30. Please also try the gstreamer command below and check whether you can achieve the target fps:

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=4032,height=3040' ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v
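
If the default mode turns out to be the culprit, here is a minimal Argus-side sketch of listing the reported sensor modes and pinning one on the capture request (illustrative only: pickMode and the requested width/height are placeholders, and the actual mode list depends on the sensor driver):

    // List the modes the driver reports and pin one on the capture request
    // instead of letting Argus fall back to a default (often the largest) mode.
    #include <Argus/Argus.h>
    #include <cstdio>
    #include <vector>

    using namespace Argus;

    static SensorMode *pickMode(CameraDevice *device, Request *request,
                                uint32_t wantW, uint32_t wantH)
    {
        ICameraProperties *props = interface_cast<ICameraProperties>(device);
        std::vector<SensorMode*> modes;
        props->getAllSensorModes(&modes);

        SensorMode *chosen = NULL;
        for (size_t i = 0; i < modes.size(); i++) {
            ISensorMode *m = interface_cast<ISensorMode>(modes[i]);
            Size2D<uint32_t> res = m->getResolution();
            printf("mode %zu: %ux%u\n", i, res.width(), res.height());
            if (res.width() == wantW && res.height() == wantH)
                chosen = modes[i];
        }
        if (chosen) {
            IRequest *iReq = interface_cast<IRequest>(request);
            ISourceSettings *src =
                interface_cast<ISourceSettings>(iReq->getSourceSettings());
            src->setSensorMode(chosen);   // pin the mode for subsequent captures
        }
        return chosen;                    // NULL if no exact match was found
    }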

Thank you very much! You were absolutely right about the wrong mode; now I’m getting ~59 fps.