Leopard IMX377 unable to reach 30fps@1080p

Hi all,

I am working with the Leopard Imaging three-camera kit. I managed to get live video output both from the CLI and from C++ OpenCV code, but performance should be better according to the specs, so I would also like to know how to retrieve frames directly on the GPU to save time.

On cli I use:

gst-launch-1.0 -ev \
    nvcamerasrc sensor-id=0 \
    ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
    ! nvvidconv \
    ! 'video/x-raw(memory:NVMM), format=(string)I420' \
    ! fpsdisplaysink text-overlay=false

which works, but only reaches 25 fps.

I also managed to get two live outputs at the same time from the CLI using:

gst-launch-1.0 -ev \
    videomixer name=mix sink_0::xpos=0 sink_1::xpos=640 ! fpsdisplaysink text-overlay=false \
    nvcamerasrc sensor-id=0 \
    ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, format=(string)I420' \
    ! nvvidconv ! 'video/x-raw, format=(string)I420' ! mix.sink_0 \
    nvcamerasrc sensor-id=1 \
    ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, format=(string)I420' \
    ! nvvidconv ! 'video/x-raw, format=(string)I420' ! mix.sink_1

but this one raises a warning and runs at roughly 1 fps.

Warning raised:

/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 5, dropped: 9, fps: 0.00, drop rate: 2.16
WARNING: from element /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstAutoVideoSink:fps-display-video_sink/GstNvOverlaySink-nvoverlaysink:fps-display-video_sink-actual-sink-nvoverlay: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstAutoVideoSink:fps-display-video_sink/GstNvOverlaySink-nvoverlaysink:fps-display-video_sink-actual-sink-nvoverlay:
There may be a timestamping problem, or this computer is too slow.

Finally, and most importantly, I used Peter Moran’s C++ code to measure the FPS of my setup with OpenCV.
The results are not great: I only get 22 fps at 1080p with one camera, and only 8 fps at 1080p when using three cameras.

So my questions are: how can I improve this performance? Am I using poorly chosen pipelines? Is there a more efficient way to achieve this in C++? And is there a way to synchronize the streams so image stitching can be performed?

Thanks in advance. People here are awesome, keep it up.

Find below Peter Moran’s test code, which I slightly modified to measure a multi-camera setup (it might be a little buggy…).

/*
  Example code for displaying (and finding FPS of) gstreamer video in OpenCV.
  Created by Peter Moran on 7/29/17.

  Note
  -------
  FPS measurements are not fully accurate when displaying the video.
*/

#include <opencv2/opencv.hpp>
#include <chrono>
#include <cstring>   // strcmp
#include <deque>
#include <iostream>
#include <numeric>   // std::accumulate
#include <string>

typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::duration<float> fsec;

std::string get_tegra_pipeline(int id, int width, int height, int fps) {
    return "nvcamerasrc sensor-id=" + std::to_string(id) + " ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(width) + ", height=(int)" +
           std::to_string(height) + ", format=(string)I420, framerate=(fraction)" + std::to_string(fps) +
           "/1 ! nvvidconv ! video/x-raw, format=(string)I420 ! appsink";
}

int main(int argc, char *argv[]) {
    // Options
    int WIDTH, HEIGHT, FPS, WINDOW_SIZE, DISPLAY_VIDEO, NB_CAMERAS;

    if (argc < 7 || (argc > 1 && strcmp(argv[1], "-h") == 0))  {
        std::cout << "usage:\n\t ./test_fps WIDTH HEIGHT FPS WINDOW_SIZE DISPLAY_VIDEO NB_CAMERAS" << std::endl;
        return 0;
    }
    WIDTH = std::atoi(argv[1]);
    HEIGHT = std::atoi(argv[2]);
    FPS = std::atoi(argv[3]);
    WINDOW_SIZE = std::atoi(argv[4]);
    DISPLAY_VIDEO = std::atoi(argv[5]);
    NB_CAMERAS = std::atoi(argv[6]);
    std::cout << "Using parameters:\n\tWIDTH = " << WIDTH << "\n\tHEIGHT = " << HEIGHT
              << "\n\tFPS = " << FPS << "\n\tWINDOW_SIZE = " << WINDOW_SIZE
              << "\n\tDISPLAY_VIDEO = " << DISPLAY_VIDEO << "\n\tNB_CAMERAS = " << NB_CAMERAS << std::endl;

    // Sanity check version
    std::cout << "Running with OpenCV Version: " << CV_VERSION << "\n";

    // Define the GStreamer pipelines
    std::deque<std::string> pipelines;
    for (int i = 0; i < NB_CAMERAS; i++) {
        pipelines.push_back(get_tegra_pipeline(i, WIDTH, HEIGHT, FPS));
    }
    std::cout << "Using pipeline: \n\t" << pipelines.front() << "\n";

    // Create OpenCV capture object, ensure it works.
    std::deque<cv::VideoCapture> caps;
    std::deque<cv::Mat> frames;
    for (int i = 0; i < NB_CAMERAS; i++) {
        caps.push_back(cv::VideoCapture(pipelines.at(i), cv::CAP_GSTREAMER));
        if (!caps.back().isOpened()) {
            std::cout << "Connection failed" << std::endl;
            return -1;
        }
        frames.push_back(cv::Mat());
    }

    // Time reading speed. fps_samples holds a sliding window of
    // instantaneous FPS measurements used for the running average.
    std::deque<double> fps_samples;
    if (DISPLAY_VIDEO) {
        for (int i = 0; i < NB_CAMERAS; i++) {
            cv::namedWindow("Display window" + std::to_string(i), cv::WINDOW_AUTOSIZE);
        }
    }
    int nbf = 0;
    auto sloop = Time::now();
    while (1) {
        auto start = Time::now();

        // Grab one frame from each camera
        for (int i = 0; i < NB_CAMERAS; i++) {
            caps.at(i) >> frames.at(i);
        }

        auto stop = Time::now();
        fsec duration = stop - start;
        double sec = duration.count();
        double fps = 1.0 / sec;
        if ((int) fps_samples.size() >= WINDOW_SIZE) fps_samples.pop_front();
        fps_samples.push_back(fps);
        double avg_fps = std::accumulate(fps_samples.begin(), fps_samples.end(), 0.0) / fps_samples.size();
        nbf++;

        // Display frame
        if (DISPLAY_VIDEO) {
            for (int i = 0; i < NB_CAMERAS; i++) {
                imshow("Display window" + std::to_string(i), frames.at(i));
                cv::waitKey(1); //needed to show frame
            }
        }

        auto eloop = Time::now();
        fsec tduration = eloop - sloop;
        double realfps = (1.0 / (tduration.count() / nbf));
        std::cout << fps << "\t" << avg_fps << "\t" << nbf << "\t" << tduration.count() << "\t" << realfps << std::endl;
    }
}

Compile from the CLI with:

g++ -std=c++11 test_fps.cpp -o test_fps -lopencv_core -lopencv_highgui -lopencv_videoio

Hi romain.pierson,

The current IMX377 driver only supports 4104x3046@30fps. There may be problems when scaling it to 1080p with PC-side software. We will do more testing on this. Since you have the driver source code, you can also add a 1080p mode to the driver if needed.

Hi Simon,

Indeed, I was trying to get 1080p and even a 4024x3036 resolution from the cameras, since the spec lists that as the number of active pixels, but with the resolution you suggested it works really well.

I used the following pipeline in my OpenCV code to resize the video to any size on the GPU, without losing any performance even with three cameras.

nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)4104, height=(int)3046, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420 ! nvvidconv ! video/x-raw, format=(string)I420 ! appsink

The resize pipeline lets me reach 30 fps with a 4104x3046 input resized to any output size, as long as I don’t display the video stream (that is where it gets slower).
I don’t think I need to add a 1080p mode to the driver at the moment.

Thanks for help.

romain - Why do you invoke nvvidconv twice? I have a different setup than yours with the IMX274 sensor, but the pipeline I use is one stage shorter:

nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)3864, height=(int)2174, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, width=(int)1920, height=(int)1080, format=(string)I420 ! appsink

Hi sperok,

Sorry for the late reply.

I might be mistaken, but I was under the impression that using “memory:NVMM” allows the transformation to be hardware-accelerated, or at least GPU-accelerated.

I have not timed your method to see whether it is better than mine, but mine reaches 30 fps so I’m happy! (Maybe I will benchmark it when I have time.)

If anyone has insights into what the “memory:NVMM” caps feature actually means in GStreamer pipelines, I’d love to hear them.

Romain - Glad it is working for you. I’d also appreciate more detail on NVMM and its impact on performance. Everything we are doing is either high frame rate and/or high resolution, so it is a constant concern.