Hi all,
I am working on the Leopard Imaging 3-camera kit. I managed to get live video output both from the CLI and from C++ OpenCV code, but performance should be better according to the specs, so I would like to know how to retrieve frames directly on the GPU to save time.
From the CLI I use:
gst-launch-1.0 -ev \
nvcamerasrc sensor-id=0 \
! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' \
! nvvidconv \
! 'video/x-raw(memory:NVMM), format=(string)I420' \
! fpsdisplaysink text-overlay=false
which works fine @25fps.
I also managed to get two live outputs at the same time from the CLI using:
gst-launch-1.0 -ev \
videomixer name=mix sink_0::xpos=0 sink_1::xpos=640 ! fpsdisplaysink text-overlay=false \
nvcamerasrc sensor-id=0 \
! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, format=(string)I420' \
! nvvidconv ! 'video/x-raw, format=(string)I420' ! mix.sink_0 \
nvcamerasrc sensor-id=1 \
! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, framerate=(fraction)30/1, format=(string)I420' \
! nvvidconv ! 'video/x-raw, format=(string)I420' ! mix.sink_1
but this one raises a warning and runs at roughly 1 fps.
Warning raised:
/GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 5, dropped: 9, fps: 0.00, drop rate: 2.16
WARNING: from element /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstAutoVideoSink:fps-display-video_sink/GstNvOverlaySink-nvoverlaysink:fps-display-video_sink-actual-sink-nvoverlay: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0/GstAutoVideoSink:fps-display-video_sink/GstNvOverlaySink-nvoverlaysink:fps-display-video_sink-actual-sink-nvoverlay:
There may be a timestamping problem, or this computer is too slow.
Finally, and most importantly, I used Peter Moran’s C++ code to measure the FPS of my setup with OpenCV.
The results are not so good, since I only get 22 fps at 1080p, and only 8 fps at 1080p when using 3 cameras.
My questions are: how can I improve this performance? Am I using badly constructed pipelines? Is there a more efficient way to achieve this in C++? Is there a way to synchronize the streams so that image stitching could be performed?
Thanks in advance; people here are awesome, keep it up.
Below is Peter Moran’s test code, which I slightly modified to measure a multiple-camera setup (it might be a little buggy…).
/*
Example code for displaying (and finding the FPS of) gstreamer video in OpenCV.
Created by Peter Moran on 7/29/17.
Note
-------
FPS measurements are not fully accurate when displaying the video.
*/
#include <opencv2/opencv.hpp>
#include <chrono>
#include <cstring>   // strcmp
#include <deque>
#include <iostream>
#include <numeric>   // std::accumulate
#include <string>

typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::duration<float> fsec;

std::string get_tegra_pipeline(int id, int width, int height, int fps) {
    return "nvcamerasrc sensor-id=" + std::to_string(id) +
           " ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(width) +
           ", height=(int)" + std::to_string(height) +
           ", format=(string)I420, framerate=(fraction)" + std::to_string(fps) +
           "/1 ! nvvidconv ! video/x-raw, format=(string)I420 ! appsink";
}

int main(int argc, char *argv[]) {
    // Options
    int WIDTH, HEIGHT, FPS, WINDOW_SIZE, DISPLAY_VIDEO, NB_CAMERAS;
    if (argc < 7 || (argc > 1 && strcmp(argv[1], "-h") == 0)) {
        std::cout << "usage:\n\t ./test_fps WIDTH HEIGHT FPS WINDOW_SIZE DISPLAY_VIDEO NB_CAMERAS" << std::endl;
        return 0;
    }
    WIDTH = std::atoi(argv[1]);
    HEIGHT = std::atoi(argv[2]);
    FPS = std::atoi(argv[3]);
    WINDOW_SIZE = std::atoi(argv[4]);
    DISPLAY_VIDEO = std::atoi(argv[5]);
    NB_CAMERAS = std::atoi(argv[6]);
    std::cout << "Using parameters:\n\tWIDTH = " << WIDTH << "\n\tHEIGHT = " << HEIGHT
              << "\n\tFPS = " << FPS << "\n\tWINDOW_SIZE = " << WINDOW_SIZE << std::endl;

    // Sanity check version
    std::cout << "Running with OpenCV Version: " << CV_VERSION << "\n";

    // Define one gstreamer pipeline per camera
    std::deque<std::string> pipelines;
    for (int i = 0; i < NB_CAMERAS; i++) {
        pipelines.push_back(get_tegra_pipeline(i, WIDTH, HEIGHT, FPS));
    }
    std::cout << "Using pipeline: \n\t" << pipelines.front() << "\n";

    // Create OpenCV capture objects, ensure they work
    std::deque<cv::VideoCapture> caps;
    std::deque<cv::Mat> frames;
    for (int i = 0; i < NB_CAMERAS; i++) {
        caps.push_back(cv::VideoCapture(pipelines.at(i), cv::CAP_GSTREAMER));
        if (!caps.back().isOpened()) {
            std::cout << "Connection failed";
            return -1;
        }
        frames.push_back(cv::Mat());
    }

    // Time reading speed: keep a sliding window of instantaneous FPS samples
    std::deque<double> fps_samples;
    if (DISPLAY_VIDEO) {
        for (int i = 0; i < NB_CAMERAS; i++) {
            cv::namedWindow("Display window" + std::to_string(i), cv::WINDOW_AUTOSIZE);
        }
    }
    int nbf = 0;
    auto sloop = Time::now();
    while (1) {
        auto start = Time::now();
        // Grab one frame from each camera (note: the reads are sequential)
        for (int i = 0; i < NB_CAMERAS; i++) {
            caps.at(i) >> frames.at(i);
        }
        auto stop = Time::now();
        fsec duration = stop - start;
        double sec = duration.count();
        double fps = 1.0 / sec;
        if ((int) fps_samples.size() >= WINDOW_SIZE) fps_samples.pop_front();
        fps_samples.push_back(fps);
        double avg_fps = std::accumulate(fps_samples.begin(), fps_samples.end(), 0.0) / fps_samples.size();
        nbf++;

        // Display frames
        if (DISPLAY_VIDEO) {
            for (int i = 0; i < NB_CAMERAS; i++) {
                cv::imshow("Display window" + std::to_string(i), frames.at(i));
                cv::waitKey(1); // needed to show the frame
            }
        }

        // Overall FPS since the start of the loop
        auto eloop = Time::now();
        fsec tduration = eloop - sloop;
        double realfps = 1.0 / (tduration.count() / nbf);
        std::cout << fps << "\t" << avg_fps << "\t" << nbf << "\t" << tduration.count() << "\t" << realfps << std::endl;
    }
}
Compile from the CLI with:
g++ -std=c++11 test_fps.cpp -o test_fps -L/usr/lib -lopencv_core -lopencv_highgui -lopencv_videoio