OpenCV/GStreamer Streaming Optimisation

Hello,

I am using the Jetson TX2 Devboard to read the video from the onboard CSI camera and stream it back to a client PC.

I first tried with GStreamer, using the following command:
gst-launch-1.0 nvarguscamerasrc ! nvvidconv flip-method=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=10000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! rtph264pay mtu=1400 ! udpsink host=<CLIENT_IP> port=5000 sync=false async=false
With this command, I can read the video on the client PC, and the Jetson TX2 CPU usage is 10 to 20%, which is consistent with other topics I have seen.
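
For reference, a typical receiving pipeline on the client PC for such an RTP/H.264 stream looks like the sketch below. This is an assumption based on the sending pipeline above (payload type 96 is the rtph264pay default); the decoder element may differ on your platform:

gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false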

My goal is to use the Jetson for real-time video processing (object tracking, video stabilisation). Therefore I used this code:

#include <iostream>
#include <string>

#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>

int main()
{
    // Capture parameters (must match the caps in the pipelines below)
    int display_width = 1920;
    int display_height = 1080;
    double framerate = 30.0;

    // VideoCapture pipeline: CSI camera -> NVMM -> I420 in CPU memory -> appsink
    std::string Cap_pipeline("nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=I420 ! appsink");

    // VideoWriter pipeline: appsrc -> H.264 encode -> RTP payload -> UDP
    std::string Stream_Pipeline("appsrc is-live=true ! autovideoconvert ! "
        "omxh264enc control-rate=2 bitrate=10000000 ! video/x-h264, stream-format=byte-stream ! "
        "rtph264pay mtu=1400 ! udpsink host=<CLIENT_IP> port=5000 sync=false async=false");

    cv::VideoCapture Cap(Cap_pipeline, cv::CAP_GSTREAMER);
    cv::VideoWriter Stream(Stream_Pipeline, cv::CAP_GSTREAMER,
        0, framerate, cv::Size(display_width, display_height), true);

    // check for issues
    if (!Cap.isOpened() || !Stream.isOpened()) {
        std::cout << "I/O Pipeline issue" << std::endl;
        return -1;
    }

    while (true) {
        cv::Mat frame;
        Cap >> frame; // read the next frame (I420 data in a single-channel Mat)
        if (frame.empty()) break;

        cv::Mat bgr;
        cv::cvtColor(frame, bgr, cv::COLOR_YUV2BGR_I420); // CPU colour conversion

        // video processing

        Stream.write(bgr); // write the frame to the stream

        char c = (char)cv::waitKey(1);
        if (c == 27) break;
    }

    Cap.release();
    Stream.release();

    return 0;
}

With this code, the CPU usage is 40 to 60%, which seems like a significant increase.

Did I write a wrong input or output pipeline? Or is there no way to decrease the CPU usage with code like this?

OpenCV version: 4.3.0 (installed with the JEP script: https://github.com/AastaNV/JEP/tree/master/script)
JetPack version: 4.4 (DeepStream)

Thanks

Hi,
This is a limitation of the hardware converter. Please check the explanation in

Getting buffers in BGR format requires the CPU, so you will see some CPU loading.


Hi @DaneLLL

I see. So if I understand correctly, the CPU usage increase is due to the BGR format, and there is no other way around it if I want to use OpenCV?

Thanks for your help

The first thing you can do is use BGRx instead of I420 as the output of nvvidconv, and use videoconvert for the BGRx → BGR conversion. It may be a bit faster than converting in OpenCV. Reading I420 frames in OpenCV would only be better if you process your frames in Y(UV) format. If you need BGR (as most OpenCV algorithms expect), this would be better IMHO.

nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink
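
Plugged into your code, a minimal capture sketch based on that pipeline could look like this (same resolution and framerate as in your post; since videoconvert already delivers BGR to appsink, the cv::cvtColor call is no longer needed):

#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // BGRx is produced by the hardware converter (nvvidconv); only the
    // cheap BGRx -> BGR step runs on the CPU in videoconvert.
    std::string pipeline("nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink");

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return -1;

    cv::Mat frame; // arrives as 3-channel BGR, ready for OpenCV processing
    while (cap.read(frame)) {
        // video processing on frame, then write to the VideoWriter as before
    }
    return 0;
}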

A better alternative would be to use jetson-utils for getting RGB frames at a high rate. See this example.
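
As a rough sketch of what that looks like with jetson-utils (names and signatures taken from the videoSource API in https://github.com/dusty-nv/jetson-utils; check them against your installed version):

#include <jetson-utils/videoSource.h>

int main()
{
    // "csi://0" selects the onboard CSI camera via nvarguscamerasrc
    videoSource* input = videoSource::Create("csi://0");
    if (!input) return -1;

    uchar3* image = NULL; // RGB8 frame in CPU/GPU shared memory

    while (true) {
        if (!input->Capture(&image, 1000)) // 1000 ms timeout
            break;
        // process image (input->GetWidth() x input->GetHeight())
    }

    delete input;
    return 0;
}

Because the frames are delivered in CUDA-mapped memory, this path avoids the extra CPU copy and colour conversion that the appsink route requires, which is why it can sustain a higher frame rate.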