Adding Gaussian Noise to a CSI-2 Camera Stream with CUDA OpenCV Leads to Low/Lagging FPS

Hi, I am trying to add Gaussian noise to a CSI-2 GStreamer camera pipeline with OpenCV. Here is my approach:

#include <opencv2/cudafilters.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace std;

int main(){
    // Capture pipeline: CSI-2 camera -> hardware conversion to BGRx -> software BGR
    const char* gst = "nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, framerate=30/1 ! "
                      "nvvidconv ! video/x-raw, format=BGRx ! "
                      "videoconvert ! video/x-raw, format=BGR ! "
                      "appsink";
    // Display pipeline: BGR -> BGRx -> hardware conversion to I420 -> overlay
    const char* gst_writer = "appsrc ! video/x-raw, format=BGR ! "
                             "videoconvert ! video/x-raw, format=BGRx, framerate=30/1 ! "
                             "nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! "
                             "nvoverlaysink";
    VideoCapture cap(gst, CAP_GSTREAMER);
    unsigned int width  = cap.get(CAP_PROP_FRAME_WIDTH);
    unsigned int height = cap.get(CAP_PROP_FRAME_HEIGHT);
    unsigned int fps    = cap.get(CAP_PROP_FPS);
    // For a GStreamer pipeline string, pass CAP_GSTREAMER and fourcc 0 (raw)
    VideoWriter writer(gst_writer, CAP_GSTREAMER, 0, fps, Size(width, height));

    Mat h_img, h_out;
    cuda::GpuMat d_img, d_bgra, d_out;
    // Create Gaussian noise buffers (4 channels, to match the BGRA frame)
    Mat h_noise(height, width, CV_8UC4);
    cuda::GpuMat d_noise(height, width, CV_8UC4);
    // Per-channel mean and standard deviation as Scalar, not the comma
    // operator: float m = (10,12,34); would just evaluate to 34
    Scalar m(10, 12, 34);
    Scalar sigma(1, 5, 50);
    while(1){
        if(!cap.read(h_img)) break;
        d_img.upload(h_img);
        // cvtColor into a separate GpuMat because the channel count changes
        cuda::cvtColor(d_img, d_bgra, COLOR_BGR2BGRA);
        // Generate Gaussian noise on the CPU, then copy it to the GPU (every frame)
        randn(h_noise, m, sigma);
        d_noise.upload(h_noise);
        cuda::add(d_noise, d_bgra, d_bgra);
        cuda::cvtColor(d_bgra, d_out, COLOR_BGRA2BGR);
        d_out.download(h_out);
        writer.write(h_out);
        // note: waitKey only catches keys when a highgui window has focus
        if(waitKey(1) == 'q'){
            break;
        }
    }
    cap.release();
    writer.release();
    return 0;
}

After doing so, the video display becomes extremely slow and laggy and cannot keep up with 30 FPS. It seems to be caused by the noise-adding step. My question is how to improve the low FPS when adding Gaussian noise to the video stream.
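To check where the time goes, here is a minimal timing sketch (added for illustration) that uses cv::TickMeter to measure the noise generation and the host-to-device copy inside the loop:

// Inside the capture loop: time the CPU noise generation and upload
cv::TickMeter tm;
tm.start();
randn(h_noise, m, sigma);   // CPU-side Gaussian noise
d_noise.upload(h_noise);    // host -> device copy
tm.stop();
cout << "noise+upload: " << tm.getTimeMilli() << " ms" << endl;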

Thanks.

Hi,
Please refer to this sample:
Nano not using GPU with gstreamer/python. Slow FPS, dropped frames - #8 by DaneLLL

Hi DaneLLL,

Instead of applying a Gaussian blur filter to the video as your post describes, I actually want to manually add Gaussian noise to the video in real time. I am assuming the lag is caused by

randn(h_noise,m,sigma)

since it generates the Gaussian distribution on the CPU in every loop iteration. Is there a better way to do this directly on a cuda::GpuMat?
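One workaround I can think of (just a sketch, not from the linked post) is to pre-generate a small pool of noise frames once at startup and cycle through them, so randn() and the upload leave the per-frame loop entirely; POOL is an arbitrary size I picked:

// Pre-generate a pool of noise frames once, reuse them each frame
const int POOL = 8;
vector<cuda::GpuMat> d_pool(POOL);
Mat h_tmp(height, width, CV_8UC4);
for(int i = 0; i < POOL; ++i){
    randn(h_tmp, m, sigma);   // CPU cost paid once, before the loop
    d_pool[i].upload(h_tmp);
}
// Inside the capture loop (frameIdx is the running frame counter):
// cuda::add(d_pool[frameIdx % POOL], d_bgra, d_bgra);

The trade-off is that the noise pattern repeats every POOL frames, but the per-frame CPU work disappears.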

Hi,
There is a concern with processing BGR data on Jetson platforms. Please check the discussion in
[Gstreamer] nvvidconv, BGR as INPUT - #2 by DaneLLL

So we would suggest processing RGBA data. Performance will be better by eliminating the RGBA <-> BGR conversion. But if that does not fit your use case, you may try to run the CPU cores at max clocks (for example, with the jetson_clocks script).
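For example, a capture pipeline along these lines keeps the conversion in hardware and skips the software videoconvert to BGR (a sketch; your OpenCV build's GStreamer backend must support capturing BGRx, otherwise it only negotiates BGR):

// Sketch: 4-channel BGRx straight from nvvidconv, no videoconvert/BGR stage
const char* gst_bgrx =
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! appsink";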

DaneLLL,

I do not think the BGRA<->BGR conversion is the bottleneck in my case. Generating Gaussian random numbers on the CPU and then copying them to the GPU is what slows the algorithm in every loop. My question actually becomes: HOW can I generate Gaussian random numbers directly on the GPU as a GpuMat?
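For what it is worth, the direction I am looking at is calling cuRAND directly on the GpuMat buffer (a sketch, not verified on Jetson; fillGaussianNoise is my own helper name, not an OpenCV API; link with -lcurand):

#include <curand.h>
#include <opencv2/core/cuda.hpp>

// Fill a continuous float GpuMat with N(mean, sigma) values on the GPU,
// so no CPU randn() and no per-frame upload are needed.
void fillGaussianNoise(cv::cuda::GpuMat& d_noise, curandGenerator_t gen,
                       float mean, float sigma)
{
    CV_Assert(d_noise.isContinuous() && d_noise.depth() == CV_32F);
    size_t n = (size_t)d_noise.rows * d_noise.cols * d_noise.channels();
    // curandGenerateNormal requires an even element count
    curandGenerateNormal(gen, d_noise.ptr<float>(), n, mean, sigma);
}

// Setup (once):
//   curandGenerator_t gen;
//   curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
//   curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
//   cv::cuda::GpuMat d_noise;
//   cv::cuda::createContinuous(height, width, CV_32FC4, d_noise); // no row padding
// Per frame: fillGaussianNoise(d_noise, gen, 20.f, 30.f), convertTo
// CV_8UC4 (saturating), then cuda::add to the BGRA frame.

One limitation of this sketch is that curandGenerateNormal takes a single mean/sigma for all channels, unlike randn with a per-channel Scalar.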

Hi,
We don't have much experience with this function and are not sure if it is possible to apply the effect to a GpuMat directly. Please go to the OpenCV forum for further suggestions:
https://forum.opencv.org/
