Hi, I'm new to VisionWorks, and I'm studying what advantages VisionWorks has compared to other image-processing libraries such as OpenCV or low-level CUDA.
When our lab tested VisionWorks for power usage, it did not show better power efficiency than the OpenCV GPU module, and in image processing it showed good performance in only one case, the Canny edge function.
After that, we realized that comparing the performance of a single function is not very meaningful, because we suspected that VisionWorks's high performance comes from its stream processing and graph structure.
Today, when I tested stream processing, it did show a performance difference compared to the OpenCV GPU module.
We ran a typical Canny edge filtering pipeline:
- convert color to gray
- blur the image (we used a box filter, because VisionWorks doesn't have a bilateral filter and OpenCV 2.x doesn't have a GPU version of the median filter)
- Canny edge detection
In this test, VisionWorks performed better than OpenCV, but I don't understand how this result came about.
So my questions are:
- Why didn't VisionWorks show better energy efficiency than OpenCV when we tested it on image processing?
- Why does VisionWorks's streaming (video) processing outperform OpenCV? (In my case, on 2K video, the actual computation time is about 3x better than OpenCV.)
- Is there any problem in my OpenCV code, and could such a problem account for the performance difference?
#include "opencv2/opencv.hpp"
#include <iostream>
#include "opencv2/gpu/gpu.hpp"
using namespace cv;
using namespace std;
int main(int, char**)
{
VideoCapture cap("NORWAY 2K.mp4"); // open video
if(!cap.isOpened()) // check if we succeeded
return -1;
double fps = cap.get(CV_CAP_PROP_FPS);
// For OpenCV 3, you can also use the following
// double fps = video.get(CAP_PROP_FPS);
cout << "Frames per second using video.get(CV_CAP_PROP_FPS) : " << fps << endl;
//namedWindow("edges",1);
Mat frame;
gpu::GpuMat src;
Size ksize;
ksize.width =3;
ksize.height =3;
cv::gpu::GpuMat gray;
cv::gpu::GpuMat blurred;
cv::gpu::GpuMat edges;
for(;;)
{
const int64 startWhole = getTickCount();
cap >> frame; // get a new frame from camera
src.upload(frame);
cv::gpu::cvtColor(src, gray, CV_BGR2GRAY);
gpu::boxFilter(gray, blurred, -1,ksize);
gpu::Canny(blurred, edges, 120, 240, 3, false);
Mat edges_host;
edges.download(edges_host);
double timeSec = (getTickCount() - startWhole) / getTickFrequency();
std::cout << "Whole Time : " << timeSec << " sec" << std::endl;
imshow("edges", edges_host);
if(waitKey(30) >= 0) break;
}
return 0;
}
- I think the OpenCV GPU module represents an ordinary CUDA kernel workflow. So can I assume that VisionWorks's advantage in today's test is a distinctive characteristic of VisionWorks compared to a plain CUDA kernel pipeline, or is it just a weakness of the OpenCV library? Is my assumption right?