Hello,
I’m trying to compare the GPU performance of the TK1 and the TX1 using OpenCV4Tegra’s MOG2 algorithm, and I’d like to know whether my experimental results look correct.
When I run the code at the bottom of this post inside the per-frame video loop, I get the following results.
□ENV:
A 1-minute AVI sample video at 1920x1080@60fps,
so 3600 frames are processed in total.
I calculated the average processing time per frame over all 3600 frames.
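The averages below are computed roughly like this (just a sketch; totalUploadMs / totalProcMs / totalDownloadMs and the per-frame uploadMs / procMs / downloadMs are placeholder names for the three elapsedTime values measured in the code at the bottom of this post):

double totalUploadMs = 0.0, totalProcMs = 0.0, totalDownloadMs = 0.0;
int frameCount = 0;                  /* reaches 3600 for the 1-minute 60 fps clip */

/* inside the per-frame loop, after timing each stage */
totalUploadMs   += uploadMs;
totalProcMs     += procMs;
totalDownloadMs += downloadMs;
++frameCount;

/* after the loop */
cout << "avg upload   (ms): " << totalUploadMs   / frameCount << endl;
cout << "avg mog2     (ms): " << totalProcMs     / frameCount << endl;
cout << "avg download (ms): " << totalDownloadMs / frameCount << endl;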
□RESULT:
Average per-frame processing time of the mog2() call
TK1: 17.7355 ms, TX1: 12.6975 ms
Average per-frame upload + download time of d_frame.upload() and d_fgmask.download()
TK1: 11.6649 ms, TX1: 5.004511 ms
The GPU memory upload + download times are about what I expected,
but I thought the processing time on the TX1 would be much lower,
since the TX1’s GFLOPS rating is more than 2.5 times that of the TK1, while the measured speedup is only about 17.7355 / 12.6975 ≈ 1.4x.
Can someone tell me whether these results, especially the processing time, look correct?
Thank you.
Mat fgmask;
gettimeofday(&t1, NULL);
d_frame.upload(cap);
gettimeofday(&t2, NULL);
mog2(d_frame, d_fgmask, mog2_param.learningCoef);
gettimeofday(&t3, NULL);
d_fgmask.download(fgmask);
gettimeofday(&t4, NULL);
/* upload time */
elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;
elapsedTime += (t2.tv_usec - t1.tv_usec) / 1000.0;
cout << elapsedTime << ",";
/* processing time */
elapsedTime = (t3.tv_sec - t2.tv_sec) * 1000.0;
elapsedTime += (t3.tv_usec - t2.tv_usec) / 1000.0;
cout << elapsedTime << ",";
/* download time */
elapsedTime = (t4.tv_sec - t3.tv_sec) * 1000.0;
elapsedTime += (t4.tv_usec - t3.tv_usec) / 1000.0;
cout << elapsedTime << endl;
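For completeness, here is roughly how that snippet is set up and driven. This is only a sketch of my loop, assuming the OpenCV 2.4 gpu module that ships with OpenCV4Tegra (cv::gpu::GpuMat, cv::gpu::MOG2_GPU); the video file name and the 0.01f learning rate are placeholders standing in for my actual mog2_param.learningCoef.

#include <sys/time.h>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

using namespace cv;

int main()
{
    VideoCapture capture("sample_1080p60.avi");   /* placeholder file name */
    if (!capture.isOpened())
        return 1;

    gpu::MOG2_GPU mog2;                /* MOG2 background subtractor on the GPU */
    gpu::GpuMat d_frame, d_fgmask;     /* device-side buffers */
    Mat cap;                           /* current host-side frame */
    struct timeval t1, t2, t3, t4;

    while (capture.read(cap))
    {
        Mat fgmask;
        gettimeofday(&t1, NULL);
        d_frame.upload(cap);                    /* host -> device copy */
        gettimeofday(&t2, NULL);
        mog2(d_frame, d_fgmask, 0.01f);         /* background subtraction on the GPU */
        gettimeofday(&t3, NULL);
        d_fgmask.download(fgmask);              /* device -> host copy */
        gettimeofday(&t4, NULL);
        /* per-stage elapsedTime computation and printing as in the snippet above */
    }
    return 0;
}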