TX2, speed up decoder when using cv::VideoCapture

On a single Denver core of a TX2 board, this call alone takes about 10 ms. What happens inside `VideoCapture >> Mat`, and why does it take that long? I know it runs faster with more cores, but for now this line of code is only allowed to run on a single core (the Denver core is already the fastest core on the TX2); the other cores are reserved for other functions.

Is there an alternative that is faster than this line but has the same effect?

#include &lt;chrono&gt;
#include &lt;cstdio&gt;
#include &lt;sched.h&gt;
#include &lt;opencv2/opencv.hpp&gt;

int main(int argc, char** argv) {
    // Pin this process to a single core (the Denver core).
    cpu_set_t cpuset;
    int cpu = 1;
    CPU_ZERO(&cpuset);
    CPU_SET(cpu, &cpuset);
    sched_setaffinity(0, sizeof(cpuset), &cpuset);

    cv::VideoCapture cap;
    cap.open("a video file");
    cap.set(CV_CAP_PROP_CONVERT_RGB, true);
    long long total_num = cap.get(CV_CAP_PROP_FRAME_COUNT);
    float fps = cap.get(CV_CAP_PROP_FPS);

    cv::Mat ori_frame;

    for (long long i = 0; i < total_num; i++) {
        auto t_begin = std::chrono::high_resolution_clock::now();

        cap >> ori_frame;  // this line takes ~10 ms per frame

        auto t_end = std::chrono::high_resolution_clock::now();
        float ms = std::chrono::duration<float, std::milli>(t_end - t_begin).count();
        printf("Main function: cap time************************************ %f ms\n", ms);
    }
    return 0;
}

Not 100% sure for your case, but I think the cap read would wait for a new frame to be available.
If you have no other processing, it might just be waiting before reading.
Does it change if you increase the fps or add a timeout (cv::waitKey)?

You may also read this article from e-con Systems and check their helper lib for using the V4L2 userptr method.

Hi Honey_Patouceul,

Thanks for your reply.
For the current test, I only use an existing video file, not a camera. So I guess it's simpler than waiting for a frame from a camera?

cap >> ori_frame
The line above should involve two steps: 1) fetch a single frame from the stream and decode it to an image (RGB?); 2) re-organize the decoded image into cv::Mat format (copying all the bytes into a new layout may take time if prefetch/cache misses happen).

So: 1) are there parameters that allow a faster decoder than the default? 2) does cv::Mat allow some other format that can be transferred faster from the decoded image in step 1)?

It's hard to answer without knowing the source format. If you're using an H264 file, for example, then you can try a gstreamer pipeline using omxh264dec instead of your current setup, where the opencv codec runs on the CPU.
It depends on whether you need RGB format at the end for your opencv processing… with gstreamer you would use videoconvert, but that loads the CPU. If you can do your opencv processing on a YUV format (I420 or NV12), it can be faster.

Other users may have better solutions for your case.

Hi heyworld,
OpenCV uses a SW decoder. You can use gstreamer or MMAPI to leverage the HW decoder. The samples below are for your reference:
[url]https://devtalk.nvidia.com/default/topic/1022543/jetson-tx2/gstreamer-nvmm-lt-gt-opencv-gpumat/post/5311027/#5311027[/url]
[url]https://devtalk.nvidia.com/default/topic/1047563/jetson-tx2/libargus-eglstream-to-nvivafilter/post/5319890/#5319890[/url]

The source in the samples is Argus (a Bayer camera source). You need to change it to decoding (such as: filesrc ! h264parse ! omxh264dec) for your use case.
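Before wiring this into OpenCV, it may help to check the HW decode path on the TX2 itself with gst-launch (a sketch; `test.h264` is a placeholder for a raw H264 elementary stream, and an MP4 container would additionally need qtdemux before h264parse):

```shell
# Hypothetical sanity check on the TX2: decode to fakesink, no display needed.
PIPELINE='filesrc location=test.h264 ! h264parse ! omxh264dec ! fakesink'
echo "gst-launch-1.0 -e $PIPELINE"
# On the board, run the printed command and watch for decode errors.
```

If that pipeline runs cleanly, the same element chain can be reused inside the cv::VideoCapture pipeline string.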