Hi cstotts,
I thought the code using VideoCapture would be straightforward; the relevant part is attached below. Note that the GPU code path is not supported on the TK1.
Someone suggested that OpenCV's VideoCapture is built on FFmpeg, which may not be well optimized for ARM CPUs.
NVIDIA claims that the TK1 is a monster board for computer vision, but if it takes ~45 ms per frame just to read a video file, it is a very slow monster. :-)
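
For what it is worth, this is roughly how the per-frame cost can be measured (a simplified sketch, not my exact code; it just decodes every frame with no display):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;

    cv::VideoCapture cap(argv[1]);      // video file to benchmark
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    int frames = 0;
    int64 t0 = cv::getTickCount();
    while (cap.read(frame))             // decode every frame, no display
        ++frames;
    double ms = (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    if (frames > 0)
        std::printf("%d frames, %.1f ms/frame\n", frames, ms / frames);
    return 0;
}
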
I have also read some scattered material on GStreamer, which apparently can use the hardware codec, but I cannot find any useful sample code for the very basic task of loading a video file into a program for processing. Something like the sketch below is what I had in mind.
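
This is untested and I am guessing at the element and caps names (they differ between GStreamer 0.10 and 1.0 and across L4T releases; the hardware H.264 decoder may be omxh264dec or nv_omx_h264dec). It also assumes OpenCV was built with GStreamer support, in which case VideoCapture accepts a pipeline string ending in appsink:

#include <opencv2/opencv.hpp>

int main()
{
    // Path and element names are placeholders/guesses -- see the caveats above.
    cv::VideoCapture cap(
        "filesrc location=/home/ubuntu/test.mp4 ! qtdemux ! h264parse ! "
        "omxh264dec ! videoconvert ! video/x-raw, format=BGR ! appsink");
    if (!cap.isOpened())
        return 1;                       // pipeline failed to start

    cv::Mat frame;
    while (cap.read(frame)) {
        // process frame here
    }
    return 0;
}
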
Thanks,
Peter
================================================================================
#include <opencv2/opencv.hpp>
#if GPU_TK1
#include <opencv2/gpu/gpu.hpp>
#endif

using namespace cv;
using namespace std;

// fn_cfg and process_framerate belong to the full program; they are unused
// in this stripped-down example.
int ProcessFile(string fn_cfg, string fn_video, int process_framerate)
{
#if GPU_TK1
    gpu::VideoReader_GPU m_Cap;     // GPU decoder (not supported on TK1)
    gpu::GpuMat m_InputMatGpu;      // device-side frame buffer
#else
    VideoCapture m_Cap;             // CPU decoder (FFmpeg backend)
#endif

#if GPU_TK1
    m_Cap.open(fn_video);
    if (m_Cap.isOpened()) {
#else
    if (m_Cap.open(fn_video)) {
#endif
        int key = waitKey(1);
        int pause_key = key;
        Mat m_InputMat;

#if GPU_TK1
        // Decode on the GPU, then copy the frame back to host memory.
        while (m_Cap.read(m_InputMatGpu) && key != 'q' && pause_key != 'q') {
            m_InputMatGpu.download(m_InputMat);
#else
        while (m_Cap.read(m_InputMat) && key != 'q' && pause_key != 'q') {
#endif
            // Downscale for display only.
            Mat display;
            resize(m_InputMat, display, Size(0, 0), 0.4, 0.4);
            imshow("input", display);

            key = waitKey(1);
            if (key == ' ' || pause_key == ' ')
                pause_key = waitKey(0); // pause until any key is pressed
        }

#if GPU_TK1
        m_Cap.close();
#else
        m_Cap.release();    // reset the video, to generate the desired result images
#endif
    }
    return 0;
}
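
For completeness, this is roughly how it gets called; the config file name and frame rate here are just placeholders, since the stripped-down version above does not use them:

int main()
{
    // Placeholder arguments; only fn_video matters in the example above.
    return ProcessFile("detector.cfg", "test.mp4", 30);
}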