gstreamer NVMM <-> opencv gpuMat

Hello World,

I’ve made a video program that reads from nvcapturesrc or from filesrc/omxh264dec, processes the video using ONLY cv::cuda functions, and sends the processed video to a stream, encoding the result with omxh264enc.

So far here are the steps of my program:

  1. open the stream using cv::VideoCapture( "... NVMM ... appsink" )
  2. retrieve each frame to a cv::Mat
  3. upload each frame to a cv::cuda::GpuMat
  4. process the frame using multiple cv::cuda:: functions
  5. download the result to a cv::Mat
  6. write to a stream with cv::VideoWriter( "appsrc ... NVMM ..." )
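The steps above might look roughly like the sketch below (hedged: the pipeline strings, frame size, and element options are illustrative, not taken from my actual program, and OpenCV 3-style cv::cuda names are assumed). It mainly shows where the two CPU<->GPU copies happen:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudafilters.hpp>

int main()
{
    // 1. open the source: decoding happens on the GPU, but appsink hands
    //    OpenCV system-memory BGR buffers (the first copy out of NVMM)
    cv::VideoCapture cap("filesrc location=in.mp4 ! qtdemux ! h264parse ! omxh264dec "
                         "! nvvidconv ! video/x-raw, format=BGRx ! videoconvert "
                         "! video/x-raw, format=BGR ! appsink");
    // 6. open the sink: appsrc feeds CPU buffers back into the encode chain
    cv::VideoWriter out("appsrc ! videoconvert ! omxh264enc ! matroskamux "
                        "! filesink location=out.mkv",
                        0, 30.0, cv::Size(1280, 720));
    if (!cap.isOpened() || !out.isOpened())
        return -1;

    cv::Mat frame, result;
    cv::cuda::GpuMat d_frame, d_result;
    // example processing: any cv::cuda algorithm would go here
    cv::Ptr<cv::cuda::Filter> blur =
        cv::cuda::createGaussianFilter(CV_8UC3, CV_8UC3, cv::Size(5, 5), 2.0);

    while (cap.read(frame))            // 2. retrieve each frame to a cv::Mat
    {
        d_frame.upload(frame);         // 3. CPU -> GPU copy
        blur->apply(d_frame, d_result);// 4. process with cv::cuda functions
        d_result.download(result);     // 5. GPU -> CPU copy
        out.write(result);             // 6. hand back to GStreamer
    }
    return 0;
}
```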

From profiling the program I see that it spends most of its time copying data back and forth between the CPU and the GPU.

Is there an easy way to go straight from NVMM to GpuMat and back without copying?

I saw that this question was asked in other topics but none has been clearly answered so far…
https://devtalk.nvidia.com/default/topic/978438/jetson-tx1/optimizing-access-to-image-data-acquired-with-nvcamerasrc
https://devtalk.nvidia.com/default/topic/1010111/nvmm-memory/

Best regards,

If you can live with one frame’s latency, you could double-buffer, using two image matrices and ping-ponging between them.

I.e., you could:

forever:
    start processing using cuda (initially, an empty image)
    capture the new image
    upload the new image
    download the finished image

Hi Foloex,
For camera input, we suggest using Argus + OpenCV:
https://devtalk.nvidia.com/default/topic/987537/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/

For decoding, we suggest using nvivafilter:
https://devtalk.nvidia.com/default/topic/963123/jetson-tx1/video-mapping-on-jetson-tx1/post/4979740/#4979740

That’s a good idea to bypass the issue, thanks ;-)

I don’t have any issue with the camera. Is nvcapturesrc deprecated or not recommended, since you suggest Argus instead?

I’ve browsed through those posts and some of the sources from L4T, but I still don’t understand how to go from (NVMM) to cv::cuda::GpuMat. Do you have an actual working example somewhere?

Hi Foloex,
Which cv::cuda::GpuMat functions are must-haves for you? By using nvivafilter, you can do direct CUDA programming.

I need GpuMat so I can use all the OpenCV algorithms that have a CUDA implementation:

  • cudaarithm. Operations on Matrices
  • cudabgsegm. Background Segmentation
  • cudacodec. Video Encoding/Decoding
  • cudafeatures2d. Feature Detection and Description
  • cudafilters. Image Filtering
  • cudaimgproc. Image Processing
  • cudalegacy. Legacy support
  • cudaobjdetect. Object Detection
  • cudaoptflow. Optical Flow
  • cudastereo. Stereo Correspondence
  • cudawarping. Image Warping
  • cudev. Device layer

I’m not a CUDA developer, but I know OpenCV quite well, so I’d rather use the already available functions than spend time reimplementing everything (possibly less efficiently)…

Please refer to the patch based on nvsample_cudaprocess_src.tbz2 in
https://developer.nvidia.com/embedded/dlc/l4t-documentation-28-1

Verified the following command on r28.1/TX2:

$ gst-launch-1.0 filesrc location= ~/Bourne_Trailer.mp4 ! qtdemux name=demux ! h264parse ! omxh264dec ! nvivafilter customer-lib-name=libnvsample_cudaprocess.so cuda-process=true ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvegltransform ! nveglglessink

0001-nvsample_cudaprocess-hook-cv-gpu-gpuMat.patch.txt (3.43 KB)

From what I see in your example, that would be great. I don’t think the link you provided is correct though: I can only find documentation in that file. If I recall correctly, this code is in L4T’s source archive.

I already looked into it but still don’t understand how to create a GpuMat from CUDA memory.

Hi, are you able to see the attachment?

I must have missed it the first time, sorry…
Yes, that’s perfect, thanks!

Hi, I have the exact same problem, but I still have not figured out how to get the stream images into a cuda::GpuMat. Can you please post a code example?

I haven’t had the time to use this code yet but it seems complete. I can post it to github once it’s running. I’ll keep you posted.

Thank you very much. Maybe @DaneLLL can share with us if they have any opencv sample codes.

There is a sample at #7. Can you see it?

Yes, but I still do not understand how to get the frames from a video or a camera into a GpuMat within OpenCV code. Can you provide something like this, for example?

int main(int argc, char** argv)
{
  VideoCapture cap("whatever needs to go here");

  if (!cap.isOpened())
  {
    cout << "Failed to open camera." << endl;
    return -1;
  }

  for (;;)
  {
    cuda::GpuMat frame_gpu;
    cap >> frame_gpu;

    // (do CUDA code on frame_gpu)
  }

  cap.release();
  return 0;
}

Hi anas.abuzaina,
Looks like yours is similar to the post:
https://devtalk.nvidia.com/default/topic/987537/jetson-tx1/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/post/5064902/#5064902

Because cv::VideoCapture must be linked to appsink with ‘video/x-raw,format=BGR’ buffers, you can only get CPU buffers (cv::Mat).
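For reference, the kind of capture pipeline meant here might look like the sketch below (illustrative and untested; element names and caps depend on the L4T release). nvvidconv copies out of NVMM, and videoconvert produces the BGR layout cv::Mat expects:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    // appsink can only hand OpenCV system-memory buffers, so the data must
    // leave NVMM before reaching cv::VideoCapture.
    cv::VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=1280, "
                         "height=720, format=I420, framerate=30/1 ! nvvidconv "
                         "! video/x-raw, format=BGRx ! videoconvert "
                         "! video/x-raw, format=BGR ! appsink");
    if (!cap.isOpened())
        return -1;

    cv::Mat frame;   // a CPU buffer: this is the copy being discussed
    cap.read(frame);
    return 0;
}
```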

Thanks again. Is there any workaround to load the frames into a GpuMat? The OpenCV GPU video reader seems to do what I want ( https://github.com/opencv/opencv/blob/master/samples/gpu/video_reader.cpp ), but it seems that cudacodec does not work on the Jetson TX2.

We don’t have the implementation. Other users may share their experience.

The GPU video reader doesn’t work on Jetson for now.
But you can use the nvivafilter plugin, as mentioned by @DaneLLL.

As a sample, assuming you’re using opencv4tegra installed in /usr, you may try the code below.
[EDIT: For OpenCV 3, you may use the sample files from post #23 below.]
Note that I’ve included everything in one source file and one makefile, and changed @DaneLLL’s blur filter to a Sobel filter.
[Late EDIT: This was not a good idea. Using a Sobel filter on CV_8UC4 is wrong because the alpha channel can get corrupted. I don’t know how the EGL backend deals with it, but using nvvidconv to convert to I420 shows a black image (running the Sobel filter on the split R, G, B channels, and not on alpha, does what is expected). Note also that the filter should be created only once, in the init function.]
Pay attention to the function cv_process(); this is where you will add your processing:

File gst-custom-opencv_cudaprocess.cu

/*
 * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *  * Neither the name of NVIDIA CORPORATION nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include <stdio.h>
#include <stdlib.h>

#include <cuda.h>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

#include "cudaEGL.h"

#if defined(__cplusplus)
extern "C" void Handle_EGLImage (EGLImageKHR image);
extern "C" {
#endif

typedef enum {
  COLOR_FORMAT_Y8 = 0,
  COLOR_FORMAT_U8_V8,
  COLOR_FORMAT_RGBA,
  COLOR_FORMAT_NONE
} ColorFormat;

typedef struct {
  /**
  * cuda-process API
  *
  * @param image   : EGL Image to process
  * @param userPtr : point to user alloc data, should be free by user
  */
  void (*fGPUProcess) (EGLImageKHR image, void ** userPtr);

  /**
  * pre-process API
  *
  * @param sBaseAddr  : Mapped Surfaces(YUV) pointers
  * @param smemsize   : surfaces size array
  * @param swidth     : surfaces width array
  * @param sheight    : surfaces height array
  * @param spitch     : surfaces pitch array
  * @param sformat    : surfaces format array
  * @param nsurfcount : surfaces count
  * @param userPtr    : point to user alloc data, should be free by user
  */
  void (*fPreProcess)(void **sBaseAddr,
                      unsigned int *smemsize,
                      unsigned int *swidth,
                      unsigned int *sheight,
                      unsigned int *spitch,
                      ColorFormat *sformat,
                      unsigned int nsurfcount,
                      void ** userPtr);

  /**
  * post-process API
  *
  * @param sBaseAddr  : Mapped Surfaces(YUV) pointers
  * @param smemsize   : surfaces size array
  * @param swidth     : surfaces width array
  * @param sheight    : surfaces height array
  * @param spitch     : surfaces pitch array
  * @param sformat    : surfaces format array
  * @param nsurfcount : surfaces count
  * @param userPtr    : point to user alloc data, should be free by user
  */
  void (*fPostProcess)(void **sBaseAddr,
                      unsigned int *smemsize,
                      unsigned int *swidth,
                      unsigned int *sheight,
                      unsigned int *spitch,
                      ColorFormat *sformat,
                      unsigned int nsurfcount,
                      void ** userPtr);
} CustomerFunction;

void init (CustomerFunction * pFuncs);

#if defined(__cplusplus)
}
#endif

/**
  * Dummy custom pre-process API implementation.
  * It just accesses the mapped surface userspace pointer &
  * memsets it with a specific pattern, modifying pixel data in-place.
  *
  * @param sBaseAddr  : Mapped Surfaces pointers
  * @param smemsize   : surfaces size array
  * @param swidth     : surfaces width array
  * @param sheight    : surfaces height array
  * @param spitch     : surfaces pitch array
  * @param nsurfcount : surfaces count
  */
static void
pre_process (void **sBaseAddr,
                unsigned int *smemsize,
                unsigned int *swidth,
                unsigned int *sheight,
                unsigned int *spitch,
                ColorFormat  *sformat,
                unsigned int nsurfcount,
                void ** usrptr)
{
  /* add your custom pre-process here */
}

/**
  * Dummy custom post-process API implementation.
  * It just accesses the mapped surface userspace pointer &
  * memsets it with a specific pattern, modifying pixel data in-place.
  *
  * @param sBaseAddr  : Mapped Surfaces pointers
  * @param smemsize   : surfaces size array
  * @param swidth     : surfaces width array
  * @param sheight    : surfaces height array
  * @param spitch     : surfaces pitch array
  * @param nsurfcount : surfaces count
  */
static void
post_process (void **sBaseAddr,
                unsigned int *smemsize,
                unsigned int *swidth,
                unsigned int *sheight,
                unsigned int *spitch,
                ColorFormat  *sformat,
                unsigned int nsurfcount,
                void ** usrptr)
{
  /* add your custom post-process here */
}

static void cv_process(void *pdata, int32_t width, int32_t height)
{
    /* Create a GpuMat with data pointer */
    cv::gpu::GpuMat d_mat(height, width, CV_8UC4, pdata);
    
    /* Apply Sobel filter */
    cv::gpu::Sobel(d_mat, d_mat, CV_8UC4, 1, 1, 1, 1, 1, cv::BORDER_DEFAULT);
}

/**
  * Performs CUDA Operations on egl image.
  *
  * @param image : EGL image
  */
static void
gpu_process (EGLImageKHR image, void ** usrptr)
{
  CUresult status;
  CUeglFrame eglFrame;
  CUgraphicsResource pResource = NULL;

  cudaFree(0);  /* dummy runtime call to create/attach the CUDA context */
  status = cuGraphicsEGLRegisterImage(&pResource, image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
  if (status != CUDA_SUCCESS) {
    printf("cuGraphicsEGLRegisterImage failed : %d \n", status);
    return;
  }

  status = cuGraphicsResourceGetMappedEglFrame( &eglFrame, pResource, 0, 0);
  if (status != CUDA_SUCCESS) {
    printf ("cuGraphicsResourceGetMappedEglFrame failed\n");
  }

  status = cuCtxSynchronize();
  if (status != CUDA_SUCCESS) {
    printf ("cuCtxSynchronize failed \n");
  }

  if (eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH) {
    if (eglFrame.eglColorFormat == CU_EGL_COLOR_FORMAT_RGBA) {
	/* Apply CV gpu processing */
	cv_process(eglFrame.frame.pPitch[0], eglFrame.width, eglFrame.height);
    } else {
	printf ("Invalid eglcolorformat for opencv\n");
    }
  }
  else {
     printf ("Invalid frame type for opencv\n");
  }

  status = cuCtxSynchronize();
  if (status != CUDA_SUCCESS) {
    printf ("cuCtxSynchronize failed after memcpy \n");
  }

  status = cuGraphicsUnregisterResource(pResource);
  if (status != CUDA_SUCCESS) {
    printf("cuGraphicsUnregisterResource failed: %d \n", status);
  }
}

extern "C" void
init (CustomerFunction * pFuncs)
{
  pFuncs->fPreProcess = pre_process;
  pFuncs->fGPUProcess = gpu_process;
  pFuncs->fPostProcess = post_process;
}

Now, the makefile to be saved in the same directory:

###############################################################################
#
# Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
###############################################################################

# Location of the CUDA Toolkit
CUDA_PATH ?= /usr/local/cuda
INCLUDE_DIR = /usr/include
LIB_DIR = /usr/lib/aarch64-linux-gnu
TEGRA_LIB_DIR = /usr/lib/aarch64-linux-gnu/tegra

# This is typical install path of opencv4tegra
OPENCV_DIR = /usr

# For hardfp
#LIB_DIR = /usr/lib/arm-linux-gnueabihf
#TEGRA_LIB_DIR = /usr/lib/arm-linux-gnueabihf/tegra

OSUPPER = $(shell uname -s 2>/dev/null | tr "[:lower:]" "[:upper:]")
OSLOWER = $(shell uname -s 2>/dev/null | tr "[:upper:]" "[:lower:]")

OS_SIZE = $(shell uname -m | sed -e "s/i.86/32/" -e "s/x86_64/64/" -e "s/armv7l/32/")
OS_ARCH = $(shell uname -m | sed -e "s/i386/i686/")

GCC ?= g++
NVCC := $(CUDA_PATH)/bin/nvcc -ccbin $(GCC)

# internal flags
NVCCFLAGS   := --shared
CCFLAGS     := -fPIC
CVCCFLAGS:=-I$(OPENCV_DIR)/include
CVLDFLAGS:=-L$(OPENCV_DIR)/lib -lopencv_core -lopencv_gpu

LDFLAGS     :=

# Extra user flags
EXTRA_NVCCFLAGS   ?=
EXTRA_LDFLAGS     ?=
EXTRA_CCFLAGS     ?=

override abi := aarch64
LDFLAGS += --dynamic-linker=/lib/ld-linux-aarch64.so.1

# For hardfp
#override abi := gnueabihf
#LDFLAGS += --dynamic-linker=/lib/ld-linux-armhf.so.3
#CCFLAGS += -mfloat-abi=hard

ifeq ($(ARMv7),1)
NVCCFLAGS += -target-cpu-arch ARM
ifneq ($(TARGET_FS),)
CCFLAGS += --sysroot=$(TARGET_FS)
LDFLAGS += --sysroot=$(TARGET_FS)
LDFLAGS += -rpath-link=$(TARGET_FS)/lib
LDFLAGS += -rpath-link=$(TARGET_FS)/usr/lib
LDFLAGS += -rpath-link=$(TARGET_FS)/usr/lib/$(abi)-linux-gnu

# For hardfp
#LDFLAGS += -rpath-link=$(TARGET_FS)/usr/lib/arm-linux-$(abi)

endif
endif

# Debug build flags
dbg = 0
ifeq ($(dbg),1)
      NVCCFLAGS += -g -G
      TARGET := debug
else
      TARGET := release
endif

ALL_CCFLAGS :=
ALL_CCFLAGS += $(NVCCFLAGS)
ALL_CCFLAGS += $(EXTRA_NVCCFLAGS)
ALL_CCFLAGS += $(addprefix -Xcompiler ,$(CCFLAGS))
ALL_CCFLAGS += $(addprefix -Xcompiler ,$(EXTRA_CCFLAGS))

ALL_LDFLAGS :=
ALL_LDFLAGS += $(ALL_CCFLAGS)
ALL_LDFLAGS += $(addprefix -Xlinker ,$(LDFLAGS))
ALL_LDFLAGS += $(addprefix -Xlinker ,$(EXTRA_LDFLAGS))

# Common includes and paths for CUDA
INCLUDES  := -I./
LIBRARIES := -L$(LIB_DIR) -lEGL -lGLESv2
LIBRARIES += -L$(TEGRA_LIB_DIR) -lcuda -lrt

################################################################################

# CUDA code generation flags
ifneq ($(OS_ARCH),armv7l)
GENCODE_SM10    := -gencode arch=compute_10,code=sm_10
endif
GENCODE_SM20    := -gencode arch=compute_20,code=sm_20
GENCODE_SM30    := -gencode arch=compute_30,code=sm_30
GENCODE_SM32    := -gencode arch=compute_32,code=sm_32
GENCODE_SM35    := -gencode arch=compute_35,code=sm_35
GENCODE_SM50    := -gencode arch=compute_50,code=sm_50
GENCODE_SMXX    := -gencode arch=compute_50,code=compute_50
GENCODE_SM53    := -gencode arch=compute_53,code=compute_53  # for TX1
GENCODE_SM62    := -gencode arch=compute_62,code=compute_62  # for TX2

ifeq ($(OS_ARCH),armv7l)
GENCODE_FLAGS   ?= $(GENCODE_SM32)
else
# This only support TX1(5.3) or TX2(6.2) -like architectures
GENCODE_FLAGS   ?= $(GENCODE_SM53) $(GENCODE_SM62)
endif

# Target rules
all: build

build: lib-gst-custom-opencv_cudaprocess.so

gst-custom-opencv_cudaprocess.o : gst-custom-opencv_cudaprocess.cu
	$(NVCC) $(INCLUDES) $(ALL_CCFLAGS) $(CVCCFLAGS) $(GENCODE_FLAGS) -o $@ -c $<

lib-gst-custom-opencv_cudaprocess.so : gst-custom-opencv_cudaprocess.o
	$(NVCC) $(ALL_LDFLAGS) $(CVLDFLAGS) $(GENCODE_FLAGS) -o $@ $^ $(LIBRARIES)

clean:
	rm lib-gst-custom-opencv_cudaprocess.so gst-custom-opencv_cudaprocess.o

clobber: clean

Then cd into that directory and just type:

make

to build. It should produce a file lib-gst-custom-opencv_cudaprocess.so there. This dynamic lib can be loaded by the gst plugin nvivafilter.
To do that, from the same directory (or giving the full path to your lib-gst-custom-opencv_cudaprocess.so), assuming you are using the TX1/TX2 onboard camera, just run it with gstreamer:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=I420, framerate=120/1' ! nvivafilter customer-lib-name=./lib-gst-custom-opencv_cudaprocess.so cuda-process=true ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvegltransform ! nveglglessink

You should see your onboard camera with the Sobel filter applied.
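Regarding the late edit above, a channel-split variant of cv_process() that leaves alpha untouched, with the filter created once rather than per frame, might look like this (an untested sketch using OpenCV 3-style cv::cuda names, so it would need the cudaarithm/cudafilters modules instead of the opencv4tegra gpu module; `cv_process_rgb_only` is a name made up for this example):

```cpp
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/cudafilters.hpp>

// Created once (e.g. from init()), not on every frame
static cv::Ptr<cv::cuda::Filter> sobel =
    cv::cuda::createSobelFilter(CV_8UC1, CV_8UC1, 1, 1);

static void cv_process_rgb_only(void *pdata, int32_t width, int32_t height)
{
    // Wrap the mapped NVMM buffer without copying
    cv::cuda::GpuMat frame(height, width, CV_8UC4, pdata);

    cv::cuda::GpuMat ch[4];
    cv::cuda::split(frame, ch);      // R, G, B, A planes
    for (int i = 0; i < 3; ++i)      // filter R, G, B only; leave alpha alone
        sobel->apply(ch[i], ch[i]);
    cv::cuda::merge(ch, 4, frame);   // write the result back in place
}
```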

Hi Honey_Patouceul,

There could be an issue in the Makefile:

ubuntu@tegra-ubuntu:~/work/FromNvidia/gpuMat$ make
/usr/local/cuda/bin/nvcc -ccbin g++ -I./  --shared  -Xcompiler -fPIC  -I/usr     /include  -gencode arch=compute_62,code=compute_62      -o gst-custom-opencv_cudaprocess.o -c gst-custom-opencv_cudaprocess.cu
nvcc fatal   : A single input file is required for a non-link phase when an outputfile is specified
Makefile:140: recipe for target 'gst-custom-opencv_cudaprocess.o' failed
make: *** [gst-custom-opencv_cudaprocess.o] Error 1

I edited the command line given by the Makefile to say -I/usr/include, and then the command seems to work:

ubuntu@tegra-ubuntu:~/work/FromNvidia/gpuMat$ /usr/local/cuda/bin/nvcc -ccbin g++ -I./  --shared  -Xcompiler -fPIC  -I/usr/include  -gencode arch=compute_62,code=compute_62      -o gst-custom-opencv_cudaprocess.o -c gst-custom-opencv_cudaprocess.cu

However, I get errors, which I guess are related to an old OpenCV version:

ubuntu@tegra-ubuntu:~/work/FromNvidia/gpuMat$ /usr/local/cuda/bin/nvcc -ccbin g++ -I./  --shared  -Xcompiler -fPIC  -I/usr/include  -gencode arch=compute_62,code=compute_62      -o gst-custom-opencv_cudaprocess.o -c gst-custom-opencv_cudaprocess.cu
/usr/include/opencv2/gpu/gpu.hpp(438): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(444): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1271): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1272): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1273): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1291): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1293): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1294): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1295): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1297): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1301): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1306): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1307): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1307): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1309): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1311): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1841): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1842): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1844): error: vector is not a template
/usr/include/opencv2/gpu/gpu.hpp(1845): error: vector is not a template