I’m using TensorRT to run deep learning inference.
Eventually, I want to do real-time (over 30 FPS) object recognition on camera images.
I first tried object recognition on a video file, but when decoding on the CPU with the OpenCV function cv::VideoCapture, the decode processing became the bottleneck.
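For reference, my CPU decode path looks roughly like this (a minimal sketch; "input.mp4" is just a placeholder for the test file):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("input.mp4");   // CPU (software) decode
    cv::Mat frame;
    while (cap.read(frame))
    {
        // each decoded frame is handed to the TensorRT inference step here
    }
    return 0;
}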
Since I thought this could be improved by doing the decoding on the GPU, I tried GPU decoding with cv::cudacodec::VideoReader.
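The attempt looks roughly like the sketch below (again a minimal example; the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <opencv2/cudacodec.hpp>

int main()
{
    cv::Ptr<cv::cudacodec::VideoReader> reader =
        cv::cudacodec::createVideoReader(std::string("input.mp4"));
    cv::cuda::GpuMat frame;
    while (reader->nextFrame(frame))
    {
        // the frame stays on the GPU and would be passed to inference from here
    }
    return 0;
}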
However, when I run this, OpenCV reports the following error:
OpenCV Error: The function/feature is not implemented (The called functionality is disabled for current build or platform) in throw_no_cuda, file /home/nvidia/src/opencv-3.4.0/modules/core/include/opencv2/core/private.cuda.hpp, line 111
terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/src/opencv-3.4.0/modules/core/include/opencv2/core/private.cuda.hpp:111: error: (-213) The called functionality is disabled for current build or platform in function throw_no_cuda
Is there any solution?
Is there another GPU decoding method?
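For example, would opening the file through a GStreamer pipeline with cv::VideoCapture be a workable way to use the Jetson's hardware decoder? The sketch below is only my guess (it assumes OpenCV was built with GStreamer support, and the pipeline string is an untested assumption):

#include <opencv2/opencv.hpp>
#include <string>

int main()
{
    // Assumed pipeline: demux an H.264 MP4, decode it with the Jetson hardware
    // decoder (omxh264dec), then convert to BGR so OpenCV can consume it.
    std::string pipeline =
        "filesrc location=input.mp4 ! qtdemux ! h264parse ! "
        "omxh264dec ! nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    cv::Mat frame;
    while (cap.read(frame))
    {
        // decoding happens in hardware; the BGR frame arrives here for inference
    }
    return 0;
}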
My environment is as follows:
Jetson TX2
CUDA 9.0
opencv-3.4.0