What is the best way to decode an MJPEG stream using the GPU to feed an OpenCV application?

Hi everyone,
I have written an OpenCV application in Qt (MSVC 2017) to decode raw frames from an MJPEG stream for further processing. Now I am trying to use the GPU for decoding the stream. What is the best and easiest way to do so?

Hi,

You can use the NVIDIA Video Codec SDK. Please download the package from NVIDIA VIDEO CODEC SDK | NVIDIA Developer. We have samples that illustrate JPEG decoding, which you can refer to.
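
If it helps, the decode path in those samples is built around the nvcuvid parser/decoder API. A skeletal outline for JPEG might look like the following (a sketch only: the actual samples create the decoder and map frames inside the callbacks, which are stubbed here, and `getJpegFromCamera()` is a hypothetical stand-in for your stream reader):

```cpp
#include <cuda.h>
#include <nvcuvid.h>   // interface header shipped with the Video Codec SDK

// Stub callbacks: the SDK samples create the decoder in the sequence
// callback (cuvidCreateDecoder), submit work in the decode callback
// (cuvidDecodePicture), and map frames in the display callback
// (cuvidMapVideoFrame).
static int CUDAAPI OnSequence(void*, CUVIDEOFORMAT*)      { return 1; }
static int CUDAAPI OnDecode(void*, CUVIDPICPARAMS*)       { return 1; }
static int CUDAAPI OnDisplay(void*, CUVIDPARSERDISPINFO*) { return 1; }

int main()
{
    cuInit(0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, 0);   // context on the decoding device

    CUVIDPARSERPARAMS parserParams = {};
    parserParams.CodecType              = cudaVideoCodec_JPEG; // MJPEG = a stream of JPEGs
    parserParams.ulMaxNumDecodeSurfaces = 1;
    parserParams.pfnSequenceCallback    = OnSequence;
    parserParams.pfnDecodePicture       = OnDecode;
    parserParams.pfnDisplayPicture      = OnDisplay;

    CUvideoparser parser = nullptr;
    cuvidCreateVideoParser(&parser, &parserParams);

    // Feed each complete JPEG from the camera to the parser, e.g.:
    // std::vector<unsigned char> jpeg = getJpegFromCamera(); // hypothetical
    // CUVIDSOURCEDATAPACKET packet = {};
    // packet.payload      = jpeg.data();
    // packet.payload_size = jpeg.size();
    // cuvidParseVideoData(parser, &packet);

    cuvidDestroyVideoParser(parser);
    cuCtxDestroy(ctx);
    return 0;
}
```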

Let us know if you need more help.

Thanks,
Ryan Park

Hi,

Thank you. I used the mentioned SDK and recompiled OpenCV with NVCUVID support; now I can use the OpenCV GPU video decoder :).

I am trying to decode an MJPEG frame from a camera.

@pouya.ahmadvand - can you please elaborate on recompiling OpenCV with NVCUVID?

What are the switches/options you used? My understanding was that NVCUVID was deprecated.

Also, for the CUDA GPU decoder you are referring to, are you using cudacodec::VideoReader?

Finally, are you reading video files or reading from a camera?

Thanks a lot for any help.

Yes, you must download the NVIDIA Video Codec SDK separately and recompile OpenCV (4.0.1) with NVCUVID enabled. You can follow this link:

https://github.com/opencv/opencv_contrib/pull/1946
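
For reference, the configure step looks something like this (a sketch with illustrative paths; CMake also needs to be able to locate the Video Codec SDK's nvcuvid header and library for the detection to succeed):

```
cmake -G "Visual Studio 15 2017 Win64" ^
      -D WITH_CUDA=ON ^
      -D WITH_NVCUVID=ON ^
      -D BUILD_opencv_cudacodec=ON ^
      -D OPENCV_EXTRA_MODULES_PATH=C:/src/opencv_contrib/modules ^
      C:/src/opencv
```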

Yes, I'm using cudacodec::VideoReader to decode a camera stream.
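
In outline, the read loop looks something like this (a minimal sketch rather than my exact code; the URL is a placeholder for the camera's MJPEG endpoint):

```cpp
#include <opencv2/cudacodec.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/highgui.hpp>
#include <string>

int main()
{
    // Placeholder: substitute your camera's MJPEG endpoint or a video file.
    const std::string src = "http://camera.local/stream.mjpg";

    cv::Ptr<cv::cudacodec::VideoReader> reader =
        cv::cudacodec::createVideoReader(src);

    cv::cuda::GpuMat gpuFrame;
    cv::Mat frame;
    while (reader->nextFrame(gpuFrame))   // decoded on the GPU via NVDEC
    {
        gpuFrame.download(frame);         // copy back only if CPU-side work needs it
        cv::imshow("decoded", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
    }
    return 0;
}
```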

Thanks!

Did you implement your own cudacodec::RawVideoSource, or how are you reading from the camera?

Is it a USB camera, an RTSP stream, or some other camera source?
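
For context, here is the sort of skeleton I imagine a custom source would need (a hedged sketch: the virtual signatures and FormatInfo fields vary between OpenCV versions, so check your opencv2/cudacodec.hpp, and MjpegCameraSource / readJpegFromCamera are hypothetical names):

```cpp
#include <opencv2/cudacodec.hpp>
#include <vector>

// Hypothetical source that hands complete JPEGs from a camera to the
// GPU decoder.
class MjpegCameraSource : public cv::cudacodec::RawVideoSource
{
public:
    bool getNextPacket(unsigned char** data, size_t* size) override
    {
        buf_ = readJpegFromCamera();   // one full JPEG per call
        if (buf_.empty()) return false;
        *data = buf_.data();
        *size = buf_.size();
        return true;
    }

    cv::cudacodec::FormatInfo format() const override
    {
        cv::cudacodec::FormatInfo fmt;
        fmt.codec        = cv::cudacodec::JPEG;    // MJPEG = JPEG per frame
        fmt.chromaFormat = cv::cudacodec::YUV420;
        fmt.width  = 1280;                         // your camera's resolution
        fmt.height = 720;
        return fmt;
    }

private:
    std::vector<unsigned char> readJpegFromCamera()
    {
        // Hypothetical: pull one complete JPEG from your camera here.
        return {};
    }

    std::vector<unsigned char> buf_;
};

// Usage: cv::cudacodec::createVideoReader(cv::makePtr<MjpegCameraSource>());
```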