I am very new to the video-processing domain. I have been given a task to compute the average qscale of a picture. This is already implemented in ffmpeg, and I am wondering how I can implement it on the GPU using the CUDA Video Decoder (CUVID).
In ffmpeg, the qscale average is computed from the qscale value of each macroblock of the picture. I am wondering at what point macroblocks are decoded in the CUVID library. A few other questions on the same topic are listed below.
Does cuvidDecodePicture split the picture into macroblocks/slices in order to decode it?
How is the entropy coding used in H.264 handled in CUVID?
Why does CUVIDPICPARAMS contain two separate structures for MPEG-4 and H.264?
How does the video-decoding process in ffmpeg compare with that of CUVID?
Could someone please help me understand the concept behind qscale (the quantization parameter)?
Thanks in advance.