I tried to enhance some videos using the VideoEffects and UpscalePipeline apps. Although they ran very quickly (1x speed or better), the output quality was low; in fact, it was usually worse than the input. I tried various combinations, such as SuperRes alone, SuperRes + ArtifactReduction, and SuperRes in modes 0 and 1, and tested them on both low-quality and high-quality inputs.
Is there a way to optimize for quality at the expense of speed? For example, would it produce better results if I split the video into an image per frame and then ran SuperRes on each frame individually?
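For reference, these are the kinds of invocations I was using (flag names from memory of the sample apps, so treat them as approximate):

```
./VideoEffectsApp --model_dir=/usr/local/VideoFX/lib/models --effect=SuperRes \
    --mode=1 --resolution=1080 --in_file=input.mp4 --out_file=out_sr.mp4
./UpscalePipelineApp --model_dir=/usr/local/VideoFX/lib/models \
    --resolution=1080 --in_file=input.mp4 --out_file=out_up.mp4
```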
Hi Chris, what kind of content are you feeding into the SDK? These are general-purpose models, so they should work well on both computer graphics and video content. We have seen that in some cases, if the video is very noisy, it helps a lot to run the denoiser prior to running Super Resolution; the denoiser is part of the Video Effects SDK as well.
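Chaining a cleanup pass in front of SuperRes looks roughly like the sketch below. This is a minimal illustration based on the SDK's C API as I recall it from nvVideoEffects.h / nvCVImage.h, so double-check the names against your SDK version; ArtifactReduction is used as the first stage (the denoiser follows the same pattern with its own effect selector), the model path is an assumption, and error checking and video I/O are omitted:

```cpp
// Sketch: ArtifactReduction -> SuperRes, chained entirely on the GPU.
// Names are as I recall them from the SDK headers; verify against your version.
#include "nvVideoEffects.h"
#include "nvCVImage.h"

int main() {
  const unsigned srcW = 960, srcH = 540, scale = 2;        // example sizes
  const char* modelDir = "/usr/local/VideoFX/lib/models";  // assumed install path

  // Both effects expect planar float32 BGR buffers resident on the GPU.
  NvCVImage srcGpu, tmpGpu, dstGpu;
  NvCVImage_Alloc(&srcGpu, srcW, srcH, NVCV_BGR, NVCV_F32, NVCV_PLANAR, NVCV_GPU, 1);
  NvCVImage_Alloc(&tmpGpu, srcW, srcH, NVCV_BGR, NVCV_F32, NVCV_PLANAR, NVCV_GPU, 1);
  NvCVImage_Alloc(&dstGpu, srcW * scale, srcH * scale,     // SR output is larger
                  NVCV_BGR, NVCV_F32, NVCV_PLANAR, NVCV_GPU, 1);

  // Stage 1: clean up the source at its native resolution.
  NvVFX_Handle ar = nullptr;
  NvVFX_CreateEffect(NVVFX_FX_ARTIFACT_REDUCTION, &ar);
  NvVFX_SetString(ar, NVVFX_MODEL_DIRECTORY, modelDir);
  NvVFX_SetU32(ar, NVVFX_MODE, 0);                 // modes trade strength vs. detail
  NvVFX_SetImage(ar, NVVFX_INPUT_IMAGE, &srcGpu);
  NvVFX_SetImage(ar, NVVFX_OUTPUT_IMAGE, &tmpGpu);
  NvVFX_Load(ar);

  // Stage 2: SuperRes consumes stage 1's output directly on the GPU.
  NvVFX_Handle sr = nullptr;
  NvVFX_CreateEffect(NVVFX_FX_SUPER_RES, &sr);
  NvVFX_SetString(sr, NVVFX_MODEL_DIRECTORY, modelDir);
  NvVFX_SetU32(sr, NVVFX_MODE, 1);
  NvVFX_SetImage(sr, NVVFX_INPUT_IMAGE, &tmpGpu);
  NvVFX_SetImage(sr, NVVFX_OUTPUT_IMAGE, &dstGpu);
  NvVFX_Load(sr);

  // Per frame: upload with NvCVImage_Transfer (scale 1/255 for u8 -> f32),
  // run both stages, then transfer dstGpu back (scale 255 for f32 -> u8):
  //   NvCVImage_Transfer(&srcCpu, &srcGpu, 1.0f / 255.0f, nullptr, nullptr);
  //   NvVFX_Run(ar, 0);
  //   NvVFX_Run(sr, 0);
  //   NvCVImage_Transfer(&dstGpu, &dstCpu, 255.0f, nullptr, nullptr);

  NvVFX_DestroyEffect(ar);
  NvVFX_DestroyEffect(sr);
  NvCVImage_Dealloc(&srcGpu);
  NvCVImage_Dealloc(&tmpGpu);
  NvCVImage_Dealloc(&dstGpu);
  return 0;
}
```

The important details are that both stages run on GPU buffers with no CPU round trip between them, and that the SuperRes output image must be allocated at the upscaled resolution up front.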
You should certainly see a difference in clarity and detail on whatever video you process. Splitting the video into individual frames should not make a difference: as long as the input resolution is supported, the SDK reads each frame at its native resolution and outputs it at the specified scale.
Are you re-encoding the video after processing? If so, that could possibly be the culprit, depending on your encode settings.
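If you want to rule the encoder out, try a near-lossless re-encode and compare it against your current output. For example, with ffmpeg (file names are placeholders; this is generic encoder advice, not specific to the SDK):

```
ffmpeg -i enhanced_raw.avi -c:v libx264 -crf 16 -preset slow enhanced.mp4
```

With x264, a CRF around 16 at a slow preset is close to visually lossless; if the quality problem persists at those settings, the encode step is not the cause.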