Thanks to the latest SDK release (7.0) I can use my GTX 1060 to encode in 4:4:4. But I'm frustrated because I can't decode it!
My CPU isn't fast enough to play back the resulting streams at their required frame rate.
Is there a hardware limitation, meaning no future SDK release will ever enable such decoding?
Or is it simply not on the roadmap because the market opportunity is too small?
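For context, here is a minimal sketch of the capability query I have in mind, assuming the Video Codec SDK 8.0+ headers (where cuvidGetDecoderCaps first appeared) and using GPU 0 purely as an example. On my GTX 1060 today I'd expect it to print "NOT supported" for HEVC 4:4:4; my question is whether any future driver/SDK combination can ever flip that answer:

[code]
// Sketch: ask the driver whether NVDEC on this GPU can decode a given
// codec / chroma-format / bit-depth combination.
// Build (assumption): g++ probe.cpp -lcuda -lnvcuvid
#include <cstdio>
#include <cstring>
#include <cuda.h>
#include <nvcuvid.h>

int main() {
    // Initialize the CUDA driver API and create a context on GPU 0.
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Query NVDEC capabilities for HEVC 4:4:4 at 8-bit depth.
    CUVIDDECODECAPS caps;
    memset(&caps, 0, sizeof(caps));
    caps.eCodecType      = cudaVideoCodec_HEVC;
    caps.eChromaFormat   = cudaVideoChromaFormat_444;
    caps.nBitDepthMinus8 = 0;  // 8-bit

    if (cuvidGetDecoderCaps(&caps) == CUDA_SUCCESS) {
        printf("HEVC 4:4:4 8-bit NVDEC decode: %s\n",
               caps.bIsSupported ? "supported" : "NOT supported");
        if (caps.bIsSupported)
            printf("  max size: %ux%u\n", caps.nMaxWidth, caps.nMaxHeight);
    }

    cuCtxDestroy(ctx);
    return 0;
}
[/code]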
I'm not a professional, but I'm very interested in HW decoding of streams received directly from satellites, especially 4:2:2. My 3-core AMD Athlon II X3 460 3.4 GHz (Rana core) is enough for software decoding of H.264 1920x1080i50 4:2:2 streams at 45-70% load per core, but for 20180428-175510_4K ENC 3 RMAD VS LEG.ts (563 MB, shared on Yandex.Disk) my CPU isn't enough: it decodes at only ~1 fps, and only an expensive Core i7 or Ryzen can play that file fluently.
So according to [url]https://developer.nvidia.com/nvidia-video-codec-sdk#NVDECFeatures[/url], only the newest Turing GPUs (RTX 2080 (Ti) or RTX 2070) can HW-decode HEVC YUV444 10/12-bit HDR at 30 fps, 8K H.264, and VP9 10/12-bit HDR (also mentioned in the middle of [url]https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/[/url]), and only once the page's footnote comes true: "* The Video Codec SDK, which exposes new decoder improvements and features of Turing, will be released soon."
Historically that's logical: VLD decoding of HEVC appeared only in second-generation Maxwell, while encoding support came much earlier.
I hope the other footnote, "** 4:2:2 is not natively supported on HW", does not hold for Turing either, because I'm sure 4:4:4 is harder than 4:2:2, so a chip that can decode 4:4:4 ought to handle 4:2:2 as well. The sketch below shows how one could verify that directly once the Turing SDK ships.
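A hypothetical variant of the earlier probe (same assumptions: SDK 8.0+ headers, GPU 0 as an example) that walks the chroma formats and bit depths from the feature matrix, so the 4:2:0 / 4:2:2 / 4:4:4 rows can be reproduced locally instead of read off the web page:

[code]
// Sketch: print a small HEVC decode-support matrix for this GPU.
#include <cstdio>
#include <cstring>
#include <cuda.h>
#include <nvcuvid.h>

int main() {
    cuInit(0);
    CUdevice dev;
    cuDeviceGet(&dev, 0);
    CUcontext ctx;
    cuCtxCreate(&ctx, 0, dev);

    // Chroma formats as listed in the NVDEC feature matrix.
    const struct { cudaVideoChromaFormat fmt; const char *name; } formats[] = {
        { cudaVideoChromaFormat_420, "4:2:0" },
        { cudaVideoChromaFormat_422, "4:2:2" },
        { cudaVideoChromaFormat_444, "4:4:4" },
    };
    const int bitDepths[] = { 8, 10, 12 };

    for (const auto &f : formats) {
        for (int bd : bitDepths) {
            CUVIDDECODECAPS caps;
            memset(&caps, 0, sizeof(caps));
            caps.eCodecType      = cudaVideoCodec_HEVC;
            caps.eChromaFormat   = f.fmt;
            caps.nBitDepthMinus8 = bd - 8;
            if (cuvidGetDecoderCaps(&caps) == CUDA_SUCCESS)
                printf("HEVC %s %2d-bit: %s\n", f.name, bd,
                       caps.bIsSupported ? "yes" : "no");
        }
    }

    cuCtxDestroy(ctx);
    return 0;
}
[/code]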