This is particularly about 4:2:0 H.264 encoding. Take 1080i50, for instance (just so I have concrete values for the ideas below).
I understand everyone wants to get rid of interlaced video - I do, too. But that’s just not an option for some situations.
I don’t want to do twice the work I should have to just because the video is interlaced.
Some simple solutions and their apparent drawbacks:
- Treating 1080i50 as 1080p25 progressive video: 4:2:0 “color bleed” and cross-field/temporal sampling errors during subject motion and pans (4:4:4 would reduce this too, but at increased cost).
- Deinterlacing before encoding to produce 1080p50: addresses the 4:2:0 “color bleed” and cross-field sampling, but very expensive (the deinterlacing cost plus 2x the GPU cycles to encode).
- Modifying the field layout (not interleaved: one field stacked on top, the other on the bottom) and treating the result as 1080p25 progressive video: each field should encode more cleanly, but then traditional playback tools can’t use the video without re-encoding (true for other variations of this approach). (SDK 12 video slices in the AVC encoder would work well with this approach.)
- Two video streams, one 1080x540p25 per field: better, but standard playback tools would show half-rate progressive output, since they would play back only one of the streams.
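To make the cross-field chroma problem in the first option concrete, here is a minimal numeric sketch (toy values, not real video data): naive progressive 4:2:0 vertical subsampling averages adjacent chroma rows, and in an interleaved 1080i50 frame adjacent rows come from fields captured 20 ms apart.

```python
def subsample_420_vertical(chroma_rows):
    """Naive progressive 4:2:0 vertical subsampling: average each pair of rows."""
    return [
        [(a + b) / 2 for a, b in zip(chroma_rows[i], chroma_rows[i + 1])]
        for i in range(0, len(chroma_rows), 2)
    ]

# Hypothetical moving color edge: the top field (even rows) saw the edge at
# one position; the bottom field (odd rows), 20 ms later, saw it shifted.
top_field_row = [100, 100, 200, 200]     # edge between samples 1 and 2
bottom_field_row = [100, 200, 200, 200]  # edge has moved one sample left
interleaved = [top_field_row, bottom_field_row] * 2

mixed = subsample_420_vertical(interleaved)
# Sample 1 of each subsampled row becomes (100 + 200) / 2 = 150: a chroma
# value that existed in neither field, i.e. the "color bleed" on motion.
```

Deinterlacing first (option two) or keeping the fields apart (options three and four) avoids exactly this blend, since each subsampled pair then comes from a single instant in time.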
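The field repack in the third option is cheap to do on the CPU or in a trivial kernel. A minimal sketch, modeling a plane as a list of rows (real code would repack the luma plane and each chroma plane of the 4:2:0 frame the same way; function names are mine, not from any SDK):

```python
def interlaced_to_stacked(frame):
    """Repack an interleaved frame: top field (rows 0, 2, 4, ...) in the
    upper half, bottom field (rows 1, 3, 5, ...) in the lower half."""
    return frame[0::2] + frame[1::2]

def stacked_to_interlaced(frame):
    """Inverse repack for display: re-interleave the two half-height fields."""
    half = len(frame) // 2
    out = []
    for top_row, bottom_row in zip(frame[:half], frame[half:]):
        out.append(top_row)
        out.append(bottom_row)
    return out
```

The repack is lossless and self-inverse in cost, so the penalty is only the extra memory pass before encode and after decode; the drawback remains that the resulting bitstream only looks right to players that know to undo it.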
It’s pretty frustrating that NVENC discontinued interlaced encoding while NVDEC still offers cudaVideoDeinterlaceMode_Adaptive.
Am I missing something obvious? Any other clever ideas?