Has anyone figured out a clever solution for encoding interlaced video with nvEncode?

Particularly for 4:2:0 H.264 encoding - 1080i50, for instance (just so I have concrete values for the ideas below).

I understand everyone wants to get rid of interlaced video - I do, too. But that’s just not an option for some situations.

I don’t want to do double the work I should be doing just because it’s interlaced video.

Some simple solutions and their apparent drawbacks:

  1. Treating 1080i50 as 1080p25 progressive video - 4:2:0 “color bleed” and “cross-field/temporal” sampling errors appear during subject motion and pans (4:4:4 would reduce the color bleed as well, but at increased cost)

  2. Deinterlacing before encoding to produce 1080p50 - addresses the 4:2:0 “color bleed” and “cross-field” sampling problems, but very expensive (deinterlacing cost plus 2x the GPU cycles to encode).

  3. Modifying the layout of the fields (not interleaved - one field on top, the other on the bottom) and treating the result as 1080p25 progressive video - encoding of each field should be cleaner, but then traditional playback tools can’t use the video without re-encoding (true of the other variations of this approach as well). (The SDK 12 video slices in the AVC encoder would work well with this approach.)

  4. Two video streams - one 1920x540p25 stream per field - better, but standard playback tools would show a 1/2-rate progressive output, since they would play back only one of the streams.
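To make option 3 concrete, here is a minimal sketch of the field-separation step for a single luma plane: even source rows go to the top half of the output, odd rows to the bottom half. The function name and luma-only treatment are my own illustration, not anything from the NVENC API; chroma planes would need the same treatment at their subsampled height.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper (not part of any SDK): rearrange an interlaced
 * luma plane so the top field occupies the upper half of the frame and
 * the bottom field the lower half, ready to encode as one progressive
 * picture. */
static void split_fields(const uint8_t *src, uint8_t *dst,
                         int width, int height)
{
    int half = height / 2;
    for (int y = 0; y < height; y++) {
        /* even source rows -> top field, odd source rows -> bottom field */
        int dy = (y % 2 == 0) ? (y / 2) : (half + y / 2);
        memcpy(dst + (size_t)dy * width, src + (size_t)y * width, width);
    }
}
```

Playback tools would of course see this as a single squeezed progressive frame; a custom player (or a re-encode) is needed to reinterleave the fields, which is the drawback noted above.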

It’s pretty frustrating that nvEncode discontinued interlaced encoding while nvDecode still offers cudaVideoDeinterlaceMode_Adaptive.

Am I missing something obvious? Any other clever ideas?

If you deinterlace to produce 1080p50, you can always drop all even (or all odd) frames to get back to 1080p25, and then you only spend “1x GPU cycles” (whatever you mean by that) encoding it.

Seriously, the only time you want to encode interlaced content is for displaying it on old TV sets and old hardware video players that do not support progressive video. If you are doing it for any other reason, you are doing it wrong.

Good ideas, and I agree with your second point in most situations. Unfortunately there are still major broadcast networks in the US (e.g., NBC, CBS, TNT) and lots of regional and international broadcasters that produce live content (which is our market) in 1080i. We are also required to reproduce each individual moment in time for 1080i50 or 1080i59.94 for particular applications, which argues against deinterlacing the way you propose - but it is a good idea for lots of other applications, I would think.

BUT I truly hate interlaced video and would be delighted to only handle progressive video. Maybe some day! It’s just all these pesky clients that keep paying us …

Thanks for taking the time to reply!

Oh, and what I meant by “2x the GPU encode cycles”: the encoding chip can do ~800 fps (for a particular profile), so if I go from i50 to p50 I am handling twice the number of frames, and can therefore handle only half as many feeds on each GPU.
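The feed-count arithmetic above can be sketched directly. The ~800 fps figure is the one quoted in this thread for a particular profile; the function is just illustrative division, not any real capacity model.

```c
/* Back-of-envelope feed count: encoder capacity (fps) divided by the
 * per-feed frame rate. Ignores profile, resolution, and scheduling
 * overhead - purely illustrative. */
static int max_feeds(int capacity_fps, int per_feed_fps)
{
    return capacity_fps / per_feed_fps;
}
```

At ~800 fps capacity, encoding i50-as-p25 (25 fps per feed) allows roughly 32 feeds per GPU, while deinterlaced p50 (50 fps per feed) allows roughly 16 - hence the “half as many feeds” point.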

I see.

As someone who was supporting Windows 7 clients until recently you have my sympathy.

I too hate interlaced video, and I wish YouTube would reject the upload when it detects interlacing (by actually analyzing frames, not just media flags), with a nice little page telling people to upgrade their process.

YouTube effectively rejects interlaced video already: instead of producing 50 fps output for 25i content, it produces 25 fps, which loses almost half of the motion.

That’s why I said it’s best to convert to progressive on your own so you are in control of the conversion quality.

Well, I'm not sure about that. Nearly all TV stations in the USA today broadcast 1080i (interlaced) only; Fox stations do 720p.

Not really. Master streams are all progressive 60p now.

Not in live sports broadcasting … stop by a TV truck this weekend. Fox and ESPN are always progressive, but CBS, NBC, Turner, and lots of regionals frequently shoot and produce in 1080i. Folks are moving away from it, but it will be years before everyone is at 1080p or better.