cudaencode SDK example - NVEncodeFrame: real-time encoding & streaming


I have been playing around with the cudaencode example included in the 3.2 SDK. It takes a YUV (uncompressed video) file as input and writes out a raw H.264 stream (it can also output VC-1). The work is performed by the "NVEncodeFrame" function, which is part of the Encoder API. The program takes a FILE object (from stdio.h) as the destination to which the result of the encoding (the compressed H.264 video) is written. Now I want to stream the video over the network as it is being encoded (say, using a simple TCP/UDP client).

My specific questions are:

  1. Is there a way to tell NVEncodeFrame to flush the compressed video regularly (say, every few encoded GOPs)?

  2. Is there some way to have NVEncodeFrame write to a network socket stream instead of a FILE object?

Thanks in advance!


OK… I've been doing some thinking about how to work with what I see here. One way around the problem is to divide the uncompressed input (YUV) frames into batches of, say, a few GOPs each, and then run the encoder repeatedly on each batch. It should be possible to combine the resulting set of H.264 encoded files in a container format, or to stream the files one after the other.
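The batching step above can be sketched as a simple splitter. This is a minimal sketch, assuming raw planar YUV 4:2:0 input (one frame = width * height * 3 / 2 bytes) and batches measured in frames rather than GOPs; the function name split_yuv and the output naming scheme are made up for illustration:

```c
/* Hypothetical sketch of the batching workaround: split a raw YUV 4:2:0
 * file into fixed-size batches of frames. Each batch file would then be
 * fed to the encoder as an independent input. */
#include <stdio.h>
#include <stdlib.h>

/* Returns the number of batch files written, or -1 on error. */
int split_yuv(const char *in_path, int width, int height,
              int frames_per_batch, const char *out_prefix)
{
    /* Assumption: planar YUV 4:2:0, so one frame is w*h*3/2 bytes. */
    size_t frame_bytes = (size_t)width * height * 3 / 2;
    unsigned char *frame = malloc(frame_bytes);
    if (!frame)
        return -1;

    FILE *in = fopen(in_path, "rb");
    if (!in) {
        free(frame);
        return -1;
    }

    int batch = 0, frame_in_batch = 0;
    FILE *out = NULL;
    char name[256];

    /* Read whole frames; open a new batch file every frames_per_batch. */
    while (fread(frame, 1, frame_bytes, in) == frame_bytes) {
        if (frame_in_batch == 0) {
            snprintf(name, sizeof name, "%s_%03d.yuv", out_prefix, batch);
            out = fopen(name, "wb");
            if (!out)
                break;
        }
        fwrite(frame, 1, frame_bytes, out);
        if (++frame_in_batch == frames_per_batch) {
            fclose(out);
            out = NULL;
            frame_in_batch = 0;
            batch++;
        }
    }
    if (out) {           /* close a final, partially filled batch */
        fclose(out);
        batch++;
    }
    fclose(in);
    free(frame);
    return batch;
}
```

Note that encoding each batch independently restarts the encoder's state at every boundary, which is part of why this feels like a hack: each batch begins with a fresh IDR frame and rate control resets.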

Still, this is at best a hack. Can anyone think of a good way to have the Encoder API produce the encoded video as a (real-time) stream instead of saving it to files?




I'm working on the CUDA Encoder. Is there any way to save the output as a .h264 file so that it can be played with Windows Media Player or another player?