Encoder reset

What do I need to do to reset the encoder? By “reset” I mean: the encoder is generating data with inter-predicted frames, and when a new “client” joins in, I would like to force the encoder to produce brand-new data that the new client can understand and decode.
Setting:
picParams.encodePicFlags = NV_ENC_PIC_FLAG_FORCEIDR;
doesn’t really work. It does force a big intra-predicted frame, but that frame is a few bytes smaller than the very first frame the encoder produces. I need a way to make the encoder “reset” so that at some point it produces exactly the same frame as it produces for the very first frame.

I would expect that when I set gopLength to, say, 5, a client that starts receiving from frame 2 obviously can’t decode it, but once it receives frame 5, or 10, etc., it can decode easily, because that frame carries whole-frame data with a header. Unfortunately, right now, if a client doesn’t receive the stream’s very first frame, it can’t decode any of the following frames.

Hi maxest,

Could you elaborate a bit more on what you mean by “reset encoder”? What do you mean by “resetted brand new data”?

We aren’t sure if this will be of any help, but could you check the NVENC API function NvEncReconfigureEncoder? It’s explained in chapter 8.3 of “\doc\NVENC_VideoEncoder_API_ProgGuide” in the Video Codec SDK. Please check whether it could help in your case.
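For reference, a reconfigure request is driven through NV_ENC_RECONFIGURE_PARAMS. The sketch below uses simplified stand-in definitions instead of including nvEncodeAPI.h (the struct and helper name here are ours), but the resetEncoder and forceIDR fields mirror the SDK:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in mirroring the relevant bits of NV_ENC_RECONFIGURE_PARAMS from
// nvEncodeAPI.h. In real code, include the SDK header, set the version
// field, and copy the NV_ENC_INITIALIZE_PARAMS used at session creation
// into reInitEncodeParams before calling NvEncReconfigureEncoder.
struct ReconfigureParamsSketch {
    uint32_t resetEncoder; // 1 = flush encoder state (rate control, references)
    uint32_t forceIDR;     // 1 = make the next submitted frame an IDR frame
};

// Hypothetical helper: prepare a "hard reset" reconfigure request.
ReconfigureParamsSketch MakeResetRequest() {
    ReconfigureParamsSketch p{};
    p.resetEncoder = 1;
    p.forceIDR = 1;
    return p;
}
```

Note that even with both bits set, the encoder restarts its prediction state but does not necessarily re-emit the stream headers in-band.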

Thanks,
Ryan Park

I actually did try the NvEncReconfigureEncoder function, but it didn’t help.
Let’s say, for example, that I create an encoder with GOP=5, then encode and read back the encoded data. These are example data sizes (in bytes) per frame (the content stays the same for all frames):

100032 <- frame 0
50000 <- frame 1
10000 <- frame 2
150 <- frame 3
150 <- frame 4
100000 <- frame 5
50000 <- frame 6
10000 <- frame 7
150 <- frame 8
150 <- frame 9
100000 <- frame 10
50000 <- frame 11
10000 <- frame 12
150 <- frame 13
150 <- frame 14

As you can see, every 5 frames there is a “reset” of data, so each group of 5 images is encoded independently. However, please note that frame 0 is a little bigger than frames 5 and 10. Now, if a receiver reads frame 0 and the subsequent frames, all frames are decoded correctly. However, I would expect that a decoder starting from frame 5 could also decode the data properly. This is not the case! It turns out that frame 0 carries some data in its first 32 bytes that a decoder needs before it can start decoding at all. This data is not present in any other frame, neither frame 5 nor frame 10. So the decoder can’t decode the data if it doesn’t have that “header” or something from frame 0.

I thought that NvEncReconfigureEncoder, called after the 4th and 9th frames, would force the encoder to generate for frames 5 and 10 the same “header” as for frame 0. I also thought that setting GOP=5 alone would be enough. However, no matter how I try to “reset” the encoder, I can’t get away without reading the very first frame 0. The only workaround I have found for this problem, and the one I am currently using, is to simply destroy the encoder with nvEncDestroyEncoder and recreate it with nvEncOpenEncodeSessionEx/nvEncInitializeEncoder. I am pretty sure, though, that there is a more optimal way to do this. I don’t want to literally destroy and recreate an encoder every time a new user connects (and I have to do that to make sure the new user gets data they can decode).
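To make the call order of that workaround concrete, here is a sketch with illustrative stubs standing in for the real NVENC entry points (real code reaches them through the NV_ENCODE_API_FUNCTION_LIST table returned by NvEncodeAPICreateInstance, with session handles and parameter structs omitted here):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative stubs for the NVENC entry points named above; each just
// records that it was called so the sequence can be inspected.
std::vector<std::string> g_callLog;
void DestroyEncoderStub()      { g_callLog.push_back("nvEncDestroyEncoder"); }
void OpenEncodeSessionExStub() { g_callLog.push_back("nvEncOpenEncodeSessionEx"); }
void InitializeEncoderStub()   { g_callLog.push_back("nvEncInitializeEncoder"); }

// The workaround as described: fully tear the session down and rebuild it,
// so the next output frame carries the same header bytes as frame 0.
void ResetByRecreation() {
    DestroyEncoderStub();
    OpenEncodeSessionExStub();
    InitializeEncoderStub();
}
```

The cost of this approach is a full session teardown and GPU resource reallocation on every new connection, which is what makes it feel heavier than necessary.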

Actually, I would still be happy to get an answer :).
What is GOP for if, regardless of GOP size, I can’t decode any frame without having decoded frame 0 first?

Hi maxest,

I’m currently looking for something similar. I’ve just started working with NVENC, and I need a way to make a video of only the last N seconds of an encoded frame sequence, while avoiding keeping the whole sequence.

For now I’ve got to the point where I’m using gopLength = INFINITY and encoding every M-th frame with the flags NV_ENC_PIC_FLAG_FORCEIDR | NV_ENC_PIC_FLAG_OUTPUT_SPSPPS | NV_ENC_PIC_FLAG_FORCEINTRA. Additionally, when encoding an M-th frame, if the sequence has grown too big, I erase the first M frames. This way the video always starts from a correctly encoded frame. If you are using a gopLength other than INFINITY, I think setting repeatSPSPPS to 1 should do the same job for you.

However, with my solution I’ve got another issue. For some reason, when I insert an IDR frame, the video “stutters”. And when I make every frame like that, the video becomes a lot faster than it should be. I thought I was losing some frames, which led to a lower framerate than the video has. My assumption was that this happens because encoding an IDR frame takes a lot longer than encoding a regular frame. So I set nExtraOutputDelay of NvEncoder (I’m using the framework from the SDK sample code) to a number of frames equal to 5 seconds. This way, if a frame is encoded within 5 seconds or less, it will already be waiting for me to read it. However, it did not help. So I’m still looking for a solution.

Hope it helps you.