If I allocate input frames using nvEncCreateInputBuffer, do I need to allocate a bitstream buffer for each one as well? I noticed code in ffmpeg that appears to allocate them together in pairs, n times, but the documentation isn't clear on this at all. I'm not using async encoding, since it isn't supported on datacentre cards (or at least not on the one I'm using), so I assumed I only needed a single bitstream buffer.