Resize RGB before NVENC

I’m a new starter with the NVIDIA SDKs, and I have to use NVENC to encode RGB32 video frames from system memory. The AppEncode sample in the Codec SDK seems like a good starting point, and NVENC appears to accept RGB32 as input, so that’s great.

But I also need to be able to resize the RGB32 frames immediately before they are fed to the encoder. There doesn’t seem to be any resizing capability in the encoder itself. There are a couple of resize functions in the “Utils/” code that ships with the SDK samples, but those only seem to handle YUV formats, not RGB32.

What is the best way of resizing RGB32 in hardware, for direct input to NVENC?

This is how I resize RGB:

// dDst = source device pointer, dDstResize = destination device pointer;
// for packed 8-bit RGB the row step is width * 3 bytes
nppStatus = nppiResize_8u_C3R(dDst, widthSource * 3, size, myNppiRectSource,
                              dDstResize, widthDest * 3, sizeDest,
                              myNppiRectDest, NPPI_INTER_CUBIC);

There is also an nppiResize_8u_C4R if you have four components, such as RGBA.
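To make that concrete, here is a minimal sketch of a full 4-channel resize call, assuming the frame is already in device memory. The function and buffer names (resizeRGB32, dSrc, dDst) are illustrative, and the example uses a tightly packed pitch of width * 4 bytes; if you allocate with nppiMalloc_8u_C4 or cudaMallocPitch, pass the real pitch instead.

```cpp
#include <nppi_geometry_transforms.h>  // nppiResize_8u_C4R

// Illustrative helper: resize a packed RGB32/RGBA frame on the GPU.
// dSrc/dDst are device pointers; pitches here assume tightly packed rows.
NppStatus resizeRGB32(const Npp8u* dSrc, int srcW, int srcH,
                      Npp8u* dDst, int dstW, int dstH) {
    NppiSize srcSize = { srcW, srcH };
    NppiSize dstSize = { dstW, dstH };
    NppiRect srcROI  = { 0, 0, srcW, srcH };   // full-frame ROI
    NppiRect dstROI  = { 0, 0, dstW, dstH };

    // 4 bytes per pixel, so the row step is width * 4 for packed data.
    return nppiResize_8u_C4R(dSrc, srcW * 4, srcSize, srcROI,
                             dDst, dstW * 4, dstSize, dstROI,
                             NPPI_INTER_CUBIC);
}
```

The resulting RGBA surface can then be registered with NVENC as a CUDA input resource, keeping the whole path on the GPU.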
I had thought I would be using NVENC with DX11, but if I need to use nppi functions then I’ll probably have to use CUDA instead. That’s OK, the NVENC samples look very similar for CUDA and DX11 surfaces.

Occasionally I will have to cope with interlaced video - RGB frames comprising 2 interlaced fields. Interlaced video can be tricky to resize if you need to retain the interlaced nature (which I do).

Are there any functions that would resize interlaced frames? If not, I would have to separate the fields into 2 surfaces (half the original height), then resize the 2 independently, then recombine the 2 to produce the resized interlaced frame. Are there any functions that would do field separation and combining like this?
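I’m not aware of a dedicated NPP field-separation function. One trick worth trying (an assumption on my part, not something I’ve verified against every SDK version) is to address each field in place: pass a source pointer at row 0 (top field) or row 1 (bottom field), a row step of twice the frame pitch, and half the frame height, and the resize then operates on one field without any explicit split. If you do want explicit separation and recombination, the row shuffling itself is simple; here is a CPU sketch with illustrative helper names (splitFields/mergeFields) for RGB32 frames:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Split an interlaced RGB32 frame (4 bytes/pixel) into top and bottom
// fields, each half the height of the original frame. Height is assumed even.
void splitFields(const std::vector<uint8_t>& frame, int width, int height,
                 std::vector<uint8_t>& top, std::vector<uint8_t>& bottom) {
    const int pitch = width * 4;  // bytes per row, tightly packed
    top.resize(static_cast<size_t>(pitch) * (height / 2));
    bottom.resize(static_cast<size_t>(pitch) * (height / 2));
    for (int y = 0; y < height / 2; ++y) {
        // Even rows -> top field, odd rows -> bottom field.
        std::copy(frame.begin() + static_cast<size_t>(2 * y) * pitch,
                  frame.begin() + static_cast<size_t>(2 * y + 1) * pitch,
                  top.begin() + static_cast<size_t>(y) * pitch);
        std::copy(frame.begin() + static_cast<size_t>(2 * y + 1) * pitch,
                  frame.begin() + static_cast<size_t>(2 * y + 2) * pitch,
                  bottom.begin() + static_cast<size_t>(y) * pitch);
    }
}

// Interleave two half-height fields back into one interlaced frame.
void mergeFields(const std::vector<uint8_t>& top,
                 const std::vector<uint8_t>& bottom,
                 int width, int height, std::vector<uint8_t>& frame) {
    const int pitch = width * 4;
    frame.resize(static_cast<size_t>(pitch) * height);
    for (int y = 0; y < height / 2; ++y) {
        std::copy(top.begin() + static_cast<size_t>(y) * pitch,
                  top.begin() + static_cast<size_t>(y + 1) * pitch,
                  frame.begin() + static_cast<size_t>(2 * y) * pitch);
        std::copy(bottom.begin() + static_cast<size_t>(y) * pitch,
                  bottom.begin() + static_cast<size_t>(y + 1) * pitch,
                  frame.begin() + static_cast<size_t>(2 * y + 1) * pitch);
    }
}
```

On the GPU the same row copies could be done with cudaMemcpy2D (using a source pitch of twice the frame pitch), so the fields never need to leave device memory between the split, the two resizes, and the merge.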