Hello everyone,
Do you have any advice on the best pipeline for achieving the lowest latency when displaying a UVC camera NV12 stream on a monitor in Windows 10?
I know NVIDIA provides the NPP, NPP-PLUS, and CV-CUDA libraries, which offer functions for NV12 → RGB conversion.
However, that’s just part of the story. I need a pipeline optimized for low latency with minimal CPU involvement.
I was thinking of using Media Foundation to parse the UVC stream, copy the NV12 frames into GPU VRAM through pinned (page-locked) host memory, and then run an NPP function to convert NV12 → RGB on the GPU.
After that, I’m a bit lost: what’s the best way to display it on screen? Should I use a DXGI swap chain, and if so, how would that work?
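For reference, here is my rough understanding of the display step as an untested outline (based on CUDA's D3D11 interop API; corrections welcome, this is not working code):

```
// Pseudocode outline of a CUDA <-> D3D11 display path (untested):
// Setup (once):
//   1. Create a D3D11 device and a DXGI flip-model swap chain
//      (DXGI_SWAP_EFFECT_FLIP_DISCARD, minimal buffer count, waitable object
//      for latency control).
//   2. Create an RGBA texture and register it with
//      cudaGraphicsD3D11RegisterResource().
// Per frame:
//   cudaGraphicsMapResources(1, &res, stream);
//   cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);
//   cudaMemcpy2DToArrayAsync(...);   // copy the converted RGB(A) frame in
//   cudaGraphicsUnmapResources(1, &res, stream);
//   Draw/copy the texture to the back buffer, then Present().
```

Is this the right general shape, or is there a lower-latency route (e.g. writing the conversion output straight into an interop-mapped texture to skip the extra copy)?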
Thanks!