Redundant video encoding

Hi, I’m working on game streaming and trying to find a redundant video encoding solution to improve video quality over WiFi with a large amount of packet loss. Is there any NVENC feature that could help?

Thanks in advance

Yes, for one-to-many streaming. You can create multiple encoder sessions with different parameters and differently scaled input images. But be aware: on consumer-grade GPUs (GTX/RTX) and low-end Quadro cards, only 2 simultaneous encoder sessions are allowed per system [https://developer.nvidia.com/video-encode-decode-gpu-support-matrix#Encoder]. The resulting video streams can be broadcast and multiplexed using HLS (https://tools.ietf.org/html/rfc8216) or similar technologies.
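As a sketch of the "differently scaled input" idea (this is not NVENC API code — the scaling helper and its shape are assumptions for illustration), a second, lower-rate encoder session can be fed a half-resolution copy of each frame:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: produce a half-resolution copy of a luma plane by
// nearest-neighbor sampling, usable as input for a second, lower-bitrate
// encoder session. Real code would scale all planes (e.g. NV12) and would
// normally use a GPU scaler rather than the CPU.
std::vector<uint8_t> downscaleHalf(const std::vector<uint8_t>& src,
                                   size_t width, size_t height) {
    const size_t dstW = width / 2, dstH = height / 2;
    std::vector<uint8_t> dst(dstW * dstH);
    for (size_t y = 0; y < dstH; ++y)
        for (size_t x = 0; x < dstW; ++x)
            dst[y * dstW + x] = src[(y * 2) * width + (x * 2)];
    return dst;
}
```

The full-resolution frame goes to the primary session, the downscaled copy to the redundant one; the client switches to the smaller stream when loss is high.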

Yes, for point-to-point streaming. You can add a feedback channel to your streaming protocol to change the stream's FPS and encoder parameters on the fly (see the NvEncReconfigureEncoder() API). For examples, look at "professional protocols" like "HDX Adaptive Transport" or "Enlightened Data Transport"…
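A minimal sketch of such a feedback loop (the controller and its thresholds are assumptions, not part of NVENC — only the computed bitrate would actually be handed to NvEncReconfigureEncoder()):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical receiver-feedback controller: backs the target bitrate off
// when packet loss is reported and slowly probes back up when the link is
// clean. The new value would then be applied via NvEncReconfigureEncoder().
struct RateController {
    uint32_t bitrateKbps;
    uint32_t minKbps, maxKbps;

    // lossPercent comes from receiver feedback (e.g. RTCP-style reports).
    uint32_t onFeedback(double lossPercent) {
        if (lossPercent > 1.0)  // back off hard on real loss
            bitrateKbps = static_cast<uint32_t>(bitrateKbps * 0.7);
        else                    // probe upward ~5% per clean report
            bitrateKbps = static_cast<uint32_t>(bitrateKbps * 1.05);
        bitrateKbps = std::clamp(bitrateKbps, minKbps, maxKbps);
        return bitrateKbps;
    }
};
```

The 0.7/1.05 factors are placeholders; real adaptive-transport protocols tune these against measured RTT and loss patterns.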

If you want to protect low-latency streaming over WiFi or any other lossy network, here is what we use ->
ElasticFrameProtocol for framing the NAL units, ADTS audio (in our case) and AUX data.
https://bitbucket.org/unitxtra/efp/src/master/
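To illustrate the framing idea only (this is not EFP's real API — the struct and function here are invented for the sketch): a whole access unit is cut into MTU-sized fragments, each tagged so the receiver can reassemble it and detect loss per superframe.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only -- not ElasticFrameProtocol's actual API.
struct Fragment {
    uint64_t frameId;              // which superframe this belongs to
    uint16_t index;                // fragment number within the superframe
    uint16_t total;                // total fragments in the superframe
    std::vector<uint8_t> payload;  // slice of the original frame
};

// Cut one frame (e.g. a NAL unit) into MTU-sized fragments.
std::vector<Fragment> fragmentFrame(const std::vector<uint8_t>& frame,
                                    uint64_t frameId, size_t mtu) {
    std::vector<Fragment> out;
    const uint16_t total =
        static_cast<uint16_t>((frame.size() + mtu - 1) / mtu);
    for (uint16_t i = 0; i < total; ++i) {
        const size_t off = static_cast<size_t>(i) * mtu;
        const size_t len = std::min(mtu, frame.size() - off);
        out.push_back({frameId, i, total,
                       std::vector<uint8_t>(frame.begin() + off,
                                            frame.begin() + off + len)});
    }
    return out;
}
```

On the receive side, a gap in the index sequence for a given frameId tells you exactly which superframe is incomplete, which is what lets the transport decide between retransmission and concealment.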

Then as transport we use SRT or RIST.
SRT Example ->
https://bitbucket.org/unitxtra/cppsrtframingexample/src/master/
RIST Example ->
https://bitbucket.org/unitxtra/cppristframingexample/src/master/

RIST is a work in progress by the VideoLAN folks right now, so the build might be broken. There are new commits all the time and the APIs are not yet stable.

The protocols proposed above use ARQ, and you can set a time limit, meaning you can control the maximum delay you allow for. However, a lower delay setting also protects against less loss, so you need to find the levels that work for your application.
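The delay/loss trade-off can be sketched like this (the struct and numbers are assumptions for illustration, not SRT or RIST code): a retransmission is only worth requesting if the repaired packet can still arrive inside the configured maximum delay.

```cpp
// Illustrative ARQ time-budget logic, not actual SRT/RIST internals.
struct ArqBudget {
    double maxDelayMs;  // configured time limit (the "latency window")
    double rttMs;       // measured round-trip time to the sender

    // elapsedMs: time since the lost packet was originally sent. A request
    // only makes sense if the answer can arrive before the deadline.
    bool worthRetransmitting(double elapsedMs) const {
        return elapsedMs + rttMs <= maxDelayMs;
    }

    // Rough count of retransmission attempts that fit in the window:
    // this is why a lower delay setting covers for less loss.
    int maxAttempts() const {
        return static_cast<int>(maxDelayMs / rttMs);
    }
};
```

For example, a 120 ms window over a 40 ms RTT link leaves room for about three repair attempts; shrink the window to 50 ms and a single loss may already be unrecoverable.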

/Anders