So here is the situation:
- There is a remotely controlled robot that has a camera. The camera takes 20 images per second. Each image has a resolution of 4000x3000 in BMP format, so each image is roughly 36 MB.
- The connection between the robot and the server is cutting edge. Latency is always under 20 ms with no spikes at all. It is as reliable as it gets: there is virtually no fear of losing a single bit to congestion or a temporary loss of connection (unless the server or the robot crashes).
- We are talking about a stream of roughly 700 MB per second here (20 frames x 36 MB = 720 MB/s).
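That 700 MB/s figure checks out from the raw numbers. A quick sanity calculation (assuming 24-bit BMPs, i.e. 3 bytes per pixel, and ignoring the small BMP header):

```python
# Back-of-the-envelope bandwidth for the uncompressed BMP stream.
width, height, bytes_per_pixel = 4000, 3000, 3   # 24-bit BMP assumed
fps = 20

bytes_per_frame = width * height * bytes_per_pixel    # 36,000,000 B per image
bytes_per_second = bytes_per_frame * fps              # 720,000,000 B/s

print(f"per frame:  {bytes_per_frame / 1e6:.0f} MB")    # -> 36 MB
print(f"per second: {bytes_per_second / 1e6:.0f} MB/s") # -> 720 MB/s
```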
- The screen on which the video is shown is only Full HD (1920x1080), so downscaling must happen, preferably as early as possible, to avoid wasting processing power.
- FFmpeg fetches those images and encodes them to H.265 or H.264. The end user can choose different parameters, but will mostly pick lossless (preset 11, as far as I understand).
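For reference, a sketch of the kind of FFmpeg invocation this implies: scale to 1080p in the filter chain (so the encoder never sees the 12-megapixel frames) and encode losslessly on the GPU via NVENC. The input pattern and output target are placeholders, and the exact preset/tune names depend on the FFmpeg version (newer builds use `-preset p1`..`p7` plus `-tune lossless`; older nvenc builds used `-preset lossless` directly):

```shell
ffmpeg -framerate 20 -i frame_%05d.bmp \
  -vf scale=1920:-2 \
  -c:v hevc_nvenc -preset p7 -tune lossless \
  -f mpegts output.ts
```

Note that "lossless" here only applies after the downscale; the scale step itself discards information, which is presumably acceptable since the display is 1080p anyway.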
- FFmpeg then streams this to a web app that resides on the same server; it uses the MERN stack.
- On a Quadro P3200 I get 60 FPS with hardware acceleration and only 8 FPS using the CPU. GPU usage is at 50%. I'm not sure whether the bottleneck is the GPU or the SSD (since I'm using stored BMP files for testing). I have a Samsung 970 PRO NVMe.
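One way to rule the SSD in or out is a rough sequential-read benchmark in frame-sized chunks. A minimal sketch (the file size and chunk size are illustrative; cached reads will overstate the real numbers, so for a serious measurement use a file larger than RAM or drop the page cache first):

```python
# Rough sequential-read benchmark: write a scratch file, then time
# reading it back in ~36 MB chunks (one 4000x3000 24-bit frame each).
import os
import tempfile
import time

CHUNK = 36_000_000   # bytes per frame, roughly
N_CHUNKS = 4         # small scratch file for a quick demo

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    for _ in range(N_CHUNKS):
        f.write(os.urandom(CHUNK))

t0 = time.perf_counter()
read_total = 0
with open(path, "rb") as f:
    while chunk := f.read(CHUNK):
        read_total += len(chunk)
elapsed = time.perf_counter() - t0
os.remove(path)

mb_per_s = read_total / elapsed / 1e6
print(f"read {read_total / 1e6:.0f} MB at {mb_per_s:.0f} MB/s")
```

If the measured throughput comfortably exceeds the ~720 MB/s the stream needs (a 970 PRO is rated around 3 GB/s sequential), the bottleneck is more likely the GPU pipeline than the disk.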
- The budget is $2000. Minimal power consumption must also be taken into consideration.