RTX 3060 Ti upgrade to RTX 3080 Ti: would the investment be worth it for OCR reading with 8 cameras connected?

I am working on a project in which I have 8 cameras connected to a machine with an RTX 3060 Ti and 100 GB of memory, connected through a gigabit switch. I am seeing a delay of 9 seconds and would like to know whether switching to an RTX 3080 Ti would help me at all, or whether something else in the hardware is responsible for such a large delay. The system is developed in Python with YOLO and CUDA.

Hi @ricardo27,

Can you explain in a bit more detail? What is the rest of the system setup?

  • How are you reading from the cameras? HDMI with some capture cards or USB?
  • Where exactly do you see the delay?
  • What is the GPU doing exactly? Is it encoding the camera streams to something else?

To compare the 3060TI vs 3080TI we need to know what feature of the GPU you are pushing to the limit with your project.

Thanks!

Hello Mark, how are you? I hope you are well.
Thanks for answering.
Can you explain in a bit more detail? What is the rest of the system setup?
- OS: Ubuntu Linux 18.04; CAT6 network cable (gigabit)
How are you reading the cameras? HDMI with some capture cards or USB?
- PoE cable passing through the switch, 5 meters
Where exactly do you see the delay?
- After the license plate appears on the monitor, the system only reports the detection about 9 seconds later.
What is the GPU doing exactly? Is it encoding the camera streams to something else?
- It is running a neural network for real-time object detection on all camera streams. The GPU handles the images sent by the cameras directly; it does not collect the images from the cameras, it just applies mathematical operations on them to arrive at the detection result.
To compare the 3060TI vs 3080TI, we need to know which GPU feature you are pushing to the limit with your project.
- My impression is that it will not be worthwhile to change the card, because the difference is minimal; correct me if I am wrong.
Thanks

Thanks for the details!

One thing that might be causing the delay is the process that decodes the camera streams into image data for the GPU; you should look at the latency of that part of your pipeline.
If you send the video stream to the GPU as-is, it first has to be decoded, which will introduce a slight delay, but not in the range of several seconds.
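To rule that out, you can time the grab/decode step on its own. Here is a minimal sketch, assuming the cameras are read as RTSP streams with OpenCV; the URL below is hypothetical, so substitute one of your actual streams:

```python
import time
import cv2  # OpenCV handles the grab + decode of the stream

# Hypothetical RTSP URL; replace with one of your camera streams.
cap = cv2.VideoCapture("rtsp://192.168.0.10/stream1")

timings = []
for _ in range(100):
    t0 = time.perf_counter()
    ok, frame = cap.read()  # grab and decode one frame
    if not ok:
        break
    timings.append(time.perf_counter() - t0)

cap.release()
print(f"mean grab+decode per frame: {1000 * sum(timings) / len(timings):.1f} ms")
```

Keep in mind that a small per-frame decode time does not rule out a multi-second end-to-end delay: if you read frames more slowly than the cameras produce them, frames queue up in the capture buffer and every frame you process is already seconds old. A delay that grows the longer the system runs is a telltale sign of this.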
So the more likely culprit is the object detection step itself.
What is the resolution of the images? I would be surprised if an object detection inference on 8 images took several seconds, even on a 3060 Ti. You should look into our Nsight tools to see if you can analyse this further; a rough Python-level timing, as sketched below, can also help narrow it down first.
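Here is a minimal sketch for timing the inference step, assuming a PyTorch YOLOv5 model loaded via torch.hub (adapt the model loading and input size to however your project actually runs YOLO). The torch.cuda.synchronize() calls matter because CUDA kernels launch asynchronously, and timing without them underestimates the real cost:

```python
import time
import torch

# Hypothetical model: a small YOLOv5 checkpoint from torch.hub.
# Replace with however your project loads its YOLO network.
model = torch.hub.load("ultralytics/yolov5", "yolov5s").cuda().eval()

# One dummy 640x640 RGB frame per camera, batched (8 cameras).
batch = torch.rand(8, 3, 640, 640, device="cuda")

with torch.no_grad():
    for _ in range(10):           # warm-up: exclude CUDA init costs
        model(batch)
    torch.cuda.synchronize()      # wait for all queued kernels
    t0 = time.perf_counter()
    for _ in range(50):
        model(batch)
    torch.cuda.synchronize()      # make sure the timed kernels finished
    elapsed = time.perf_counter() - t0

print(f"mean inference per 8-frame batch: {1000 * elapsed / 50:.1f} ms")
```

If that comes out in the tens of milliseconds, as I would expect, then the 9 seconds are accumulating elsewhere in the pipeline, and a faster GPU alone will not remove them.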

Lastly, if you check out the comparison table on GeForce 30 series - Wikipedia, for example, there is quite a significant difference between the 3060 Ti and the 3080 Ti: the 3080 Ti has double the number of Tensor Cores, and as such double the TFLOPS across the board for deep learning tasks.