Reserving GPU Resources for Graphics and Model Inference

I’m considering a robotics project that would use a combination of a 3D graphics UI and multi-camera/sensor model inference. What is the best way to reserve GPU resources or prioritize one over the other?

Hi,

Work submitted to different CUDA streams can execute concurrently on the GPU.
So you can use a separate CUDA stream for each task (one for the graphics/UI work, one for the inference work).
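
Since you also asked about prioritizing one workload over the other, here is a minimal sketch using stream priorities via cudaStreamCreateWithPriority. The kernel names renderKernel and inferKernel are placeholders for your actual graphics and inference work. Note that stream priorities only influence how pending work is scheduled; they do not hard-reserve GPU resources for one stream.

```cpp
// Minimal sketch: two CUDA streams with different priorities.
// renderKernel/inferKernel are placeholders for the real workloads.
#include <cuda_runtime.h>

__global__ void renderKernel() { /* placeholder for UI/graphics work */ }
__global__ void inferKernel()  { /* placeholder for inference work   */ }

int main() {
    // Query the valid priority range for this device.
    // Numerically lower values mean higher priority.
    int leastPriority = 0, greatestPriority = 0;
    cudaDeviceGetStreamPriorityRange(&leastPriority, &greatestPriority);

    cudaStream_t uiStream, inferStream;
    // Give the inference stream the highest priority, the UI stream the lowest.
    cudaStreamCreateWithPriority(&inferStream, cudaStreamNonBlocking, greatestPriority);
    cudaStreamCreateWithPriority(&uiStream,    cudaStreamNonBlocking, leastPriority);

    // Launch each workload into its own stream; they can overlap on the GPU,
    // and pending inference work is scheduled ahead of UI work.
    renderKernel<<<1, 256, 0, uiStream>>>();
    inferKernel<<<1, 256, 0, inferStream>>>();

    cudaStreamSynchronize(uiStream);
    cudaStreamSynchronize(inferStream);

    cudaStreamDestroy(uiStream);
    cudaStreamDestroy(inferStream);
    return 0;
}
```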

Thanks.
