[QST] Best practices for handling contention for GPU resources?


As Jetson devices are resource-constrained, processes sometimes compete for the GPU.

For example, process A (a Thrust-based point-cloud processing node) and process B (a TensorRT-based inference node) each use more than half of the GPU cores.

Process A can be blocked by B, so its processing time/latency increases significantly, which is unacceptable when A is time-critical.

My thoughts on this problem come down to two ideas:

  1. Apply a real-time setup to A so it gets a higher priority and scheduling policy
  2. Limit GPU usage in the context of Thrust and TensorRT

My questions are:

  1. Is a real-time setup feasible for GPU programs?
  2. How can GPU usage be limited for TensorRT, and likewise for Thrust?

I have checked NVIDIA/nvidia-docker#1059 and it seems there is no good solution for Jetson devices.



Jetson Xavier with JetPack 4.4.1

BTW, this originated from [QST] How to limit gpu usage in the context of tensorrt? · Issue #1210 · dusty-nv/jetson-inference · GitHub

Any help would be appreciated.

Hi @zhensheng,

Stream priority may improve the situation. Real-time guarantees are hard to provide though.
We recommend posting your concern on the Jetson forum to get better help.
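The stream-priority suggestion above could be sketched as follows. This is a hedged example, not a guaranteed fix: priority is a hint for scheduling new thread blocks, not preemption of blocks already running. The kernel name is hypothetical.

```cuda
// Sketch: run the time-critical work on a high-priority CUDA stream so
// the GPU scheduler prefers its thread blocks when both workloads are
// resident. Build with: nvcc -o prio prio.cu
#include <cuda_runtime.h>
#include <cstdio>

__global__ void timeCriticalKernel() { /* e.g. point-cloud processing */ }

int main() {
    // Numerically lower values mean higher priority in CUDA.
    int leastPrio = 0, greatestPrio = 0;
    cudaDeviceGetStreamPriorityRange(&leastPrio, &greatestPrio);
    std::printf("priority range: least=%d greatest=%d\n",
                leastPrio, greatestPrio);

    cudaStream_t highPrio;
    cudaStreamCreateWithPriority(&highPrio, cudaStreamNonBlocking,
                                 greatestPrio);

    // Launch process A's time-critical kernel on the prioritized stream.
    timeCriticalKernel<<<64, 256, 0, highPrio>>>();
    cudaStreamSynchronize(highPrio);
    cudaStreamDestroy(highPrio);
    return 0;
}
```

The same stream can be used from both libraries mentioned in the question: Thrust accepts it via the execution policy `thrust::cuda::par.on(highPrio)`, and a TensorRT execution context takes a stream argument when enqueuing inference.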

Thank you.
