How to share TensorRT between processes

My problem is similar to the topic "cuDNN take up too much GPU memory". When I start two trtexec processes, they occupy almost the same amount of memory. But according to the answer there, the TensorRT and cuDNN dynamic libraries can be shared between processes. Is there anything we need to set?


May I know which model you are using first?

Please note that in addition to the cuDNN library, TensorRT will allocate some memory as workspace.
Every model has its own workspace, and it cannot be shared.
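For reference, the per-model workspace size is configurable when building or benchmarking an engine. A hedged sketch with trtexec (the file name `model.onnx` is just a placeholder, and the exact flag depends on the TensorRT version installed):

```shell
# Cap TensorRT's per-process workspace at 256 MiB.
# Older trtexec releases use --workspace=<MiB>;
# newer releases replace it with --memPoolSize=workspace:<size>.
trtexec --onnx=model.onnx --workspace=256
```

This only bounds the workspace portion of each process's memory; the library and CUDA context allocations discussed below are separate.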


Both processes use the same model. If the cuDNN library could be shared, the second process should use about 600 MB less memory than the first, but they use almost the same amount.


Thanks for your report.

We are checking this issue internally.
We will share more information with you later.

Thanks. Is there any progress?


Thanks for your patience.
We have some information about this issue.

Please note that the CUDA context on Jetson is created per process.
Since the cuDNN/TensorRT context is built on top of the CUDA context, sharing a context across different processes is not possible.

On Jetson, it is recommended to use multi-threading (for different models) or TensorRT batching (for the same model) instead.

