Can multiple cudaStream instances share the same TensorRT execution context?


We are developing a C++ application that runs 2 TRT engines in multiple threads. To achieve better throughput, we use 2 cudaStream instances per engine. Can these 2 cudaStreams share one TensorRT execution context, or do we need to create a separate execution context for each cudaStream? Which is the correct usage of cudaStream and TensorRT execution context?
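For reference, the one-context-per-stream alternative mentioned above could be sketched as below. This is only an illustration of the pattern in question, not a confirmed answer; the engine deserialization and the `bindings` device buffers are assumed to be set up elsewhere, and `enqueueV2` is the TensorRT 8.x async API:

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <thread>

// Sketch: each worker thread gets its OWN IExecutionContext and its own
// cudaStream, both created from a shared, already-deserialized engine.
// Contexts share the engine's weights, so the per-context memory overhead
// is mostly the activation workspace.
void runOnStream(nvinfer1::ICudaEngine* engine, void** bindings) {
    nvinfer1::IExecutionContext* ctx = engine->createExecutionContext();
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Asynchronously enqueue inference on this thread's private stream.
    ctx->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    ctx->destroy();
}

void launchWorkers(nvinfer1::ICudaEngine* engine,
                   void** bindingsA, void** bindingsB) {
    // Two streams for one engine -> two contexts, one per stream.
    std::thread t1(runOnStream, engine, bindingsA);
    std::thread t2(runOnStream, engine, bindingsB);
    t1.join();
    t2.join();
}
```

The open question is whether the two `runOnStream` workers could instead share a single `IExecutionContext`, or whether one context per stream (as sketched) is required.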


TensorRT Version:

GPU Type: RTX3090

Nvidia Driver Version: 470.57

CUDA Version: 11.4.2

Operating System + Version: Ubuntu 20.04

Hi @tjliupeng ,
I am checking on this with the Engineering team and shall update.


Any update? @AakankshaS