Can multiple cudaStream instances share the same TensorRT execution context?

Description

We are developing a C++ application that runs 2 TensorRT engines in multiple threads. To achieve better throughput, we use 2 cudaStream instances for each engine. Can these 2 cudaStreams share one TensorRT execution context, or do we need to create a separate execution context for each cudaStream? Which is the correct usage of cudaStream together with the TensorRT execution context?
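For illustration, below is a minimal sketch of the pattern we are asking about: one engine, two cudaStreams, and one execution context per stream. The engine path, binding buffers, and error handling are placeholders, not our actual code.

```cpp
// Sketch only: one engine deserialized once, then two execution contexts,
// each enqueued on its own cudaStream. Paths and bindings are placeholders.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

int main() {
    Logger logger;

    // Load a serialized engine (file name is a placeholder).
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());

    // Two streams for the same engine.
    cudaStream_t stream0, stream1;
    cudaStreamCreate(&stream0);
    cudaStreamCreate(&stream1);

    // Variant under discussion: a separate IExecutionContext per stream.
    auto* ctx0 = engine->createExecutionContext();
    auto* ctx1 = engine->createExecutionContext();

    // In real code these would point to per-stream device buffers
    // allocated with cudaMalloc and sized from the engine bindings.
    void* bindings0[2] = {};
    void* bindings1[2] = {};

    // Each thread enqueues only on its own context and stream.
    ctx0->enqueueV2(bindings0, stream0, nullptr);
    ctx1->enqueueV2(bindings1, stream1, nullptr);

    cudaStreamSynchronize(stream0);
    cudaStreamSynchronize(stream1);

    // ... release contexts, engine, runtime, and streams ...
    return 0;
}
```

The alternative we want to compare against would keep a single execution context and pass both streams to its enqueue calls from different threads.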

Environment

TensorRT Version: 8.4.3.1

GPU Type: RTX3090

Nvidia Driver Version: 470.57

CUDA Version: 11.4.2

Operating System + Version: Ubuntu 20.04

Hi @tjliupeng,
I am checking on this with the engineering team and will update here.

Thanks

Any update? @AakankshaS