Multi-threaded inference with TensorRT in Python

TensorRT Version: 7.1.1
Operating System + Version: JetPack 4.4, Jetson Xavier AGX
Python Version (if applicable): 3.6

How do I run multiple TensorRT engines across threads in Python? I created the engine in one thread and used a separate execution context in each worker thread (one per camera), but I get Cudnn Error in configure: 7 (CUDNN_STATUS_MAPPING_ERROR).

In the worker threads I call cfx.push() and cfx.pop() (where cfx is cuda.Device(0).make_context()), but I still get CUDNN_STATUS_MAPPING_ERROR.
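A common cause of CUDNN_STATUS_MAPPING_ERROR in this setup is that the CUDA context created in the main thread is still current there while worker threads try to use it, or that execution contexts and device buffers are created outside the pushed context. One pattern that is often reported to work is: create the context and deserialize the engine in the main thread, pop the context, then have each worker push it before creating its execution context and buffers and pop it before exiting. The sketch below illustrates that pattern; the engine path "model.engine" and the buffer-allocation step are placeholders, and this requires a GPU with TensorRT 7.x and PyCUDA installed, so it is a sketch rather than a verified fix.

```python
import threading

import pycuda.driver as cuda
import tensorrt as trt

cuda.init()
cfx = cuda.Device(0).make_context()  # context is now current in the main thread

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# "model.engine" is a placeholder path for your serialized engine
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

cfx.pop()  # detach the context from the main thread before starting workers


def worker(camera_id):
    cfx.push()  # make the shared context current in THIS thread
    try:
        # Create the execution context, stream, and device buffers
        # while the context is pushed, so all allocations map correctly.
        exec_ctx = engine.create_execution_context()
        stream = cuda.Stream()
        # ... allocate input/output buffers with cuda.mem_alloc(...),
        # then run exec_ctx.execute_async_v2(bindings, stream.handle)
        # and stream.synchronize() per frame ...
    finally:
        cfx.pop()  # always detach before the thread exits


threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

cfx.push()  # re-attach in the main thread so teardown happens in-context
cfx.pop()
cfx.detach()
```

The key points are that every CUDA call in a worker happens between that thread's push() and pop(), and that the pop() is in a finally block so the context is never left attached to a dead thread.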

Hi, this looks like a Jetson-specific issue. We recommend raising it on the respective platform's forum via the link below.

Thanks!