TensorRT inference with multiple threads and multiple streams

Hi NVIDIA friends, I use multiple threads to run inference in parallel. It is mentioned here (Developer Guide :: NVIDIA Deep Learning TensorRT Documentation) that each thread should have its own IExecutionContext and CUDA stream. Is the IExecutionContext in each thread created explicitly by the user, or does each thread automatically generate one?

Does NVIDIA provide an example of this kind?
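For anyone reading this thread later, here is a minimal sketch of the pattern the Developer Guide describes: the application deserializes one ICudaEngine and each worker thread explicitly calls createExecutionContext() and creates its own cudaStream_t (contexts are never created implicitly). It assumes TensorRT 8.5 or newer (setTensorAddress/enqueueV3); the engine file name "model.plan", the tensor names "input"/"output", and the buffer sizes are placeholders, and error handling is omitted.

```cpp
#include <cstdio>
#include <fstream>
#include <iterator>
#include <thread>
#include <vector>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) printf("%s\n", msg);
    }
} gLogger;

void worker(nvinfer1::ICudaEngine* engine) {
    // Created explicitly by the application, one context per thread.
    nvinfer1::IExecutionContext* ctx = engine->createExecutionContext();

    // One private stream per thread so work enqueued from different
    // threads can overlap on the GPU.
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Device buffers; sizes are placeholders, so query the engine's
    // tensor shapes in real code.
    void *in, *out;
    cudaMalloc(&in, 1 << 20);
    cudaMalloc(&out, 1 << 20);
    ctx->setTensorAddress("input", in);   // placeholder tensor names
    ctx->setTensorAddress("output", out);

    ctx->enqueueV3(stream);               // async inference on this stream
    cudaStreamSynchronize(stream);

    cudaFree(in);
    cudaFree(out);
    cudaStreamDestroy(stream);
    delete ctx;  // context is per-thread; the engine is shared
}

int main() {
    // Deserialize the engine once; it is shared read-only by all threads.
    std::ifstream f("model.plan", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());

    // Each thread gets its own IExecutionContext and CUDA stream.
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker, engine);
    for (auto& t : threads) t.join();

    delete engine;
    delete runtime;
    return 0;
}
```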

Your topic was posted in the wrong category. I am moving this to the Jetson AGX category for visibility.

Which Jetson platform and JetPack version are you using?

Hi,

Can our trtexec sample meet your requirement?
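As a hedged usage note (not confirmed in this thread): recent trtexec builds can exercise this multi-stream/multi-thread pattern from the command line via the `--streams=N` and `--threads` flags; "model.plan" below is a placeholder engine file, and flag availability depends on the TensorRT version shipped with your JetPack.

```
# Run 4 concurrent streams, driven by independent host threads.
trtexec --loadEngine=model.plan --streams=4 --threads
```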

Thanks.
