Is the TensorRT ICudaEngine thread-safe?

Hi all,

I created an engine as follows in the main thread:
//------------------------------------------------------
IBuilder* builder = createInferBuilder(gLogger);
INetworkDefinition* network = builder->createNetwork();

// ... populate the network here (e.g. via a parser or layer by layer) ...

ICudaEngine* engine = builder->buildCudaEngine(*network);

I want to run inference from multiple threads. Is the engine above thread-safe?

Regards,
Yan

I have exactly the same question.
Looking forward to any suggestions!

See https://docs.nvidia.com/deeplearning/sdk/tensorrt-best-practices/index.html#thread-safety — in short, the engine itself is thread-safe and can be shared, but each thread must use its own IExecutionContext, and the builder may only be used by one thread at a time.
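The pattern the thread-safety section describes can be sketched as follows. This is a minimal illustration, not a complete program: it assumes an engine built as in the original post, and the device buffers (`inputDev`, `outputDev`) and the binding order are placeholders that depend on your network.

```cpp
#include <thread>
#include <utility>
#include <vector>
#include <NvInfer.h>

using namespace nvinfer1;

// Each worker thread creates and owns its own IExecutionContext.
// Contexts are NOT thread-safe and must never be shared across threads;
// the ICudaEngine, by contrast, may be shared freely.
void inferWorker(ICudaEngine* engine, void* inputDev, void* outputDev)
{
    IExecutionContext* context = engine->createExecutionContext();

    // Binding order must match the engine's bindings; illustrative here.
    void* bindings[] = {inputDev, outputDev};
    context->execute(/*batchSize=*/1, bindings);  // synchronous inference

    context->destroy();
}

// Launch one worker per pre-allocated (input, output) device-buffer pair.
void runMultiThreaded(ICudaEngine* engine,
                      std::vector<std::pair<void*, void*>>& buffers)
{
    std::vector<std::thread> workers;
    for (auto& b : buffers)
        workers.emplace_back(inferWorker, engine, b.first, b.second);
    for (auto& t : workers)
        t.join();
}
```

For higher throughput you would typically also give each thread its own CUDA stream and use the asynchronous `enqueue` call instead of `execute`, but the ownership rule is the same: one execution context per thread.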