[TensorRT] engine throws an error when run from multiple threads

Thanks!

I found that TensorRT's documentation describes its thread safety as follows:

The TensorRT builder may only be used by one thread at a time. If you need to run multiple builds simultaneously, you will need to create multiple builders.
The TensorRT runtime can be used by multiple threads simultaneously, so long as each object uses a different execution context.
Note: Plugins are shared at the engine level, not the execution context level, and thus plugins which may be used simultaneously by multiple threads need to manage their resources in a thread-safe manner. This is however not required for plugins based on IPluginV2Ext and derivative interfaces since we clone these plugins when ExecutionContext is created.
The TensorRT library pointer to the logger is a singleton within the library. If using multiple builder or runtime objects, use the same logger, and ensure that it is thread-safe.
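The rules above suggest a pattern: share one engine across all threads, but give each thread its own execution context. A minimal sketch of that pattern, using `threading.local` for lazy per-thread contexts (the `Engine` class here is a stand-in, since real TensorRT calls like `engine.create_execution_context()` need a GPU):

```python
import threading

class Engine:
    """Stand-in for a deserialized TensorRT engine (shared, thread-safe)."""
    def create_execution_context(self):
        # In real code this would return a trt.IExecutionContext.
        return object()

engine = Engine()        # built/deserialized once, shared by all threads
_tls = threading.local() # per-thread storage

def get_context():
    # Lazily create one execution context per thread, per TensorRT's rule
    # that a context must not be used by two threads at once.
    if not hasattr(_tls, "context"):
        _tls.context = engine.create_execution_context()
    return _tls.context

# Demo: a thread always reuses its own context; threads never share one.
results = {}
def worker(name):
    results[name] = (get_context(), get_context())

t1 = threading.Thread(target=worker, args=("a",))
t2 = threading.Thread(target=worker, args=("b",))
t1.start(); t2.start(); t1.join(); t2.join()

assert results["a"][0] is results["a"][1]      # same thread -> same context
assert results["a"][0] is not results["b"][0]  # different threads -> different contexts
```

In a real server each thread would also hold its own CUDA stream and enqueue work with `context.execute_async_v2(..., stream_handle=stream.handle)`; sharing one stream or context across threads is a common cause of "invalid resource handle" errors like the one below.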

So I changed the server to run single-threaded, and it works well.


But when I tried to use it in a multithreaded way, the app apparently needs separate GPU memory for each thread.

It didn’t work and kept producing the same errors:

./receive230116022455/ac_receive.jpg
[01/16/2023-06:04:08] [TRT] [E] 1: [convolutionRunner.cpp::execute::391] Error Code 1: Cask (Cask convolution execution)
[01/16/2023-06:04:08] [TRT] [E] 1: [checkMacros.cpp::catchCudaError::272] Error Code 1: Cuda Runtime (invalid resource handle)
['./receive230116022455/ac_receive.jpg', array([0., 0., 0., 0.], dtype=float32), 'NG1']