Segmentation fault (core dumped) when using TensorRT with multithreading

Description

I am trying to follow a previous post on using TensorRT with multithreading: https://forums.developer.nvidia.com/t/how-to-use-tensorrt-by-the-multi-threading-package-of-python/123085/8

However, I keep getting a segmentation fault (core dumped). The solution provided there appears to have worked for the original poster, but it does not seem to work for me. May I know why that is?

Environment

TensorRT Version: 7.1
GPU Type: Nvidia RTX 3080
Nvidia Driver Version: 460.71.01
CUDA Version: 11.2
CUDNN Version: 8
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6

Hi,
The links below might be useful for you:
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#thread-safety
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#stream-priorities
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
For multi-threading/streaming, we suggest using DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.
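For reference, the thread-safety guide's core rule is that a TensorRT engine may be shared across threads, but each thread must create and use its own execution context; sharing one `IExecutionContext` between threads is a common cause of segfaults. The sketch below illustrates that per-thread-context pattern with dummy stand-in classes (an assumption, so it runs without a GPU or TensorRT installed); in real code you would also push/pop the pycuda CUDA context inside each worker, as the linked thread does.

```python
import threading

# Dummy stand-ins for trt.ICudaEngine / trt.IExecutionContext
# (assumption: the real TensorRT classes are replaced so this
# example runs anywhere, without a GPU).
class Engine:
    """Shareable across threads, like a deserialized TensorRT engine."""
    def create_execution_context(self):
        return ExecutionContext()

class ExecutionContext:
    """NOT thread-safe: must only be used by the thread that created it."""
    def execute(self, x):
        return x * 2  # placeholder for actual inference

engine = Engine()  # one shared engine for all threads
results = {}
lock = threading.Lock()

def worker(tid, data):
    # Each thread creates its OWN execution context instead of
    # sharing one -- this is the pattern the thread-safety guide
    # requires, and skipping it is a common cause of crashes.
    ctx = engine.create_execution_context()
    out = ctx.execute(data)
    with lock:
        results[tid] = out

threads = [threading.Thread(target=worker, args=(i, i + 1)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```

With real TensorRT, each `worker` would additionally call `cuda_ctx.push()` before inference and `cuda_ctx.pop()` after, so the CUDA context is current in that thread.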

Thanks!