Error in CUDA when trying to run inference via multiprocessing

Hello,

I want to use a Python multiprocessing process instead of a thread, as in the following forum thread.

I tried to replace 'class myThread(threading.Thread)' with:

class myThread(multiprocessing.Process):

    def __init__(self, func, args):
        multiprocessing.Process.__init__(self)
        self.func = func
        self.args = args

When I run test.py, I get the error below:

pycuda._driver.LogicError: cuCtxPopCurrent failed: initialization error

The error occurs at self.cfx.push() in the infer function of my_tensorrt_code.
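For context, in my_tensorrt_code the context is created once in __init__ and then pushed/popped around each inference, roughly like this (a simplified, untested sketch; the class name and body are schematic placeholders, only the self.cfx pattern matches my code):

import pycuda.driver as cuda

class TRTInference:  # hypothetical name; stands in for my actual class
    def __init__(self):
        cuda.init()
        self.cfx = cuda.Device(0).make_context()  # created in the parent process
        self.cfx.pop()  # make_context() leaves the new context current, so pop it

    def infer(self):
        self.cfx.push()  # this is the call that raises the LogicError in the child process
        try:
            pass  # TensorRT inference would run here
        finally:
            self.cfx.pop()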

Thanks
Ayad

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
The links below might be useful for you:
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#thread-safety

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
For multi-threading/streaming, we would suggest using DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.

Thanks!

Thank you for the info. I will definitely look into DeepStream, but I have a project I'm working on and need to deliver quickly. I just want to know how to modify the code in the thread to run with Python multiprocessing.
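Something like this is what I have in mind; a rough, untested sketch where all CUDA setup (and, I assume, the TensorRT engine deserialization, since engines do not appear to be picklable across processes) moves into the child's run(), using the spawn start method so the child does not inherit a forked CUDA state. infer_job here is a placeholder for my real inference function:

import multiprocessing
import pycuda.driver as cuda

class myProcess(multiprocessing.Process):
    def __init__(self, func, args):
        multiprocessing.Process.__init__(self)
        self.func = func
        self.args = args

    def run(self):
        # Runs in the child process: create the CUDA context here, not in the parent.
        cuda.init()
        ctx = cuda.Device(0).make_context()
        try:
            # The TensorRT engine would also be deserialized here, inside the child.
            self.func(*self.args)
        finally:
            ctx.pop()

def infer_job(batch):
    # Placeholder for the real inference code in my_tensorrt_code.
    print("running inference on", batch)

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")  # avoid forking an already-initialized CUDA parent
    p = myProcess(infer_job, ("dummy batch",))
    p.start()
    p.join()

Does this look like the right direction?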