Error in CUDA when trying to run inference via multiprocessing

Hello,

I want to use Python multiprocessing instead of a thread, as in the following forum thread.

I tried to replace `class myThread(threading.Thread)` with:

class myThread(multiprocessing.Process):

    def __init__(self, func, args):
        super().__init__()
        self.func = func
        self.args = args

When I run the modified code, I get the error below:

pycuda._driver.LogicError: cuCtxPopCurrent failed: initialization error

The error is raised at self.cfx.push() in the infer function of my_tensorrt_code.
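For reference, a minimal sketch of a working `multiprocessing.Process` subclass with the same `(func, args)` shape as the snippet above. Two details matter: the parent `__init__` must be called (the snippet above skips it, so `start()` would fail), and any CUDA context must be created inside the child process, not inherited from the parent. The worker function here is a hypothetical stand-in for the real inference call; the pycuda comments are an assumption about the typical setup, not taken from the original post.

```python
import multiprocessing

class MyProcess(multiprocessing.Process):
    def __init__(self, func, args):
        # The parent constructor must run, otherwise the Process
        # object is not initialized and start() raises an error.
        super().__init__()
        self.func = func
        self.args = args

    def run(self):
        # run() executes in the child process. In a real
        # TensorRT/pycuda app, create the CUDA context HERE
        # (e.g. `import pycuda.autoinit` or
        # `cuda.Device(0).make_context()`): a context created in
        # the parent is not usable in the child, which is one way
        # to get "cuCtxPopCurrent failed: initialization error".
        self.func(*self.args)

def square_into_queue(q, x):
    # Hypothetical worker standing in for the real infer() call.
    q.put(x * x)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    proc = MyProcess(square_into_queue, (queue, 6))
    proc.start()
    print(queue.get())  # 36
    proc.join()
```

If the parent process also touches CUDA before forking, using the "spawn" start method (`multiprocessing.set_start_method("spawn")`) is generally safer, since the child then starts with a clean interpreter and no inherited CUDA state.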



Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

The link below might be useful for you.
For multi-threading/streaming, we suggest using DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.


Thank you for the info. I will definitely look into DeepStream, but I have a project I'm working on and need to deliver quickly. I just want to know how to modify the code in the thread to run with Python multiprocessing.