I want to use Python multiprocessing instead of a thread, as in the following forum thread.
Purpose: so far I need to run TensorRT in a second thread.
I have read this document, but I still have no idea how exactly to do the TensorRT part in Python.
I already have a sample that runs successfully on TRT.
Now I just want to run some really simple multi-threaded code with TensorRT.
(I have already generated the TensorRT engine, so I will load the engine and do TensorRT inference with multi-threading.)
Here is my code below (without the TensorRT code).
I tried to replace the 'class myThread(threading.Thread)' with:

    class myProcess(multiprocessing.Process):
        def __init__(self, func, args):
            super().__init__()
            self.func = func
            self.args = args
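For context, the full threading-to-multiprocessing translation that snippet is aiming at would look roughly like the sketch below. `my_func` is a hypothetical stand-in for the real inference function, not code from the thread; the key point is that `run()` executes in the child process, while `__init__` still runs in the parent.

```python
import multiprocessing

class MyProcess(multiprocessing.Process):
    """Rough analogue of the original myThread(threading.Thread) class."""
    def __init__(self, func, args):
        super().__init__()
        self.func = func
        self.args = args

    def run(self):
        # Runs in the child process. Any CUDA context or TensorRT engine
        # must be created here, not in __init__ (which runs in the parent).
        self.func(*self.args)

def my_func(x, q):  # hypothetical stand-in for the inference function
    q.put(x * 2)

q = multiprocessing.Queue()
p = MyProcess(my_func, (21, q))
p.start()
p.join()
```

Unlike a thread, the child does not share the parent's memory, so results have to come back through a `Queue`, `Pipe`, or similar IPC channel.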
When I run test.py, I get the error below:

    pycuda._driver.LogicError: cuCtxPopCurrent failed: initialization error

at self.cfx.push() in the infer function of my_tensorrt_code.
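For reference, this error typically appears when a CUDA context created in the parent process is touched from a child process: the child inherits the handle but not a usable driver state. The usual fix is to create the context and deserialize the TensorRT engine inside each worker. Below is a minimal sketch of that pattern using only the standard library; the engine string and the comments are placeholders standing in for the real pycuda/TensorRT calls, not the poster's actual code.

```python
import multiprocessing as mp

def infer_worker(task_q, result_q):
    # Each worker builds its own CUDA state *inside* the child process.
    # Placeholder for: pycuda.driver.init(); ctx = device.make_context();
    # runtime.deserialize_cuda_engine(...). Nothing CUDA-related may be
    # created in the parent and then reused here.
    engine = "engine-loaded-in-child"     # stand-in for the deserialized engine
    for item in iter(task_q.get, None):   # None is the shutdown sentinel
        # Placeholder for: ctx.push(); run inference; ctx.pop()
        result_q.put((item, engine))

# For real CUDA work, prefer mp.get_context("spawn") so the child inherits no
# driver state from the parent; "fork" keeps this self-contained sketch simple.
ctx = mp.get_context("fork")
task_q, result_q = ctx.Queue(), ctx.Queue()
p = ctx.Process(target=infer_worker, args=(task_q, result_q))
p.start()
for i in range(3):
    task_q.put(i)
results = sorted(result_q.get() for _ in range(3))
task_q.put(None)  # tell the worker to exit
p.join()
```

The design point is that the parent only moves plain Python data through queues, while everything CUDA-related lives entirely in the worker process.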
Nvidia Driver Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Exact steps/commands to build your repro
Exact steps/commands to run your repro
Full traceback of errors encountered
The link below might be useful for you.
For multi-threading/streaming, we suggest using DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.
Discussions about the DeepStream SDK
Thank you for the info. I will definitely look into DeepStream, but I have a project I'm working on and need to deliver quickly. I just want to know how to modify the code in the thread to run on Python multiprocessing.