YOLOv7 inference using multiprocessing and TensorRT

We are running a YOLOv7 TensorRT engine for inference. When we run it without multiprocessing, it works fine. But when we run it inside a process created with Python's multiprocessing module, using a shared queue, it fails with the error shown in the attached screenshot.
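The exact error is not visible here (the screenshot is not attached), but a common cause of this failure pattern is that CUDA/TensorRT state created in the parent process does not survive a `fork`, which is the default multiprocessing start method on Linux. A minimal sketch of the usual workaround, using the `spawn` start method and creating the engine inside the child process: the function and queue names below are hypothetical, and the TensorRT calls are left as comments so the sketch runs without a GPU.

```python
import multiprocessing as mp

def inference_worker(engine_path, in_q, out_q):
    # Hypothetical worker: create the TensorRT runtime, engine, and CUDA
    # context INSIDE the child process. CUDA state inherited from the
    # parent via fork is invalid in the child. Stubbed out here so the
    # sketch runs without a GPU:
    # import tensorrt as trt
    # import pycuda.autoinit  # creates a CUDA context in THIS process
    # runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
    # with open(engine_path, "rb") as f:
    #     engine = runtime.deserialize_cuda_engine(f.read())
    while True:
        item = in_q.get()
        if item is None:                 # sentinel: shut down cleanly
            break
        out_q.put(("result", item))      # stand-in for engine inference

if __name__ == "__main__":
    # "spawn" starts the child with a fresh interpreter and no inherited
    # CUDA state; the Linux default "fork" is what typically breaks.
    ctx = mp.get_context("spawn")
    in_q, out_q = ctx.Queue(), ctx.Queue()
    p = ctx.Process(target=inference_worker,
                    args=("yolov7.engine", in_q, out_q))
    p.start()
    in_q.put("frame-0")
    print(out_q.get())
    in_q.put(None)                       # tell the worker to exit
    p.join()
```

The key points are that `mp.get_context("spawn")` is used instead of the default context, and that all GPU initialization happens in `inference_worker`, never in the parent before `p.start()`.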

Docker image: nvcr.io/nvidia/tensorrt:23.03-py3

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered


The links below might be useful.


For multi-threading/streaming, we suggest using DeepStream or Triton Inference Server.

For more details on DeepStream, we recommend raising the query on the DeepStream forum.


For Triton, raise the query in the Triton Inference Server GitHub issues section.