YOLOv7 inference using multiprocessing and TensorRT

Hi,
We are running a YOLOv7 TensorRT engine for inference. Without multiprocessing it works fine, but when we run it in a separate process using Python's multiprocessing module and a shared queue, it fails with the error shown in the attached screenshot.

Environment
Docker image: nvcr.io/nvidia/tensorrt:23.03-py3
TensorRT: 8.5.3.1
CUDA: 12.0
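
For reference, a minimal sketch of the kind of setup described above, using the "spawn" start method and creating the CUDA context and engine inside the worker process. The engine path, input shape, and single-output layout are placeholders for illustration, not our exact code.

```python
import multiprocessing as mp

import numpy as np
import pycuda.driver as cuda
import tensorrt as trt

ENGINE_PATH = "yolov7.engine"  # placeholder path


def worker(task_queue, result_queue):
    # CUDA is initialized in the child process itself (not inherited from the
    # parent), so all CUDA/TensorRT setup happens here.
    cuda.init()
    cuda_ctx = cuda.Device(0).make_context()
    try:
        logger = trt.Logger(trt.Logger.WARNING)
        with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
            engine = runtime.deserialize_cuda_engine(f.read())
        context = engine.create_execution_context()

        # One device buffer per binding; assumes static binding shapes,
        # binding 0 as the input, and the last binding as the single output.
        buffers = []
        for i in range(engine.num_bindings):
            dtype = trt.nptype(engine.get_binding_dtype(i))
            nbytes = trt.volume(engine.get_binding_shape(i)) * np.dtype(dtype).itemsize
            buffers.append(cuda.mem_alloc(nbytes))

        while True:
            frame = task_queue.get()
            if frame is None:  # sentinel -> shut down
                break
            cuda.memcpy_htod(buffers[0], np.ascontiguousarray(frame))
            context.execute_v2(bindings=[int(b) for b in buffers])
            out_dtype = trt.nptype(engine.get_binding_dtype(engine.num_bindings - 1))
            out = np.empty(trt.volume(engine.get_binding_shape(engine.num_bindings - 1)),
                           dtype=out_dtype)
            cuda.memcpy_dtoh(out, buffers[-1])
            result_queue.put(out)
    finally:
        cuda_ctx.pop()


if __name__ == "__main__":
    mp_ctx = mp.get_context("spawn")  # "spawn" avoids forking a CUDA-touching parent
    tasks, results = mp_ctx.Queue(), mp_ctx.Queue()
    p = mp_ctx.Process(target=worker, args=(tasks, results))
    p.start()
    tasks.put(np.zeros((1, 3, 640, 640), dtype=np.float32))  # dummy input frame
    print(results.get().shape)
    tasks.put(None)
    p.join()
```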

Hi,

The link below might be useful:

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
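
For context, per-stream execution with the TensorRT Python API looks roughly like the sketch below. The execution context and the device/pinned host buffers are assumed to already exist; this is an illustration, not a drop-in solution.

```python
import pycuda.driver as cuda


def infer_on_dedicated_stream(context, d_input, d_output, host_input, host_output):
    """Enqueue one inference on its own (non-default) CUDA stream.

    `context` is an already-created TensorRT IExecutionContext, and the
    d_*/host_* arguments are pre-allocated device / pinned host buffers --
    all assumptions made for the purpose of this sketch.
    """
    stream = cuda.Stream()                                    # dedicated stream
    cuda.memcpy_htod_async(d_input, host_input, stream)       # async H2D copy
    context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                             stream_handle=stream.handle)     # enqueue inference
    cuda.memcpy_dtoh_async(host_output, d_output, stream)     # async D2H copy
    stream.synchronize()                                      # wait on this stream only
    return host_output
```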

For multi-threading/streaming, we suggest using DeepStream or Triton Inference Server.

For more details, we recommend raising the query on the DeepStream forum or in the Triton Inference Server GitHub issues section.

Thanks!