Segmentation fault when running multiple processes with the same DeepStream pipeline in a container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla V100-PCIE
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) 470.129.06
• Issue Type (questions, new requirements, bugs) Segmentation Fault
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) Running multiple processes in parallel using the same DeepStream Python pipeline

Hi, we are trying to utilise all the available GPU RAM and would like to have multiple AI job workers that accept jobs from a queue. We have created 5 processes, each running the same DeepStream Python pipeline and listening on the queue for a job. When a job becomes available, one of the processes picks it up and runs inference on it; if more jobs arrive, the other idle pipeline processes pick them up and work on them in parallel. A simplified sketch of this layout is below.
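Roughly, the process layout looks like the following sketch (simplified; `build_pipeline` and `run_job` are placeholders for our actual DeepStream pipeline construction and job handling, and the sketch assumes Python's `multiprocessing` and the GStreamer Python bindings):

```python
import multiprocessing as mp

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

NUM_WORKERS = 5


def build_pipeline():
    # Placeholder: build the actual DeepStream pipeline here
    # (e.g. nvstreammux -> nvinfer -> ... via Gst.parse_launch or element-by-element).
    ...


def run_job(pipeline, job):
    # Placeholder: feed the job's input into the pipeline and collect inference results.
    ...


def worker(job_queue):
    # Each worker process initialises GStreamer and builds its own pipeline,
    # then blocks on the shared queue waiting for jobs.
    Gst.init(None)
    pipeline = build_pipeline()
    while True:
        job = job_queue.get()   # blocks until a job is available
        if job is None:         # sentinel to shut the worker down
            break
        run_job(pipeline, job)


if __name__ == "__main__":
    queue = mp.Queue()
    procs = [mp.Process(target=worker, args=(queue,)) for _ in range(NUM_WORKERS)]
    for p in procs:
        p.start()
```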

However, we are seeing the processes crash with a segmentation fault when they run in parallel and perform inference at the same time. We would like to understand whether there is a workaround for running multiple DeepStream pipeline processes in parallel.
Please note that both GPU RAM and CPU RAM were monitored during the testing, and neither reached even 60% of its capacity before the segmentation fault occurred, so the crash is definitely not caused by RAM being exhausted. We would appreciate your help on this.
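For debugging, we can also enable the standard-library `faulthandler` in each worker so the Python stack at the moment of the crash is dumped to stderr (a general debugging suggestion, not part of the original setup; it will not show the native DeepStream/GStreamer stack):

```python
import faulthandler
import sys

# Dump the Python traceback of every thread to stderr on SIGSEGV/SIGABRT.
faulthandler.enable(file=sys.stderr, all_threads=True)
```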

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

The information provided is not enough. Please provide simplified code to reproduce this issue, including the input and the configuration files.