Send inference results of a frame in a DeepStream pipeline to a second program running simultaneously on the same machine

Is there any way to send the information about objects detected by YOLO (in a DeepStream pipeline), for example bounding box coordinates, confidence score, etc., to a second program on the same machine? I know it is possible to use metadata to send the data to the Azure cloud (for example), but what about the local machine itself?

Hi @MGh,
Maybe ipcpipeline can be used for your case.
Note that if you run CUDA work in two processes on one GPU, it may degrade GPU performance, since the CUDA work of the two processes runs in two separate CUDA contexts by default and cannot run in parallel on one GPU.

Thanks!
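
For anyone who hasn't used it: ipcpipeline is a set of GStreamer elements (ipcpipelinesink, ipcpipelinesrc, ipcslavepipeline) shipped in gst-plugins-bad that split one logical pipeline across two processes connected by a socket. Below is a minimal Python sketch of the pattern, adapted from the upstream GStreamer example; videotestsrc and fakesink are just stand-ins for the DeepStream half and for whatever the second program does with the stream.

```python
# Minimal ipcpipeline sketch (not DeepStream-specific): one pipeline split
# across two processes over a full-duplex UNIX socket pair.
import os
import socket

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

# One socket pair carries buffers, events and queries between the two halves.
parent_sock, child_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

pid = os.fork()
if pid == 0:
    # ---- second process: receives the stream from the master pipeline ----
    parent_sock.close()
    Gst.init(None)
    slave = Gst.ElementFactory.make("ipcslavepipeline", None)
    src = Gst.ElementFactory.make("ipcpipelinesrc", None)
    sink = Gst.ElementFactory.make("fakesink", None)      # stand-in consumer
    src.set_property("fdin", child_sock.fileno())
    src.set_property("fdout", child_sock.fileno())
    slave.add(src)
    slave.add(sink)
    src.link(sink)
    # The slave pipeline's state follows the master's, so just run a loop.
    GLib.MainLoop().run()
else:
    # ---- master process: the pipeline that produces the data ----
    child_sock.close()
    Gst.init(None)
    master = Gst.Pipeline.new("master")
    src = Gst.ElementFactory.make("videotestsrc", None)    # stand-in for the DeepStream part
    ipc_sink = Gst.ElementFactory.make("ipcpipelinesink", None)
    ipc_sink.set_property("fdin", parent_sock.fileno())
    ipc_sink.set_property("fdout", parent_sock.fileno())
    master.add(src)
    master.add(ipc_sink)
    src.link(ipc_sink)
    master.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()
```

With this approach the second process receives the actual media stream; if you only need the detection metadata, a lighter option is to extract it with a pad probe and send it over a plain local socket, as sketched further down.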


Hi,
Aha, good points, thank you.

You can certainly pull the metadata out of the pipeline, put it in a queue or a socket, send it to a separate process, and work on it there (see the sketch below). You could also use NVIDIA's message broker elements (nvmsgconv/nvmsgbroker) to send the data to something running on the same machine. You just can't do any work on the same GPU DeepStream is using without hurting performance. You also can't easily send anything back without either waiting for it somewhere down the pipeline or at the next frame/batch.
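
To make the first suggestion concrete: a common pattern is a pad probe that walks NvDsBatchMeta and ships the detections over a local socket. The sketch below assumes the DeepStream Python bindings (pyds); the probe function name, the JSON layout, and the socket path /tmp/detections.sock are illustrative choices, not a fixed API.

```python
# Sketch of a buffer probe (e.g. attached to nvdsosd's sink pad) that reads
# detections from NvDsBatchMeta and forwards them as JSON datagrams over a
# UNIX socket to a second program on the same machine.
import json
import socket

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

import pyds

SOCKET_PATH = "/tmp/detections.sock"   # the second program binds this path

sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.setblocking(False)  # never stall the pipeline if the receiver is slow

def detections_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        objects = []
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            rect = obj.rect_params
            objects.append({
                "class_id": obj.class_id,
                "confidence": obj.confidence,
                "bbox": [rect.left, rect.top, rect.width, rect.height],
            })
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        message = {"frame": frame_meta.frame_num, "objects": objects}
        try:
            # Fire-and-forget: if nobody is listening, just drop the datagram.
            sock.sendto(json.dumps(message).encode(), SOCKET_PATH)
        except OSError:
            pass
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# During pipeline setup (element name is an example):
# osd = pipeline.get_by_name("nvdsosd0")
# osd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, detections_probe, None)
```

The second program is then just a plain socket listener, entirely outside the DeepStream process:

```python
# The "second program": binds the same UNIX socket and consumes the detections.
import json
import os
import socket

SOCKET_PATH = "/tmp/detections.sock"   # must match the sender

if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)

sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.bind(SOCKET_PATH)

while True:
    data, _ = sock.recvfrom(65536)
    msg = json.loads(data)
    print(f"frame {msg['frame']}: {len(msg['objects'])} detections")
```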


Great, thanks a lot.