How can two DeepStream applications communicate using NV12 buffers and metadata?

Hi, I’m quite new to DeepStream. Please guide me on the query below.

I am trying to divide an IVA use case into multiple parts:
Pre-processing - Inferencing - Draw/Display

I would like to run three different DeepStream applications, one for each logical part of the use case.

Can you guide me on how to proceed with this approach?
I am grouping a couple of pipelines together into one application, rather than having all plugins in a single pipeline under one application.
Is there a way to pass NV12 GstBuffers together with their metadata between DeepStream applications?

Hey, customer
I think normal linux IPC can be used for your use case.
But why do you need three apps to achieve your goal? Even if you really do need multiple apps, copying the NV12/GstBuffer between them is a bad idea, since it is an expensive operation.
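As a rough sketch of what "normal Linux IPC" could look like for the metadata side (the record fields below are made up for illustration, not taken from NvDsMeta), one process could send length-prefixed JSON records over a Unix domain socket and the other could parse them back:

```python
import json
import socket
import struct


def send_meta(sock: socket.socket, meta: dict) -> None:
    """Send one metadata record as length-prefixed JSON."""
    payload = json.dumps(meta).encode("utf-8")
    # 4-byte big-endian length header, then the JSON payload.
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_meta(sock: socket.socket) -> dict:
    """Receive one length-prefixed JSON metadata record."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping over short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf
```

In a real deployment the sender would serialize the fields it extracts from the batch metadata in a pad probe; the framing above is just one simple, self-describing option.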

Hi, thank you for the response.

I do agree that copying the NV12/GstBuffer can be problematic and adds overhead, but I would like to spawn applications dynamically based on the traffic and on the time taken by the inference plugin, since inference can get delayed.

So if we can decouple the plugins into different applications/containers, it could remove bottlenecks.

So is there a way to send the NV12/GstBuffer data from one application to another? Using IPC (TCP/UDP) we can send the raw bytes, but is there a common plugin/protocol that can parse the data on the receiving side?
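For the "common protocol" part, one candidate might be GStreamer's stock RTP raw-video elements, which payload raw frames for UDP transport and parse them back on the receiving side. A sketch (element names exist in gst-plugins-good; the host, port, and caps values are illustrative, and `rtpvrawpay` expects formats such as I420 rather than NV12, so a `videoconvert` may be needed first):

```shell
# Sender: payload raw I420 video as RTP (RFC 4175) and send it over UDP
gst-launch-1.0 videotestsrc ! "video/x-raw,format=I420,width=640,height=480" ! \
  rtpvrawpay ! udpsink host=127.0.0.1 port=5000

# Receiver: parse the RTP stream back into raw video frames
gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)RAW,sampling=(string)YCbCr-4:2:0,depth=(string)8,width=(string)640,height=(string)480" ! \
  rtpvrawdepay ! videoconvert ! autovideosink
```

Note that uncompressed RTP at full resolution is very bandwidth-heavy, and the DeepStream metadata attached to the buffers is not carried by these elements; it would need a separate side channel.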

Sorry, what does this mean?

Consider DeepStream application 1 with [decoder + streammux], DeepStream application 2 with [nvInfer + tracker], and DeepStream application 3 with [nvTiler + OSD + RTSP].

With the three applications above, I would like to have more instances of DeepStream application 2 and fewer of applications 1 and 3, since inferencing might take some extra time. By spawning different sets of applications, we can direct the traffic accordingly, which might speed up the processing.

So we need the DeepStream applications to communicate using NV12 buffers.
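One way the boundary between application 1 and application 2 could be sketched is with GStreamer's stock `shmsink`/`shmsrc` shared-memory elements (from gst-plugins-bad). The socket path, resolution, and element ordering below are illustrative assumptions, not a validated DeepStream configuration:

```shell
# Application 1: decode + mux, then hand raw NV12 frames to a shared-memory
# socket. nvstreammux outputs NVMM (device) memory, so nvvideoconvert is used
# to copy frames into system memory first -- this copy is the expensive step.
gst-launch-1.0 \
  uridecodebin uri=file:///path/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! "video/x-raw,format=NV12" ! \
  shmsink socket-path=/tmp/ds_app1 wait-for-connection=true

# Application 2: read NV12 frames back from shared memory, move them into
# NVMM memory again, and run inference.
gst-launch-1.0 \
  shmsrc socket-path=/tmp/ds_app1 is-live=true ! \
  "video/x-raw,format=NV12,width=1920,height=1080,framerate=30/1" ! \
  nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! \
  nvinfer config-file-path=config.txt ! fakesink
```

Be aware that `shmsink` moves only the buffer bytes: the NvDsMeta attached in application 1 is lost at the boundary and would have to be re-sent and re-attached via a side channel, on top of the GPU-to-CPU copy overhead mentioned above.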

Note: this is just one option we are looking at to increase performance.

Ok, we will review it internally and give you an update ASAP.

We don’t recommend doing that, since it will hurt performance. Please refer to Programming Guide :: CUDA Toolkit Documentation