• Hardware Platform (Jetson / GPU) → GPU
• DeepStream Version → 5.1
• TensorRT Version → 220.127.116.11
• NVIDIA GPU Driver Version (valid for GPU only) → 470.57.02
• Issue Type (questions, new requirements, bugs) → Question
Hello! I need help splitting an inference pipeline, ideally across different machines, but splitting across different processes would be a good start. The most important detail is that the first pipeline should only take care of the inference, without tiler, nvosd, encoder, etc…
When both pipelines are in the same process, I can easily do something like this:
The first pipeline reads from the sources and does the inference, sending the batched data directly to a sink:
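A rough sketch of what that first pipeline looks like, using RidgeRun's interpipesink as the hand-off point (the URI, muxer dimensions, config path, and the `infer_out` name are illustrative placeholders, not my exact setup):

```shell
# Pipeline 1: sources → batching → inference only, no tiler/OSD/encoder.
# The batched, inferred buffers end in an interpipesink for pipeline 2 to pick up.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  interpipesink name=infer_out sync=false forward-events=true forward-eos=true
```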
When it's needed, the second pipeline gets the buffers, draws the bboxes, and sends them to a sink:
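The second pipeline, sketched the same way: an interpipesrc listening to the first pipeline's sink, followed by the display-side elements (tiler dimensions and the sink element are illustrative):

```shell
# Pipeline 2: pick up batched buffers (with NvDsBatchMeta intact, since
# everything stays in-process), tile, draw the bboxes, and render.
gst-launch-1.0 \
  interpipesrc listen-to=infer_out is-live=true ! \
  nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink sync=false
```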
Using RidgeRun’s interpipe plugins, as in the example above, works great when I need to send data to another pipeline in the same process, but it would be great if I could do the same while sending data to another process or even another machine.
I tried to do the same thing as in the example above using:
- pipeline1 shmsink → pipeline2 shmsrc
- pipeline1 rtpgstpay + udpsink → pipeline2 udpsrc + rtpgstdepay
- pipeline1 ipcpipelinesink → pipeline2 ipcpipelinesrc
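For reference, the UDP variant looked roughly like this (host, port, and caps are illustrative, and I've elided the sources/inference and display elements to the same shapes as the pipelines above):

```shell
# Process 1: serialize the inferred buffers over RTP/UDP.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  rtpgstpay ! udpsink host=127.0.0.1 port=5000

# Process 2: depayload and try to continue with the DeepStream elements.
gst-launch-1.0 \
  udpsrc port=5000 caps="application/x-rtp,media=application" ! \
  rtpgstdepay ! nvmultistreamtiler width=1280 height=720 ! \
  nvvideoconvert ! nvdsosd ! nveglglessink sync=false
```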
But there’s no NvDsBatchMeta in the second pipeline when I look for it using probes, and when I try to add a tiler the app gives me this error:
0:00:01.229260442 328 0x25fb9e0 WARN nvmultistreamtiler gstnvtiler.cpp:606:gst_nvmultistreamtiler_transform: error: GstNvTiler: FATAL ERROR; NvDsMeta->NvDsBatchMeta missing in the input buffer
How can I send buffers with Deepstream metadata to a pipeline in another process?
Any help would be greatly appreciated.