Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1.21.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
```python
streammux.link(pgie)
pgie.link(face_embedding)
face_embedding.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(queue)
queue.link(nvvidconv2)
nvvidconv2.link(sink)

# create an event loop and feed gstreamer bus messages to it
loop = GObject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

# Add a probe on the primary-infer source pad to get inference output tensors
pgiesrcpad = pgie.get_static_pad("src")
if not pgiesrcpad:
    sys.stderr.write(" Unable to get src pad of primary infer \n")
pgiesrcpad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)

# Add a probe on the secondary-infer (face_embedding) source pad as well
sgiesrcpad = face_embedding.get_static_pad("src")
if not sgiesrcpad:
    sys.stderr.write(" Unable to get src pad of face_embedding \n")
sgiesrcpad.add_probe(Gst.PadProbeType.BUFFER, sgie_src_pad_buffer_probe, 0)
```
I have two custom nvinferserver elements, each performing a different task (pgie: detection, sgie: face recognition). After detection, I want to write a `sgie_src_pad_buffer_probe` that reads the sgie's output tensors (the face embeddings) from the buffer. How can I do this?
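For context, this is roughly the shape of probe I am trying to write. It is only a sketch: it assumes the sgie's nvinferserver config attaches raw tensors to the object metadata (`output_control { output_tensor_meta: true }`), and the 128-float embedding length is an assumption to be replaced with the model's real output size.

```python
import ctypes
import math

def l2_normalize(vec):
    """Scale a raw embedding to unit length before cosine comparison."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

def sgie_src_pad_buffer_probe(pad, info, u_data):
    # Deferred imports so this file can be parsed outside a DeepStream env.
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # For an sgie operating on objects, tensor output is attached
            # to each object's user metadata list.
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == \
                        pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(
                        user_meta.user_meta_data)
                    # First (and here, only) output layer of the sgie.
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    # 128 is an assumed embedding length.
                    embedding = l2_normalize([ptr[i] for i in range(128)])
                    # ... compare `embedding` against a gallery here ...
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

Is this the right way to read the sgie tensors, or does nvinferserver attach them somewhere else?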