If I attach my probe to sgie2.get_static_pad("src") it works and I get the right values from the NvDsObjectMeta object, but since I am cropping the detections with OpenCV I can't use that pad, because I get the following error:
get_nvds_buf_Surface: Currently we only support RGBA color Format
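To show the metadata walk I mean, here is a runnable sketch of the logic inside my sgie2 src-pad probe. pyds is mocked with a tiny stand-in class so it can run outside DeepStream; in the real probe the objects are pyds.NvDsObjectMeta instances reached through frame_meta.obj_meta_list:

```python
class ObjMeta:
    """Stand-in for pyds.NvDsObjectMeta (mock, for illustration only)."""
    def __init__(self, object_id, parent=None):
        self.object_id = object_id
        self.parent = parent

def parent_ids(obj_meta_list):
    """Collect parent.object_id for every detection that has a parent.
    On sgie2's src pad the parent pointer is populated; on nvosd's sink
    pad in my combined pipeline it comes back as None."""
    return [o.parent.object_id for o in obj_meta_list if o.parent is not None]
```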
I am already using nvvidconv1 and a capsfilter (filter1) to convert the frames to RGBA.
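The conversion section of my pipeline is roughly equivalent to this gst-launch sketch (the element names nvvidconv1 and filter1 are from my code; assuming nvvideoconvert is the converter behind nvvidconv1):

```
... ! nvvideoconvert name=nvvidconv1
    ! capsfilter name=filter1 caps="video/x-raw(memory:NVMM), format=RGBA"
    ! ...
```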
@yuweiw My code is based on deepstream-imagedata-multistream-redaction.py, where tiler_sink_pad = tiler.get_static_pad("sink") is used.
I removed the tiler and use nvstreamdemux instead, following deepstream_demux_multi_in_multi_out.py, where pgie_src_pad = pgie.get_static_pad("src") is used; I can't attach my probe there either, because I am cropping images with OpenCV.
I also combined it with deepstream-test4.py to send JSON payloads through Kafka; there osdsinkpad = nvosd.get_static_pad("sink") is used, but on that pad obj_meta.parent comes back as null.
I have already tested the three examples individually and they work fine.
My code is basically a combination of those three demos. I can see the RTSP output streams with bboxes for all input streams, and the messages are being sent through Kafka. Could you please help me look into it?
@yuweiw Using sgie2.get_static_pad("src") gives me the right output for obj_meta.parent.object_id, but it also gives me the error: get_nvds_buf_Surface: Currently we only support RGBA color Format.
I am converting the NumPy array of each detection to base64, which is why I need OpenCV.
Is there any other way to crop the detections and convert them to a base64 string without using OpenCV?
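For context, a minimal sketch of the crop-and-encode step I need: since the frame returned by pyds.get_nvds_buf_surface() is already a NumPy array, the crop itself is plain slicing and the standard-library base64 module does the encoding (this sketch skips the JPEG/PNG compression that cv2.imencode would do; the crop coordinates would come from obj_meta.rect_params in the real probe):

```python
import base64
import numpy as np

def crop_to_base64(frame: np.ndarray, left: int, top: int,
                   width: int, height: int) -> str:
    """Crop an RGBA frame with NumPy slicing and return the raw pixel
    bytes as a base64 string. In the real probe, left/top/width/height
    come from obj_meta.rect_params."""
    crop = frame[top:top + height, left:left + width]
    # ascontiguousarray() because a slice is only a view into the frame
    return base64.b64encode(np.ascontiguousarray(crop).tobytes()).decode("ascii")
```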
Referring to this thread, which is very similar to my case: there is no way to get the parent object after nvstreamdemux, which explains why I can only use sgie2.get_static_pad("src") to get the parent. How can I pass the detection images and the parent object when using nvstreamdemux?