• Hardware Platform (Jetson / GPU): Jetson Orin
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 8.4.1-1+cuda11.4
Hello, I am unable to get the parent object of my secondary detector:
print(obj_meta.parent.object_id)
AttributeError: 'NoneType' object has no attribute 'object_id'
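The AttributeError means obj_meta.parent is None for that object on this pad. For reference, a minimal, dependency-free sketch of the guard you would want in the probe (obj_meta here is a stand-in for pyds.NvDsObjectMeta; a real probe would iterate frame_meta.obj_meta_list via pyds, which is omitted):

```python
# Sketch of a null-safe parent lookup for a probe callback.
# `obj_meta` stands in for pyds.NvDsObjectMeta; the pyds iteration over
# batch/frame/object meta lists is omitted for brevity.
def parent_object_id(obj_meta):
    """Return the parent's object_id, or None when no parent is attached
    (e.g. when the probe runs on a pad where parent meta is not set)."""
    parent = getattr(obj_meta, "parent", None)
    if parent is None:
        return None
    return parent.object_id
```

Guarding like this turns the crash into a detectable condition, which also helps confirm on which pad the parent meta is actually populated.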
If I use sgie2.get_static_pad("src") it works and I get the right values for the NvDsObjectMeta object, but I am cropping the detections using OpenCV, so I can't use that pad because I get the following message:
get_nvds_buf_Surface: Currently we only support RGBA color Format
I am already using nvvidconv1 and filter1 to convert the frames to RGBA.
Could you try our demo code first and add a probe function to get the parent parameter?
1. You can find a similar demo here: https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps
2. Run the demo code and check whether the problem still occurs.
3. Modify your own code by referring to our demo code.
Thanks
@yuweiw My code is based on deepstream-imagedata-multistream-redaction.py, where tiler_sink_pad = tiler.get_static_pad("sink") is used.
I removed the tiler and used nvstreamdemux instead, following the example in deepstream_demux_multi_in_multi_out.py, where pgie_src_pad = pgie.get_static_pad("src") is used; I can't use that pad either, because I am cropping images using OpenCV.
I also combined it with deepstream-test4.py to be able to send JSON payloads through Kafka; there osdsinkpad = nvosd.get_static_pad("sink") is used, and probing on nvosd gives me null values for the obj_meta parent.
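For context, nvmsgconv builds the actual Kafka message, so the real schema is defined by DeepStream; purely as an illustration, a hand-rolled per-detection payload (all field names here are hypothetical, not the official nvmsgconv schema) could look like:

```python
import json

def detection_payload(stream_id, object_id, parent_id, bbox, image_b64):
    """Assemble a per-detection JSON payload for Kafka.
    Field names are illustrative only, not the official DeepStream
    message schema produced by nvmsgconv."""
    return json.dumps({
        "stream_id": stream_id,
        "object_id": object_id,
        "parent_object_id": parent_id,
        "bbox": {"left": bbox[0], "top": bbox[1],
                 "width": bbox[2], "height": bbox[3]},
        "image": image_b64,  # base64-encoded crop of the detection
    })
```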
I already tested the examples and the examples work fine.
My code is basically a combination of those three demos. I can see the RTSP stream outputs with bboxes for all input streams, and the messages are being sent through Kafka. Could you please help me look into it?
@yuweiw Using sgie2.get_static_pad("src") gives me the right output for obj_meta.parent.object_id, but it also gives me the error: get_nvds_buf_Surface: Currently we only support RGBA color Format.
I am converting the NumPy array of each detection to base64, which is why I need OpenCV.
Is there any other way to crop the detections and convert them to base64 strings, other than using OpenCV?
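If OpenCV is only needed for the crop-and-encode step, note that the crop itself is just array slicing and base64 is in the Python standard library. A hedged sketch with no third-party dependencies (it emits raw pixel bytes rather than JPEG/PNG, so the consumer would also need the crop's width, height and channel count to reconstruct the image):

```python
import base64

def crop_and_b64(frame, top, left, height, width):
    """Crop an H x W x C frame and base64-encode the raw pixel bytes.
    With a NumPy array this would be frame[top:top+height, left:left+width];
    plain nested lists are used here so the sketch has no dependencies."""
    crop = [row[left:left + width] for row in frame[top:top + height]]
    flat = bytes(c for row in crop for px in row for c in px)
    return base64.b64encode(flat).decode("ascii")
```

Encoding to an actual image format (JPEG/PNG) would still need a library such as Pillow, but it avoids pulling in OpenCV just for this step.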
Edit:
Referring to this thread, which is very similar to my case: there is no way to get the parent object after nvstreamdemux. That makes sense, so I can only use sgie2.get_static_pad("src") to get the parent. How can I pass the detection images and the parent object when using nvstreamdemux?
@user133662 We suggest you add an nvvideoconvert plugin before the SGIE to change the video format to RGBA. The parent parameter is NULL after the streammux, demux, tiler and videoconvert plugins.
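My reading of that suggestion, expressed as a link-order sketch: insert nvvideoconvert plus an RGBA capsfilter before the SGIE, so that a probe on the SGIE src pad sees RGBA buffers while the parent meta is still intact. Element names like pgie and sgie2 follow this thread; downstream elements (nvstreamdemux, encoders, sinks) are omitted.

```python
# Suggested link order (a sketch, not a runnable pipeline):
# nvvideoconvert + RGBA capsfilter sit upstream of the SGIE, so
# get_nvds_buf_surface() works on the SGIE src pad, where
# obj_meta.parent is still populated.
PIPELINE_ORDER = [
    "nvstreammux",
    "pgie",            # primary detector
    "nvtracker",
    "nvvideoconvert",  # converts e.g. NV12 -> RGBA
    "capsfilter",      # caps: "video/x-raw(memory:NVMM), format=RGBA"
    "sgie2",           # probe sgie2's src pad: RGBA frame + parent meta
]
```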
@yuweiw Thanks! Adding the nvvideoconvert plugin before the SGIE worked! I also had to move the nvmsgconv and nvmsgbroker plugins before the SGIE. I have another question:
In deepstream-imagedata-multistream-redaction.py the following nvvideoconvert plugins are created: