Unable to get parent for NvDsObjectMeta object

• Hardware Platform (Jetson / GPU)
Jetson Orin
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
5.0.2
• TensorRT Version
8.4.1-1+cuda11.4

Hello, I am unable to get the parent object from my secondary detector:

print(obj_meta.parent.object_id)
AttributeError: 'NoneType' object has no attribute 'object_id'
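The traceback shows that obj_meta.parent is None at that probe point. While debugging where the parent survives, a small defensive helper avoids the crash (parent_object_id is a hypothetical helper sketched here, not part of pyds):

```python
def parent_object_id(obj_meta):
    """Return obj_meta.parent.object_id, or None when no parent is attached.

    obj_meta.parent is None for objects that have no associated primary
    detection at this point in the pipeline, so it must be checked before
    dereferencing.
    """
    parent = getattr(obj_meta, "parent", None)
    return parent.object_id if parent is not None else None
```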

This is my pipeline:

# linking elements
queue.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(nvvidconv)
nvvidconv.link(nvosd)

nvosd.link(tee)
queue1.link(msgconv)
msgconv.link(msgbroker)

queue2.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

How can I fix this?

Which plugin did you add the probe function to? You could try changing the position of the probe.


@yuweiw I use :

nvosd_src_pad = nvosd.get_static_pad("src")
nvosd_src_pad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

@yuweiw

If I use sgie2.get_static_pad("src"), it works and I get the right values for the NvDsObjectMeta object. But I am cropping the detections using OpenCV, so I can't use that pad, because I get the following message:

get_nvds_buf_Surface: Currently we only support RGBA color Format

I am already using nvvidconv1 and filter1 to convert the frames to RGBA.

Could you try our demo code first and add a probe function to get the parent parameter?
1. You can find similar demos below:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps
2. Run the demo code and check whether the problem still occurs.
3. Modify your own code by referring to our demo code.
Thanks


@yuweiw My code is based on deepstream-imagedata-multistream-redaction.py, where tiler_sink_pad = tiler.get_static_pad("sink") is used.
I removed the tiler and used nvstreamdemux instead, following the example in
deepstream_demux_multi_in_multi_out.py, where pgie_src_pad = pgie.get_static_pad("src") is used. But I can't use that pad, because I am cropping images with OpenCV.

I also combined it with deepstream-test4.py to be able to send JSON payloads through Kafka; there osdsinkpad = nvosd.get_static_pad("sink") is used, and using nvosd gives me null values for the obj_meta parent.

I already tested the examples and they work fine.
My code is basically a combination of those three demos. I can see the RTSP stream outputs with bounding boxes for all input streams, and the messages are being sent through Kafka. Could you please help me look into it?

@yuweiw Using sgie2.get_static_pad("src") gives me the right output for obj_meta.parent.object_id, but it also gives me the error: get_nvds_buf_Surface: Currently we only support RGBA color Format.

I am converting the numpy array of the detection to Base64, which is why I need OpenCV.
Is there any other way to crop the detections and convert them to Base64 strings without using OpenCV?
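As a sketch of one OpenCV-free alternative (assuming the frame is available as an RGBA H x W x 4 numpy array, e.g. from pyds.get_nvds_buf_surface): plain numpy slicing can do the crop and the standard-library base64 module can do the encoding. Note this encodes raw RGBA bytes rather than a compressed image such as JPEG, so the receiver also needs the crop's width, height, and format to reconstruct it:

```python
import base64

import numpy as np


def crop_to_base64(frame, left, top, width, height):
    """Crop a detection from an RGBA frame (H x W x 4 uint8 numpy array)
    and return the raw pixel bytes as a Base64 string.

    No OpenCV involved: slicing does the crop, base64 does the encoding.
    """
    crop = frame[top:top + height, left:left + width]
    # ascontiguousarray guarantees tobytes() reads a contiguous buffer
    return base64.b64encode(np.ascontiguousarray(crop).tobytes()).decode("ascii")
```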

Edit:
Referring to this, which is very similar to my case: there is no way to get the parent object after nvstreamdemux. It makes sense that I can only use sgie2.get_static_pad("src") to get the parent. How can I pass the detection images and the parent object when using nvstreamdemux?

@user133662, we suggest you add an nvvideoconvert plugin before the SGIE to change the video format to RGBA. The parent parameter is NULL after the streammux, demux, tiler, and videoconvert plugins.
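A minimal sketch of that suggestion (the element names and the link_rgba_convert helper are assumptions for illustration, not from the demo code): insert an nvvideoconvert plus an RGBA capsfilter between the upstream element and the SGIE, so that probes on the SGIE's pads can still call pyds.get_nvds_buf_surface(). Gst is passed in as a parameter here only so the sketch stays self-contained; in a real app you would use from gi.repository import Gst directly.

```python
# Caps string that forces RGBA in NVMM device memory, which is what
# pyds.get_nvds_buf_surface() requires.
RGBA_CAPS = "video/x-raw(memory:NVMM), format=RGBA"


def link_rgba_convert(pipeline, upstream, sgie, Gst):
    """Wire upstream -> nvvideoconvert -> capsfilter(RGBA) -> sgie."""
    conv = Gst.ElementFactory.make("nvvideoconvert", "pre_sgie_convert")
    capsfilter = Gst.ElementFactory.make("capsfilter", "pre_sgie_caps")
    capsfilter.set_property("caps", Gst.Caps.from_string(RGBA_CAPS))
    pipeline.add(conv)
    pipeline.add(capsfilter)
    upstream.link(conv)
    conv.link(capsfilter)
    capsfilter.link(sgie)
    return conv, capsfilter
```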


@yuweiw Thanks! Adding an nvvideoconvert plugin before the SGIE worked! I also had to move the nvmsgconv and nvmsgbroker plugins before the SGIE. I have another question:

In deepstream-imagedata-multistream-redaction.py the following nvvideoconvert plugins are created:

nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")

and linked to the other elements in the pipeline:

print("Linking elements in the Pipeline \n")
streammux.link(pgie)
pgie.link(nvvidconv1)
nvvidconv1.link(filter1)
filter1.link(tiler)
tiler.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(nvvidconv_postosd)
nvvidconv_postosd.link(caps)
caps.link(encoder)
encoder.link(rtppay)
rtppay.link(sink)

I understand that nvvidconv1 is used to convert the video format to RGBA, but why are nvvidconv and nvvidconv_postosd needed?

You could open a new topic for the new question; that makes it easier for other people to find and refer to it. Thanks


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.