DeepStream 6.0 Metadata processing

Hello, I’m attempting to deep copy the frame meta so that I can handle it with a pub-sub approach in a different thread. When I iterate through the obj_meta_list in the other thread, I get a segmentation fault. Even though I check whether there are any objects before adding the data to the queue, I sometimes still get 0 when I print the num_obj_meta attribute. What could be the cause of this?

I have no issues handling the other attributes of the Frame struct, but I only encounter this error when processing metadata.
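Roughly, my probe does the following (a simplified sketch; frame_queue, the locking, and sink_pad_probe are placeholders for my actual pub-sub code, not DeepStream APIs):

```cpp
#include <gst/gst.h>
#include <mutex>
#include <queue>

#include "gstnvdsmeta.h"
#include "nvdsmeta.h"

/* Hand-off queue consumed by a worker thread (simplified here). */
static std::queue<NvDsFrameMeta *> frame_queue;
static std::mutex frame_queue_lock;

static GstPadProbeReturn
sink_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l = batch_meta->frame_meta_list; l; l = l->next) {
    NvDsFrameMeta *src = (NvDsFrameMeta *) l->data;
    if (src->num_obj_meta == 0)
      continue;                         /* only queue frames with objects */

    /* Deep copy before handing the frame meta off to the consumer thread. */
    NvDsFrameMeta *copy = nvds_acquire_frame_meta_from_pool (batch_meta);
    nvds_copy_frame_meta (src, copy);

    std::lock_guard<std::mutex> guard (frame_queue_lock);
    frame_queue.push (copy);            /* worker thread pops and serializes */
  }
  return GST_PAD_PROBE_OK;
}
```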
Thank you,
Chaki

• Hardware Platform (Jetson / GPU): NVIDIA RTX 2070
• DeepStream Version: 6.1
• TensorRT Version: 8.2.5.1

Could you use the gdb tool to check why it hit the segmentation fault?
What do you mean by this:

Sure. The problem is that by the time its turn comes, the obj_meta_list has become null. I pass it to the queue as an attribute of the object that the other threads process. The frame meta still gets overwritten, even though I deep copied it precisely to avoid that. I first check whether any objects have been detected, and only then add them to the queue.
[Screenshot from 2023-03-01 12-22-50]

Could you try to use nvds_copy_obj_meta to copy the object meta too?
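Something like this (a rough sketch; copy_objects, dst_batch_meta, and dst_frame are placeholder names for the batch and frame meta you copy into):

```cpp
#include "nvdsmeta.h"

/* Sketch: explicitly copy each object meta from src_frame into dst_frame,
 * which belongs to dst_batch_meta. */
static void
copy_objects (NvDsFrameMeta *src_frame, NvDsBatchMeta *dst_batch_meta,
              NvDsFrameMeta *dst_frame)
{
  for (NvDsMetaList *l = src_frame->obj_meta_list; l; l = l->next) {
    NvDsObjectMeta *src_obj = (NvDsObjectMeta *) l->data;

    NvDsObjectMeta *dst_obj = nvds_acquire_obj_meta_from_pool (dst_batch_meta);
    nvds_copy_obj_meta (src_obj, dst_obj);
    nvds_add_obj_meta_to_frame (dst_frame, dst_obj, NULL /* no parent */);
  }
}
```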

Do you mean looping through the obj_meta_list and copying each obj meta? Isn’t nvds_copy_obj_meta_list sufficient for copying the object meta?

Basically, nvds_copy_frame_meta already copies everything. But your problem looks like a shallow copy. Could you add some debug info about frame->frame_meta after the nvds_copy_frame_meta call?
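For example, printing something like this right after the copy (src and copy stand for your source and copied frame meta, as in the probe sketch above):

```cpp
/* Debug print after the copy: compare addresses and counts of the
 * source frame meta and the copy. */
g_print ("src  frame_meta=%p num_obj_meta=%u obj_meta_list=%p\n",
         (void *) src, src->num_obj_meta, (void *) src->obj_meta_list);
g_print ("copy frame_meta=%p num_obj_meta=%u obj_meta_list=%p\n",
         (void *) copy, copy->num_obj_meta, (void *) copy->obj_meta_list);
```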


The data appears to have been deep copied. I’ll do more debugging to figure out where it gets overwritten.

I added data breakpoints on the frame meta attribute num_obj_meta, and they show that the deep-copied frame meta is overwritten when I push the buffer to the downstream element.


So, did you add the meta back to the batch after copying it, with nvds_add_frame_meta_to_batch?

I’m not adding any data to the batch; I pass the incoming buffer to the downstream element without any transformation so the pipeline can continue, while I add the incoming data to a queue to be consumed by worker threads that save the images and metadata to disk.

If you haven’t added the meta to the batch, I don’t know why the data changes at gst_pad_push. That API doesn’t change any data; it just pushes the GstBuffer downstream. There may be a problem elsewhere in your code logic.
We suggest you refer to our source code for saving images, deepstream-image-meta-test:

osd_sink_pad_buffer_probe

It uses hardware acceleration, is more efficient, and doesn’t have to copy any metadata.
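The core of that probe looks roughly like this (a condensed sketch; see the sample app for the complete code, and note that the exact NvDsObjEncUsrArgs fields vary a bit between DeepStream versions):

```cpp
#include <gst/gst.h>

#include "gstnvdsmeta.h"
#include "nvbufsurface.h"
#include "nvds_obj_encode.h"

/* Encode every detected object as a JPEG with the hardware encoder.
 * `ctx` is created once at init time with nvds_obj_enc_create_context(). */
static void
encode_objects (GstBuffer *buf, NvDsObjEncCtxHandle ctx)
{
  GstMapInfo in_map_info;
  if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ))
    return;
  NvBufSurface *ip_surf = (NvBufSurface *) in_map_info.data;

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  for (NvDsMetaList *lf = batch_meta->frame_meta_list; lf; lf = lf->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) lf->data;
    for (NvDsMetaList *lo = frame_meta->obj_meta_list; lo; lo = lo->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) lo->data;

      NvDsObjEncUsrArgs user_data = { 0 };
      user_data.saveImg = TRUE;        /* write the crop as a JPEG file    */
      user_data.attachUsrMeta = TRUE;  /* also attach it as user metadata  */
      nvds_obj_enc_process (ctx, &user_data, ip_surf, obj_meta, frame_meta);
    }
  }
  nvds_obj_enc_finish (ctx);           /* wait for the asynchronous encodes */
  gst_buffer_unmap (buf, &in_map_info);
}
```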

I’m attempting to create my own plugin with a number of additional phases, but I’m still baffled as to why the data keeps getting overwritten.

We are baffled too. But the meta is bound to the GstBuffer, so maybe your asynchronous processing caused the problem. You could modify it to be synchronous and check whether the problem persists. Or you can add your patch to our open-source code and check it there.

To make it reproducible, I applied the same patch to NVIDIA DeepStream’s open-source plugin gst-dsexample: I simply added the generateJsonData function to my gst-dsexample_optimized.cpp file. I deep copy a sample frame_meta to a global variable only once, and every time I get a new buffer from an upstream element, I call generateJsonData on this variable. After a few calls I get a segmentation fault at line 813.
To see where an obj_meta changes, I set a data breakpoint. The debugger shows that it gets overwritten when I pass the buffer to the downstream element (gst_pad_push). I just don’t understand how a deep-copied object can be overwritten.
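In essence, the relevant part of the patch does this (a simplified reconstruction, not the full file; the real transform_ip has the usual dsexample processing around it):

```cpp
/* Forward declaration; defined elsewhere in my patch. It walks
 * g_sample_frame_meta->obj_meta_list and serializes it to JSON. */
static void generateJsonData (NvDsFrameMeta *frame_meta);

static NvDsFrameMeta *g_sample_frame_meta = NULL;   /* global, copied once */

static GstFlowReturn
gst_dsexample_transform_ip (GstBaseTransform *btrans, GstBuffer *inbuf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (inbuf);
  NvDsFrameMeta *src = (NvDsFrameMeta *) batch_meta->frame_meta_list->data;

  if (g_sample_frame_meta == NULL) {
    /* Deep copy one sample frame meta into the global, only once. */
    g_sample_frame_meta = nvds_acquire_frame_meta_from_pool (batch_meta);
    nvds_copy_frame_meta (src, g_sample_frame_meta);
  }

  /* Called on every new buffer; segfaults after a few buffers. */
  generateJsonData (g_sample_frame_meta);

  return GST_FLOW_OK;
}
```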


You can see the source code below to reproduce the same error

According to the documentation of this function: “In all cases, success or failure, the caller loses its reference to buffer after calling this function.” Actually, I was using nvds_copy_frame_meta for that very purpose. Why is there an nvds_copy_frame_meta function if I lose my reference when I execute it?

GStreamer is basically all pointers; you can’t copy the pointers to a global variable unless everything is synchronized.

As for your seg fault:

Use a std::shared_ptr or a std::unique_ptr.

This will ensure that the pointer stays alive and points to the correct memory in your generateJsonData function, even if the parent function goes out of scope.

Ensure you use std::move(), ref, and unref if you are going to be using the pointer in a new thread; otherwise you’ll have a gnarly memory leak.
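Something along these lines (a sketch, not drop-in code; handoff_to_worker is an illustrative name):

```cpp
#include <gst/gst.h>
#include <memory>
#include <thread>

/* Keep the GstBuffer alive for a worker thread by taking an extra ref and
 * letting a shared_ptr drop it when the last user is done. */
static void
handoff_to_worker (GstBuffer *buf)
{
  std::shared_ptr<GstBuffer> shared_buf (
      gst_buffer_ref (buf),
      [] (GstBuffer *b) { gst_buffer_unref (b); });

  std::thread ([shared = std::move (shared_buf)] () {
    /* The buffer is guaranteed to stay alive here; the extra ref is
     * released automatically when `shared` goes out of scope. */
    GstMapInfo map;
    if (gst_buffer_map (shared.get (), &map, GST_MAP_READ)) {
      /* ... read map.data / map.size, serialize metadata, etc. ... */
      gst_buffer_unmap (shared.get (), &map);
    }
  }).detach ();
}
```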

As for changing data:

Again, GStreamer is basically all pointers, so when you copy a pointer you aren’t copying the data itself, just the location in memory where that data is stored. So if you modify the copy, you also modify the original.

Many people have trouble understanding this.


If you acquire the metadata from the pool, it is still bound to the GstBuffer. So if you push the buffer to the next plugin, the metadata behind that pointer can change. You should create new metadata if you want to save the parameters, or design your own structure and save the parameters you use from the metadata yourself.
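For example, something like this (a sketch; ObjectRecord, FrameRecord, and snapshot_frame are your own names, not DeepStream types):

```cpp
#include <string>
#include <vector>

#include "nvdsmeta.h"

/* Copy just the fields you need into plain structs that you own,
 * independent of the GstBuffer / metadata pool lifetime. */
struct ObjectRecord {
  gint class_id;
  guint64 object_id;
  float left, top, width, height;
  std::string label;
};

struct FrameRecord {
  gint frame_num;
  guint source_id;
  std::vector<ObjectRecord> objects;
};

static FrameRecord
snapshot_frame (NvDsFrameMeta *frame_meta)
{
  FrameRecord rec;
  rec.frame_num = frame_meta->frame_num;
  rec.source_id = frame_meta->source_id;

  for (NvDsMetaList *l = frame_meta->obj_meta_list; l; l = l->next) {
    NvDsObjectMeta *obj = (NvDsObjectMeta *) l->data;
    rec.objects.push_back ({ obj->class_id, obj->object_id,
        obj->rect_params.left, obj->rect_params.top,
        obj->rect_params.width, obj->rect_params.height,
        std::string (obj->obj_label) });
  }
  return rec;
}
```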

