Hello, I’m attempting to deep copy the frame meta so that I can handle it in a different thread using a pub/sub approach. When I iterate through the obj_meta_list in another thread, I get a segmentation fault. Even though I check that there are objects before adding the data to the queue, I sometimes still get 0 when I print the num_obj_meta attribute. What could be the cause of this?
To clarify: the problem is that by the time the consumer thread processes it, the obj_meta_list has become NULL, because I pass it to the queue as an attribute of the object to be processed by other threads. The frame meta is still being overwritten, despite the fact that I deep copied it precisely to avoid that. I only add a frame to the queue after checking that objects have been detected.
I’m not intending to add any data to the batch; I pass the incoming data to the downstream element without any transformation so that the pipeline can continue, while I add the incoming data to a queue consumed by threads that save the images and meta to disk.
If you haven’t added the meta to the batch, I don’t know why the data changes at gst_pad_push. That API doesn’t modify any data; it just pushes the GstBuffer to the downstream element. There may be a problem elsewhere in your code’s logic.
We suggest you refer to our source code for saving images: deepstream-image-meta-test.
It uses hardware acceleration, is more efficient, and doesn’t need to copy any metadata.
We are baffled too. But the meta is bound to the GstBuffer, so your asynchronous processing may be causing the problem. You can try making the processing synchronous and check whether the problem persists. Or you can apply your patch to our open-source code and check it there.
To make it reproducible, I applied the same patch to NVIDIA DeepStream’s open-source plugin gst-dsexample: I simply added the generateJsonData function to my gst-dsexample_optimized.cpp file. I deep copy a sample frame_meta to a global variable only once, and every time I get a new buffer from an upstream element, I call generateJsonData on that variable. After a few calls I get a segmentation fault at line 813.
To see where an obj_meta gets overwritten, I set a breakpoint on it. The debugger shows that it is overwritten when I pass the buffer to the downstream element (gst_pad_push). I just don’t understand how a deep-copied object can be overwritten.
According to the documentation of this function: “In all cases, success or failure, the caller loses its reference to buffer after calling this function.” Actually, I was using nvds_copy_frame_meta for exactly this purpose. Why does nvds_copy_frame_meta exist if I lose my reference when I execute it?
GStreamer is basically all pointers; you can’t copy the pointers to a global variable unless everything is synchronized.
As for your seg fault:
Use a std::shared_ptr or a std::unique_ptr.
This ensures the pointer stays alive and points to valid memory in your generateJsonData function, even after the parent function goes out of scope.
If you are going to use the pointer in a new thread, make sure you use std::move() (and ref/unref the underlying GStreamer objects); otherwise you’ll have a gnarly memory leak.
As for changing data:
Again, GStreamer is basically all pointers, so if you copy a pointer you aren’t copying the data itself, only a location in memory where that data is stored. So if you modify the data through the copy, you also modify the original.
Metadata acquired from the pool is still bound to the GstBuffer, so after you push the buffer to the next plugin, the memory behind the metadata pointer can be recycled and change. If you want to save the parameters, create new metadata, or design your own structure and copy the fields you need out of the metadata yourself.