The plugin gst-nvmsgconv does not convert metadata to JSON format asynchronously. You can test this by putting a while loop in there to slow the frame rate down. Is there a way to compose JSON messages asynchronously? Would a callback function work?
Not sure which release and platform (Jetson platform or desktop GPU) you are using. If you don’t use DeepStream SDK 5.0, please upgrade and give it a try.
I am using the Jetson Nano and DeepStream SDK 5.0.
This question is more about the pipeline. I was told on here that the plugins operate in separate processes, but that does not appear to be the case when looking at gst-nvmsgconv. Could anyone expand on that?
Asynchronous message conversion is not supported in the message converter plugin by default. It is open source, so you would need to customize it.
My understanding is that message conversion is CPU bound and (should be) trivial. There’s probably no point doing it in another thread: you have to wait for the result anyway, since the next element is probably going to need it (and the metadata is locked).
If you need the conversion, and everything that follows, to be done in a separate thread, you can just use GStreamer’s queue element (and optionally a tee). But Nvidia’s broker can run the I/O bound stuff in its own worker thread anyway, so there’s not much point to that, and there is a small performance penalty.
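To illustrate the decoupling idea in plain Python (this is a hypothetical sketch, not the gst-nvmsgconv implementation): a probe-side producer pushes copied metadata onto a bounded queue, and a worker thread composes the JSON off the streaming thread, much like a queue element hands buffers to a downstream thread.

```python
import json
import queue
import threading

def msg_worker(in_q: "queue.Queue", out: list) -> None:
    """Drain metadata dicts and compose JSON off the streaming thread."""
    while True:
        meta = in_q.get()
        if meta is None:        # sentinel: shut down
            break
        out.append(json.dumps(meta))

def main() -> None:
    q: "queue.Queue" = queue.Queue(maxsize=64)  # bounded, like a queue element
    payloads: list = []
    t = threading.Thread(target=msg_worker, args=(q, payloads))
    t.start()

    # In a real probe you would copy the fields out of the batch metadata
    # before queuing, since the metadata is only valid inside the probe.
    for frame in range(3):
        q.put({"frame": frame, "objects": [{"id": i} for i in range(100)]})

    q.put(None)                 # tell the worker to exit
    t.join()
    print(len(payloads))        # prints 3: one message per frame

if __name__ == "__main__":
    main()
```

The bounded queue also gives you back-pressure: if JSON composition falls behind, the probe blocks on `put()` instead of growing memory without limit.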
Thanks, that was what I was looking for (I do it in a probe). It sounds like it doesn’t matter whether it’s done in a probe or using the open source plugin.
The reason I was worried was really scalability. If there are a hundred objects on screen, that is a hundred messages being composed before the frame completes. I’m assuming this doesn’t matter as long as you pick the right platform, e.g. a Nano for under 100 objects, a Xavier for under 1000.
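A quick back-of-envelope check suggests this scales fine: composing one JSON message per object for a 100-object frame takes well under a millisecond on typical hardware. The field names below are made up for illustration; a real schema would follow the broker payload you actually send.

```python
import json
import time

def compose_frame_messages(num_objects: int) -> list:
    """Compose one JSON message per detected object in a frame
    (hypothetical schema, for a rough timing estimate only)."""
    messages = []
    for obj_id in range(num_objects):
        messages.append(json.dumps({
            "object_id": obj_id,
            "bbox": {"left": 0.0, "top": 0.0, "width": 10.0, "height": 10.0},
            "confidence": 0.9,
        }))
    return messages

start = time.perf_counter()
msgs = compose_frame_messages(100)
elapsed_ms = (time.perf_counter() - start) * 1000.0
print(f"{len(msgs)} messages in {elapsed_ms:.3f} ms")
```

If the per-frame composition time stays well below your frame interval (33 ms at 30 fps), the conversion itself is not the bottleneck; the I/O to the broker is, and that is what runs in its own worker thread.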