I’m using the DeepStream Python API. I have used two detection models in a similar way to the deepstream-test2 Python example. Is there a way I can distinguish the models, print the model ID, and get the metadata separately from each model? I’ve set a separate gie-unique-id for each model and specified it in each model’s config file.
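One approach I’m considering is checking obj_meta.unique_component_id in a pad probe, since it should carry the gie-unique-id of the model that produced the detection. Here is a self-contained sketch of the routing logic I have in mind (plain Python stand-ins rather than actual pyds objects, and the IDs 1 and 2 are just my assumed config values):

```python
# Sketch: routing detections by the model that produced them.
# In DeepStream, each NvDsObjectMeta carries unique_component_id,
# which equals the gie-unique-id set in that model's config file.
# Simulated here with plain objects so the idea is self-contained.
from dataclasses import dataclass
from collections import defaultdict

PGIE_UNIQUE_ID = 1  # assumed gie-unique-id of the first detector
SGIE_UNIQUE_ID = 2  # assumed gie-unique-id of the second detector

@dataclass
class ObjMeta:                      # stand-in for pyds.NvDsObjectMeta
    unique_component_id: int
    obj_label: str

def split_by_model(obj_meta_list):
    """Group detections by the gie-unique-id of the model that made them."""
    per_model = defaultdict(list)
    for obj in obj_meta_list:
        per_model[obj.unique_component_id].append(obj.obj_label)
    return dict(per_model)

frame = [ObjMeta(PGIE_UNIQUE_ID, "car"),
         ObjMeta(SGIE_UNIQUE_ID, "face"),
         ObjMeta(PGIE_UNIQUE_ID, "person")]
print(split_by_model(frame))  # {1: ['car', 'person'], 2: ['face']}
```

In the real pipeline I would attach this as a buffer pad probe downstream of the second nvinfer and walk the frame’s object metadata list there.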
I’ve read that each time the nvinfer plugin is instantiated, a new instance of the model is created. Does that mean the models run multi-threaded, or do they run sequentially?
My concern is this: say I’m getting different latencies for the two models. If they run sequentially, one model might have to wait for the other to finish its inference (or vice versa), and that might lead to information loss. So, to summarise: is there a way to run inference for both models without running two separate scripts and without losing information?
Yes, there are multiple threads and multiple CUDA streams for the different models. You can refer to nvdsinfer_context_impl.cpp → queueInputBatch and dequeueOutputBatch.
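To illustrate the pattern, here is a rough Python analogue of that producer/consumer design: the caller enqueues input batches without blocking on inference, and a worker thread runs the model and posts results to an output queue. The class and method names here are illustrative only, not the actual DeepStream API:

```python
# Rough Python analogue of nvinfer's queueInputBatch/dequeueOutputBatch
# pattern: input batches are enqueued without blocking on inference,
# and a worker thread runs the model and posts results to an output
# queue. Names are illustrative, not the actual DeepStream API.
import queue
import threading

class InferContext:
    def __init__(self, model_fn):
        self._model_fn = model_fn
        self._in_q = queue.Queue()
        self._out_q = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            batch = self._in_q.get()
            if batch is None:           # sentinel: shut down the worker
                break
            self._out_q.put(self._model_fn(batch))

    def queue_input_batch(self, batch):   # analogous to queueInputBatch
        self._in_q.put(batch)

    def dequeue_output_batch(self):       # analogous to dequeueOutputBatch
        return self._out_q.get()

# Two contexts run their (fake) models on independent threads, so a
# slow model never stops a fast one from accepting new input.
fast = InferContext(lambda b: [x * 2 for x in b])
slow = InferContext(lambda b: [x + 1 for x in b])
fast.queue_input_batch([1, 2, 3])
slow.queue_input_batch([1, 2, 3])
print(fast.dequeue_output_batch())  # [2, 4, 6]
print(slow.dequeue_output_batch())  # [2, 3, 4]
```

In the real implementation each context also owns its own CUDA stream, so the GPU work for the two models can be issued independently as well.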
Thanks for clearing up my doubt. If a separate thread is created for each model, then the problem I mentioned won’t occur. My perception was that the models run sequentially because we link the primary detector to the secondary detector; for example, in the Python API we do pgie.link(sgie). I assumed that inference by the primary detector would take place first and only then would the secondary detector kick in.
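To convince myself, I sketched the overlap in plain Python: two linked stages each run on their own thread, connected by a queue, so the upstream stage (standing in for the pgie) keeps consuming new frames while the downstream stage (standing in for the sgie) is still busy. This is only a toy model of pipelining, not the GStreamer scheduler:

```python
# Toy demonstration that linked stages overlap in time: each stage runs
# on its own thread, connected by a queue, so the fast upstream stage
# keeps processing new frames while the slow downstream stage is busy.
# Plain Python, only meant to show the idea.
import queue
import threading
import time

events = []                     # completion log, appended under a lock
lock = threading.Lock()
link = queue.Queue()            # stands in for pgie.link(sgie)

def run_stage(name, cost, source, sink):
    while True:
        frame = source.get()
        if frame is None:       # end-of-stream sentinel
            if sink is not None:
                sink.put(None)
            return
        time.sleep(cost)        # pretend to run inference
        with lock:
            events.append((name, frame))
        if sink is not None:
            sink.put(frame)

feed = queue.Queue()
pgie = threading.Thread(target=run_stage, args=("pgie", 0.02, feed, link))
sgie = threading.Thread(target=run_stage, args=("sgie", 0.10, link, None))
pgie.start(); sgie.start()
for f in (1, 2, 3):
    feed.put(f)
feed.put(None)
pgie.join(); sgie.join()
print(events)
# The fast "pgie" finishes frame 3 well before the slow "sgie" finishes
# frame 1, so frames are not held back waiting for the pipeline to drain.
```

So linking the elements defines the order each frame flows through, but different frames can be in different elements at the same time.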