Getting separate metadata from different models

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): TITAN RTX
• DeepStream Version: 5.1
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 460
• Issue Type (questions, new requirements, bugs): questions

I’m using the DeepStream Python API. I have two detection models set up in a similar way to the deepstream-test2 Python example. Is there a way I can distinguish or print the model ID and get metadata separately from each model? I’ve set a separate gie-unique-id for each model and specified it in the config file.
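For reference, this is roughly how the IDs are set in my config files (the file names here are just placeholders for my actual configs):

```ini
# pgie_config.txt  (primary detector, placeholder name)
[property]
gie-unique-id=1

# sgie_config.txt  (secondary detector, placeholder name)
[property]
gie-unique-id=2
```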

Hey, good question. You can find the unique_component_id field under NvDsObjectMeta; see the NvDsObjectMeta page in the DeepStream 5.1 documentation.
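For example, here is a minimal pad-probe sketch that separates the metadata by model. It assumes gie-unique-id=1 for the primary and gie-unique-id=2 for the secondary nvinfer; adjust the constants to match your config files:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

PGIE_UNIQUE_ID = 1  # assumed: gie-unique-id in the primary detector config
SGIE_UNIQUE_ID = 2  # assumed: gie-unique-id in the secondary detector config

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # unique_component_id holds the gie-unique-id of the nvinfer
            # element that attached this object meta
            if obj_meta.unique_component_id == PGIE_UNIQUE_ID:
                print("model 1 object:", obj_meta.obj_label)
            elif obj_meta.unique_component_id == SGIE_UNIQUE_ID:
                print("model 2 object:", obj_meta.obj_label)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

Attach it to a pad downstream of both nvinfer elements, e.g. osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0).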


I’ve read that each time the nvinfer plugin is used, a new instance of the model is created. Does that mean inference is multi-threaded, or does it run sequentially?

My concern is that, say, I’m getting different latencies for the two models; if they run sequentially, one model instance might have to wait for the other to finish its inference (or vice versa), and that might lead to information loss. So, to summarise: is there a way I can run inference for both models without running two separate scripts and without losing information?

Yeah, there are multiple threads and multiple CUDA streams for the different models. You can refer to nvdsinfer_context_impl.cpp → queueInputBatch and dequeueOutputBatch.


Sorry, I didn’t get that.

Thanks for clearing up my doubt. If a separate thread is created for each model, then the problem I mentioned won’t occur. My assumption was that the models run sequentially because we link the primary detector to the secondary detector; for example, in the Python API we do pgie.link(sgie). So I thought inference from the primary detector would take place first and only then would the secondary detector kick in.
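For anyone who finds this later, a rough sketch of the pipeline section I was describing (element names and config paths are illustrative, not from a specific sample):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# Two nvinfer instances, one per model; config file paths are placeholders.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")   # gie-unique-id=1
sgie.set_property("config-file-path", "sgie_config.txt")   # gie-unique-id=2

pipeline.add(pgie)
pipeline.add(sgie)

# Linking only defines the buffer flow order; each nvinfer still queues
# and dequeues batches on its own threads and CUDA stream, so the second
# model is not serialized behind the first across batches.
pgie.link(sgie)
```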