Optional Secondary Model Inference

• Hardware Platform (Jetson / GPU) → NVIDIA GeForce GTX 1650
• DeepStream Version → 6.1
• JetPack Version (valid for Jetson only) → NA
• TensorRT Version → 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) → 515
• Issue Type (questions, new requirements, bugs) → Question

Hello, I have a primary detector and a secondary model that outputs a 512-byte feature vector (a recognition embedding).

I’ve also used a tracker from the Python sample applications.

My question is: it doesn't make sense to run the secondary model over and over on the same object when I already have a tracking ID for it. It would make sense to run the secondary model only once per unique object.

How can something like that be implemented? Many thanks.


I have the same problem; I need to specify which objects (by tracker ID) the secondary model will infer on.

Please refer: Probability correlation between nvtracker and nvinfer (sgie) - #5 by Amycao

Hello @kesong , Thanks for your answer.
So, just to confirm I have understood the answer: in the secondary model configuration I can set secondary-reinfer-interval (the re-inference interval for objects, in frames). If it is left unconfigured, its default value is INT_MAX, which means the secondary model will infer on each unique object only once. Please correct me if I'm wrong.
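As a sketch, the relevant part of the SGIE config file might look like the fragment below. The key names follow the nvinfer config-file conventions; the gie-unique-id / operate-on-gie-id values are illustrative placeholders for a typical PGIE-plus-SGIE setup:

```ini
[property]
# Run in secondary (classifier/embedding) mode on objects from the PGIE
process-mode=2
gie-unique-id=2
operate-on-gie-id=1
# Re-inference interval in frames. If this key is omitted it defaults to
# INT_MAX, so each tracked object is inferred on only once.
secondary-reinfer-interval=2147483647
```

Leaving `secondary-reinfer-interval` unset achieves the same once-per-object behavior, since INT_MAX is the default.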

Also what’s the correct sequence for this to work?
1- PGIE → Tracker → SGIE
2- PGIE → SGIE → Tracker

Thank you.

Yes, you are right. Sequence 1 (PGIE → Tracker → SGIE) is the correct one.
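For reference, that ordering corresponds to a pipeline like the gst-launch sketch below. Element properties, config file names, and the elided source/sink stages are illustrative only; the Python sample apps link the same elements programmatically:

```shell
# Tracker must sit between PGIE and SGIE so that objects already carry
# tracking IDs when the SGIE decides whether to (re-)infer on them.
gst-launch-1.0 ... ! nvstreammux name=mux batch-size=1 ! \
    nvinfer config-file-path=pgie_config.txt ! \
    nvtracker ll-lib-file=libnvds_nvmultiobjecttracker.so ! \
    nvinfer config-file-path=sgie_config.txt ! ...
```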
