Are there parameters needed to turn the infer plugin on?

I changed the primary-gie in the sample deepstream-app from detection to classification. But it seems only the first frame in every batch is actually inferred: the classification text is overlaid on that frame only. This happens even with the original sample sources.
Are there parameters needed to turn the infer plugin on?


• Hardware Platform (Jetson / GPU) → dGPU
• DeepStream Version → 5.1
• TensorRT Version → 7.2
• NVIDIA GPU Driver Version → 465.82
• Docker image → nvcr.io/nvidia/deepstream:5.1-21.02-triton
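
For reference, a minimal sketch of the config changes involved in running a classifier as the primary GIE. The filename `config_infer_primary_classifier.txt` is hypothetical; the parameter names (`network-type`, `process-mode`, `classifier-threshold`, `batch-size`) are standard Gst-nvinfer properties, but the values shown are assumptions for illustration, not a verified fix:

```ini
# deepstream-app config: point primary-gie at a classifier nvinfer config
[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_primary_classifier.txt   # hypothetical filename

# config_infer_primary_classifier.txt: key nvinfer properties
[property]
network-type=1            # 1 = classifier (0 = detector)
process-mode=1            # 1 = full-frame (primary); 2 = operate on objects
classifier-threshold=0.2  # example value; tune per model
batch-size=4              # assumption: should match [streammux] batch-size
```

One thing worth checking for the "only the first frame in a batch" symptom: if the nvinfer `batch-size` does not match the streammux `batch-size`, inference can behave unexpectedly across batched frames, so aligning the two is a reasonable first step to rule out.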