I am running inference in DeepStream and comparing two modes of camera capture: continuous and trigger.
The pipeline is pylonsrc → converter → streammux → nvinfer.
I observe a considerable slowdown in trigger mode:
continuous: 54 ms inference
trigger: 95 ms inference
Could it be that in trigger mode the Jetson GPU becomes idle, or drops into some other state that requires reinitialization before inference?
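For reference, a minimal gst-launch sketch of the pipeline described above. This is an assumption-laden reconstruction, not my exact command: the caps, resolutions, the `nvvideoconvert` element, and the `config_infer.txt` path are placeholders to be replaced with the actual setup.

```shell
# Hypothetical sketch of the pipeline: pylonsrc -> converter -> streammux -> nvinfer.
# Element properties (batch-size, width/height, config path) are placeholders.
gst-launch-1.0 \
  pylonsrc ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvinfer config-file-path=config_infer.txt ! \
  fakesink sync=false
```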
Please provide complete information as applicable to your setup. Also, what are the continuous and trigger modes for your camera?
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application it is for, and the function description.)
• The pipeline being used
After further experimentation:
jetson_clocks and a higher power mode seem to even out the difference between continuous and trigger mode.
I will share more details later if there are new findings.
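For anyone hitting the same symptom, the settings above can be applied roughly as follows. This is a hedged sketch: the nvpmodel mode IDs vary per Jetson module (check `/etc/nvpmodel.conf` on your board), and the commands are guarded so the script is harmless off-target.

```shell
# Pin clocks before benchmarking, so DVFS does not let the GPU ramp
# down between triggered frames. Mode IDs are board-specific; mode 0
# is often (not always) the maximum-power MAXN mode.
if command -v nvpmodel >/dev/null 2>&1; then
  sudo nvpmodel -q          # show the currently active power mode
  sudo nvpmodel -m 0        # select a high-power mode (verify ID for your board)
  sudo jetson_clocks        # lock CPU/GPU/EMC clocks at their maximum
else
  echo "nvpmodel not found: not running on a Jetson"
fi
```

With clocks locked, the gap between continuous and trigger mode largely disappears, which is consistent with the GPU clocking down during the idle periods between triggers.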