Does triton-deepstream support dynamic batching? How do I configure it?


• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5
• TensorRT Version: 7.2.1

Yes. You can set max_batch_size in the nvinferserver config file.
Doc: Gst-nvinferserver — DeepStream 6.1.1 Release documentation
Sample: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-infer-tensor-meta-test/inferserver/dstensor_pgie_config.txt
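
For reference, here is a minimal sketch of the relevant nvinferserver settings, assuming the DeepStream 5.1 prototext config format used by the sample above; the model name and repository path are placeholders, not values from your setup:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4          # upper bound on the batch nvinferserver requests from Triton
  backend {
    trt_is {
      model_name: "my_model"           # placeholder: your Triton model name
      version: -1                      # -1 selects the latest model version
      model_repo {
        root: "./triton_model_repo"    # placeholder: path to your model repository
        log_level: 2
      }
    }
  }
}
```

Triton's dynamic batcher itself is enabled per model in the model repository's config.pbtxt; a sketch with placeholder values:

```
name: "my_model"             # placeholder: must match model_name above
platform: "tensorrt_plan"
max_batch_size: 4            # should be >= max_batch_size in the nvinferserver config
dynamic_batching {
  preferred_batch_size: [ 2, 4 ]
  max_queue_delay_microseconds: 100   # how long Triton waits to assemble a larger batch
}
```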
