Request for Complete config.pbtxt Template and Documentation for nvinferserver in DeepStream

I’m working with DeepStream and the nvinferserver plugin, integrating with Triton Inference Server. I’m using a custom config.pbtxt file for my model configuration, similar to the example below:

name: "pose_classification_tao"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 300, 34, 1 ]
  }
]
output [
  {
    name: "fc_pred"
    data_type: TYPE_FP32
    dims: [ 6 ]
    label_filename: "labels.txt"
  }
]
dynamic_batching { }
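For context, a config.pbtxt like the one above lives inside a Triton model repository, next to the versioned model file it describes. A typical layout (directory and file names here are illustrative, matching the model name and labels file from the snippet) looks like:

```text
model_repository/
└── pose_classification_tao/
    ├── config.pbtxt
    ├── labels.txt
    └── 1/
        └── model.plan   # TensorRT engine for platform "tensorrt_plan"
```

Triton treats each numbered subdirectory as a model version; `labels.txt` is resolved relative to the model directory because of the `label_filename` field in the output spec.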

My question is:
Is there an official and complete template or documentation available for the config.pbtxt format used with the nvinferserver plugin? Specifically, I’m looking for a reference similar to the DeepStream documentation for GStreamer plugin properties, such as this page:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html#gst-properties

I’m hoping to find:

  • All supported parameters in the config.pbtxt
  • Accepted values and data types
  • Clear descriptions of how each parameter influences model execution or plugin behavior
  • Any DeepStream-specific extensions, constraints, or best practices

Current examples and guides often cover only partial use cases. A full reference or schema would be very helpful for customizing configurations and troubleshooting integration with complex models.

The “nvinferserver” plugin is one implementation of the Triton client. The config.pbtxt file is the configuration for the Triton server side, so its full reference is the Triton documentation rather than the DeepStream documentation. The Triton server documentation is in the triton-inference-server/server repository: The Triton Inference Server provides an optimized cloud and edge inferencing solution.
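To make the split concrete: the nvinferserver plugin has its own prototxt configuration file (passed via the plugin's config-file-path property), which references the Triton model by name; the config.pbtxt stays purely on the Triton side. A minimal sketch of the plugin-side file, assuming the model repository layout from the question (field names follow the DeepStream nvinferserver schema; all values are illustrative):

```text
# Plugin-side config for nvinferserver (NOT the Triton config.pbtxt).
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 16
  backend {
    triton {
      model_name: "pose_classification_tao"  # must match "name" in config.pbtxt
      version: -1                            # -1 selects the latest version
      model_repo {
        root: "./model_repository"           # directory containing the model folder
        strict_model_config: true
      }
    }
  }
  postprocess {
    labelfile_path: "labels.txt"
    classification {
      threshold: 0.5
    }
  }
}
```

So questions about infer_config fields belong to the DeepStream nvinferserver docs, while everything inside config.pbtxt (inputs, outputs, dynamic_batching, instance groups, etc.) is documented by Triton.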

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.