How to Properly Write Configuration Files for gst-nvinferserver?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)

I am working with the gst-nvinferserver plugin from NVIDIA DeepStream and have run into difficulties writing configuration files for models served by the Triton Inference Server. I need some help in setting up these configurations correctly.

  • What parameters and settings are crucial for the gst-nvinferserver configuration files and how should they be correctly specified?
  • How should tensor dimensions be properly set in these configurations?
  • Are there any working example configuration files for different models that can be used as references?

Please refer to the DeepStream sample app at /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1, which supports the nvinferserver plugin.
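For reference, a minimal nvinferserver configuration is a protobuf text file with an `infer_config` block. The sketch below is illustrative only: the model name, repository path, label file path, and class count are placeholders you must replace with your own values, and the exact fields available depend on your DeepStream version (see the nvinferserver plugin documentation and the sample configs shipped with deepstream-test1).

```
infer_config {
  unique_id: 1          # must be unique per inference element in the pipeline
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      model_name: "my_detector"          # placeholder: name of the model in the Triton repo
      version: -1                        # -1 = latest available version
      model_repo {
        root: "/path/to/model_repository"   # placeholder: local Triton model repository
        strict_model_config: true           # require an explicit config.pbtxt per model
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR    # NCHW; use TENSOR_ORDER_NHWC for channels-last models
    normalize {
      scale_factor: 0.00392156862        # 1/255, assuming the model expects [0,1] input
    }
  }
  postprocess {
    labelfile_path: "/path/to/labels.txt"  # placeholder
    detection {
      num_detected_classes: 4              # placeholder: set to your model's class count
      nms {
        confidence_threshold: 0.3
        iou_threshold: 0.5
        topk: 20
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME    # primary detector; use CLIP_OBJECTS for SGIE
  interval: 0                              # infer on every frame
}
```

Regarding tensor dimensions: with a Triton backend, the input and output shapes normally come from the model's own `config.pbtxt` in the model repository (or from Triton auto-complete when `strict_model_config` is false), and nvinferserver scales frames to the model's input dimensions during preprocessing, so you usually do not restate dims in the nvinferserver file itself.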

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.