Gst-nvinferserver plugin configuration for DINO FP32 model

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version: 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): question

Hi, I want to use the Gst-nvinferserver plugin in my DeepStream pipeline with a DINO FP32 model. Could you tell me which properties and parameters I need to set in the plugin's configuration file (protocol buffer format) specifically for this DINO FP32 model? In other words, which configuration fields are important to include for DINO FP32? Alternatively, any resources on how to write these protocol buffer configurations for different types of models would also be helpful.

Does the pgie_retail_object_detection_binary_dino_tao_config.txt sample meet your needs?
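In case it helps while you look at that sample, here is a minimal sketch of what an nvinferserver config for an FP32 detection model typically contains, in the plugin's protobuf text format. This assumes a Triton model-repository layout; the model name, paths, class count, and thresholds below are placeholders you would replace with the values for your own DINO export:

```protobuf
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "dino_fp32"        # placeholder: your model's name in the Triton repo
      version: -1                    # -1 = latest version
      model_repo {
        root: "./triton_model_repo"  # placeholder: path to your model repository
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 0.00392156862   # 1/255; adjust to match your model's training preprocessing
    }
  }
  postprocess {
    labelfile_path: "./labels.txt"  # placeholder: your label file
    detection {
      num_detected_classes: 4       # placeholder: set to your model's class count
      nms {
        confidence_threshold: 0.5
        iou_threshold: 0.4
        topk: 300
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME  # run as PGIE on full frames
  interval: 0
}
```

The exact preprocessing (scale factor, channel offsets, tensor order) and the postprocess parser must match how the DINO model was exported, so treat the shipped sample config as the authoritative reference and compare its values against this skeleton.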

I will check that out, thanks! :)