Inference configuration files for Deepstream for Tao Zoo models

Hi, I’m trying to use the TAO Model Zoo models with Triton and DeepStream via the nvinferserver component over gRPC.

I found the models and their pbtxt configuration files for Triton in GitHub - NVIDIA-AI-IOT/tao-toolkit-triton-apps: Sample app code for deploying TAO Toolkit trained models to Triton

and I found the configuration files for DeepStream in deepstream_reference_apps/README.md at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub. They work with the nvinfer component, but not with nvinferserver.

Can you provide these configuration files? I think they would be valuable for many people using DeepStream with Triton.

Regards.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)

• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)

Hi Fanzh,

Here are the details:

• Hardware Platform - GPU

• DeepStream Version 6.1.1

• TensorRT Version - multiple

• NVIDIA GPU Driver Version - multiple, we have different servers

• Issue Type - question

Sorry for the late reply. Currently, please refer to these nvinferserver gRPC samples: deepstream\deepstream\configs\deepstream-app-triton-grpc\source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt,
and GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream
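For anyone hitting the same question: the main difference between an nvinferserver config for native Triton (C API) and one for gRPC is inside the backend.triton block — the model_repo section is replaced by a grpc section pointing at a running Triton server. Below is a hedged sketch of such a config; the model name, label file, class count, thresholds, and normalization values are placeholders you would take from the model's pbtxt in tao-toolkit-triton-apps, not values from this thread.

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      # model_name must match the model directory name in the Triton model repository
      model_name: "my_tao_detector"   # placeholder
      version: -1
      grpc {
        # Triton's default gRPC port is 8001
        url: "localhost:8001"
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      # 1/255 — a common TAO normalization; verify against the model's spec
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    labelfile_path: "labels.txt"      # placeholder
    detection {
      num_detected_classes: 3         # placeholder
      nms {
        confidence_threshold: 0.3
        iou_threshold: 0.5
        topk: 20
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
```

This fragment is referenced from the deepstream-app or pipeline config via the config-file property of the nvinferserver element; the referenced sample source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt shows the full wiring.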