Migrating from nvinfer to nvinferserver over gRPC

• Hardware Platform (Jetson / GPU): GPU GTX 3080
• DeepStream Version: 6.1.1
• TensorRT Version: latest deepstream-triton NGC container
• NVIDIA GPU Driver Version (valid for GPU only): 515
• Issue Type (questions, new requirements, bugs): question and guidance

I am currently working on a project where I need to migrate from nvinfer to nvinferserver using gRPC. Can anyone provide guidance on how to create the necessary .pbtxt files and GIE configuration files for DeepStream?

Additionally, I am looking for a way to deploy pretrained models without retraining with TAO — mainly object detection for cars and people. What is the best way to go about this?

I would greatly appreciate any help or resources that can be provided. Thank you in advance.

Best,
Manuel Diez Silva

Please refer to the DeepStream nvinferserver gRPC sample: configs/deepstream-app-triton-grpc/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt. The first model detects objects; the following models classify each car's color, type, and maker.
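For the .pbtxt side, a minimal nvinferserver detector config for gRPC mode might look like the sketch below. The model name, Triton gRPC URL, class count, and label file path are assumptions you would replace with your own values; the key difference from the native (CAPI) sample is the `grpc { url: ... }` block in `backend.triton`, which points the plugin at a running Triton server instead of loading the model in-process:

```
# Sketch of an nvinferserver primary-detector config (gRPC mode).
# model_name, url, num_detected_classes and labelfile_path are assumptions.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    triton {
      model_name: "primary_detector"   # must match the model folder name in the Triton model repo
      version: -1                      # -1 = latest version
      grpc {
        url: "localhost:8001"          # Triton gRPC endpoint
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 0.0039215697       # 1/255, scale pixels to [0,1]
    }
  }
  postprocess {
    labelfile_path: "labels.txt"
    detection {
      num_detected_classes: 4
      nms {
        confidence_threshold: 0.3
        iou_threshold: 0.4
        topk: 20
      }
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME  # primary GIE runs on full frames
  interval: 0
}
```

The fields mirror the shipped config_infer_* files under configs/deepstream-app-triton-grpc/, so comparing against those samples is the safest way to confirm the exact preprocessing and clustering parameters for your model.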

Please refer to the DeepStream TAO detection samples: GitHub - NVIDIA-AI-IOT/deepstream_tao_apps (sample apps demonstrating how to deploy models trained with TAO on DeepStream).
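On the deepstream-app side, switching a pipeline from nvinfer to nvinferserver is mostly a matter of the `plugin-type` key in the GIE group: 0 selects nvinfer, 1 selects nvinferserver, and `config-file` then points at the .pbtxt-style nvinferserver config instead of the nvinfer one. A sketch, with the config file name as an assumption:

```
# Fragment of a deepstream-app config; the config-file name is an assumption.
[primary-gie]
enable=1
# 0 = nvinfer (TensorRT), 1 = nvinferserver (Triton)
plugin-type=1
config-file=config_infer_primary_detector_grpc.txt
```

The rest of the [primary-gie] group (gie-unique-id, batch-size, etc.) can stay as in the existing nvinfer setup, since batching and preprocessing details move into the nvinferserver .pbtxt config.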