When we integrate the NVIDIA Triton Inference Server into a DeepStream pipeline with the Gst-nvinferserver plugin (DeepStream version 6.4):
Do we also get features such as the Model Analyzer and Performance Analyzer tools?
Can we use REST API frameworks such as FastAPI within this plugin to handle the inference requests with the Triton Inference Server?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Triton is open source and provides the Model Analyzer and Performance Analyzer tools; please refer to model_analyzer and perf_analyzer.
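For reference, a typical perf_analyzer run against a Triton gRPC endpoint might look like the sketch below; the model name, endpoint, and concurrency range are placeholders for your setup, not values from this topic:

```
# Sweep client concurrency from 1 to 4 against a model already loaded in
# Triton, over gRPC. Replace your_model and the URL to match your deployment.
perf_analyzer -m your_model -u localhost:8001 -i grpc --concurrency-range 1:4
```

Note these are standalone Triton utilities that run against a Triton server instance; they are not features of the DeepStream plugin itself.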
Please refer to the nvinferserver documentation. nvinferserver supports the Triton C API and gRPC to handle inference requests with the Triton Inference Server. The nvinferserver plugin and its low-level library are open source. Please refer to the sample /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/deepstream-test1.
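As a rough illustration of the gRPC mode, the backend section of a Gst-nvinferserver config might look like the sketch below; the model name and URL are placeholders, and the exact schema should be checked against the plugin documentation for your DeepStream version:

```
infer_config {
  unique_id: 1
  max_batch_size: 1
  backend {
    triton {
      model_name: "your_model"  # placeholder: a model in Triton's repository
      version: -1               # -1 selects the latest available version
      grpc {
        url: "localhost:8001"   # placeholder: the Triton gRPC endpoint
      }
    }
  }
}
```

Since the plugin talks to Triton via the C API or gRPC, a REST front end such as FastAPI would sit outside the DeepStream pipeline and call Triton directly. A minimal Python sketch using the tritonclient gRPC API (the model name, tensor names, shape, and dtype are placeholders for your model):

```python
# Minimal sketch: send one inference request to Triton over gRPC using the
# tritonclient package (pip install tritonclient[grpc]).
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")

# Placeholder input: adjust name, shape, and dtype to your model's config.
data = np.zeros((1, 3, 224, 224), dtype=np.float32)
infer_input = grpcclient.InferInput("input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

result = client.infer(model_name="your_model", inputs=[infer_input])
print(result.as_numpy("output"))  # placeholder output tensor name
```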