Complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8+
• NVIDIA GPU Driver Version (valid for GPU only): 470
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing): Use one of the DeepStream gRPC samples and try connecting to a running Triton Inference Server.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I would like to connect my DeepStream application to a Triton Inference Server via the gRPC client. The two are running in individual containers hosted on separate pods.
However, whenever I launch the DeepStream app, I get a PGIE creation error. I deduced that the issue is primarily that DeepStream is also looking for the nvinferserver plugin locally. Practically it shouldn't matter, since inference happens on the remote server, but that's what I observed.
When I run my DeepStream app in, say, a deepstream-triton container, it works, because that container ships the Triton libraries by default and therefore the gst-nvinferserver plugin is available.
However, this is not the correct approach, and I would like my DeepStream pod to connect to the Triton (TRTIS) pod instead. Is there a way I can bypass the requirement for the gst-nvinferserver plugin in the DeepStream environment?
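For context, in the environment where gst-nvinferserver is available, pointing it at a remote Triton server over gRPC only requires the `grpc` block in the backend config. A minimal sketch of the nvinferserver config I am using (the model name and the `url` host/port are placeholders for my setup):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_model"        # placeholder model name
      version: -1                   # latest version
      grpc {
        url: "triton-service:8001"  # placeholder Triton gRPC endpoint
      }
    }
  }
}
```

With this config the actual inference runs on the Triton pod, which is why I expected the DeepStream pod to need only the gRPC client pieces rather than the full Triton libraries.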