DeepStream Container is unable to connect to Triton Inference Server Container through gRPC

Complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8+
• NVIDIA GPU Driver Version (valid for GPU only): 470
• Issue Type (questions, new requirements, bugs): Bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing): Use one of the DeepStream gRPC samples and try connecting to a running Triton Inference Server
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description):

I would like to connect my DeepStream application to a Triton Inference Server via the gRPC client. Both are running in individual containers hosted on separate pods.

However, whenever I launch the DeepStream app, I get a PGIE creation error. I deduced that this is primarily because DeepStream is also looking for the nvinferserver plugin locally. Practically it shouldn't matter for a remote gRPC connection, but that's what I observed.

When I run my DeepStream app in, say, a deepstream-triton container, it works, because the gst-nvinferserver plugin is available there; that container ships the Triton libraries by default.

However, this is not the approach I want; I would like my DeepStream pod to connect to the Triton (TRTIS) pod. Is there a way to bypass the requirement for the gst-nvinferserver plugin in the DeepStream environment?
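For context, the cross-pod wiring I have in mind looks roughly like this. This is only a sketch; the Service name, namespace defaults, and label selector are placeholders, and 8001 is Triton's default gRPC port:

```yaml
# Hypothetical Service exposing the Triton pod's gRPC endpoint so the
# DeepStream pod can reach it by name. "triton-infer" (both the Service
# name and the pod label) is an assumed placeholder.
apiVersion: v1
kind: Service
metadata:
  name: triton-infer
spec:
  selector:
    app: triton-infer
  ports:
    - name: grpc
      port: 8001        # Triton's default gRPC port
      targetPort: 8001
```

With something like this in place, the DeepStream pod would address the server as `triton-infer:8001`.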

What are these containers? Your own containers, or the DeepStream containers from DeepStream | NVIDIA NGC?

Please refer to Gst-nvinferserver — DeepStream 6.0.1 Release documentation for the gst-nvinferserver limitations.

I pulled two DeepStream containers from NGC:

  1. 6.0.1-base: houses the standalone DeepStream app
  2. 6.0.1-triton: used for running the Triton Inference Server

Please refer to Gst-nvinferserver — DeepStream 6.0.1 Release documentation for the gst-nvinferserver limitations.

Well, I have already gone through the doc multiple times. Alas, I cannot find any section on limitations. Could you please help me with that?

However, I did find this: "The message PluginControl in nvdsinferserver_plugin.proto is the entry point for this config-file", referring to the gst-nvinferserver configuration file.
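For reference, a minimal sketch of what such a config file might look like with a gRPC backend. The model name, URL, and values here are placeholders I made up for illustration, not from a working setup:

```
# Hypothetical gst-nvinferserver config (a PluginControl message in
# protobuf text format); model_name and url are assumed placeholders.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_model"      # placeholder
      version: -1
      grpc {
        url: "triton-infer:8001"  # placeholder address of the Triton pod
      }
    }
  }
}
```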

Does this mean it is mandatory to have the plugin in the DeepStream container in order to interact with a standalone Triton server?

Yes. DeepStream's nvinferserver plugin needs the Triton Inference Server (formerly TensorRT Inference Server) libraries.


Thanks for confirming.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.