How to add a Triton server to DeepStream on a different device?

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Xavier
• DeepStream Version: 5.0.1
• JetPack Version: 4.4
• TensorRT Version: 7.1
• Issue Type: new requirement

Hi,

I have gotten the nvinferserver plugin working in DeepStream with deepstream-app, e.g. deepstream-app -c source1_primary_detector_nano.txt, and it runs successfully.
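For context, the relevant part of that sample config is the [primary-gie] group, where plugin-type=1 selects nvinferserver (Triton) instead of nvinfer. This is a trimmed sketch; the referenced infer config file name is illustrative and may differ in your install:

```
[primary-gie]
enable=1
# plugin-type=1 selects nvinferserver (Triton); 0 would select nvinfer (TensorRT)
plugin-type=1
batch-size=1
interval=0
gie-unique-id=1
# nvinferserver protobuf config; file name is illustrative
config-file=config_infer_primary_detector_nano.txt
```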

However, I want to deploy the Triton server on a dGPU machine and DeepStream on the Jetson, with the whole pipeline getting inference results from the Triton server, including preprocessing and postprocessing.

Another solution would be to start different pipelines that share one nvinferserver instance on the Jetson or dGPU; I observed that a separate server is started for each pipeline I launch, which costs a lot of resources.
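For illustration, within a single process this sharing already works: one deepstream-app config with several [sourceN] groups and one [primary-gie] runs only one nvinferserver instance. A minimal sketch, with placeholder URIs and a trimmed key set:

```
# Two sources batched into one nvinferserver (Triton) instance.
[source0]
enable=1
# type=2 is a URI source
type=2
uri=file:///path/to/stream0.mp4

[source1]
enable=1
type=2
uri=file:///path/to/stream1.mp4

[streammux]
batch-size=2
width=1280
height=720

[primary-gie]
enable=1
plugin-type=1
batch-size=2
config-file=config_infer_primary_detector_nano.txt
```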

Can you tell me how to achieve this, or whether it is possible at all? Thanks!

DeepStream Triton only supports running on the local machine, that is, both the Triton client and server live in one DS instance.

But DS supports nvmsgbroker to communicate with a server; you could take a look at whether that can work for you.
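For reference, a message-broker sink is enabled in a deepstream-app config with a type=6 [sink] group (the deepstream-test5 sample shows the full flow). Broker address, topic, and port below are placeholders; this is a sketch only:

```
[sink1]
enable=1
# type=6 = message broker sink (nvmsgconv + nvmsgbroker)
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
# payload-type 0 = DeepStream JSON schema
msg-conv-payload-type=0
# Kafka adapter shipped with DS; other protocol adapters exist
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
# connection string: host;port;topic (placeholders)
msg-broker-conn-str=<broker-host>;<port>;<topic>
topic=<topic>
```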

Thanks!

Hi, mchi,

nvmsgbroker is not my ideal choice, as I want the inference capability of the server while keeping the DeepStream pipeline on the Jetson.

So if Triton only supports running on the local machine, I wonder whether multiple pipelines can use just one Triton server?

Thanks!

In DeepStream, it’s not supported.

OK, thanks!