Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version: 6.1.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs): Question
I want to run a Triton DeepStream server and use the deepstream rtsp-in-rtsp-out app so I can analyze a stream and stream the output onward. Do I also need to set up a Triton client with gRPC? Or what is the best approach to this?
The easiest way is to run it in the Triton docker image (nvcr.io/nvidia/deepstream:6.x.x-triton), which already has the Triton server installed. It runs as a local server, so a gRPC client is not needed. Of course, you can also use a dedicated Triton server; in that case you need to configure gRPC and the remote-server settings. You can refer to the nvinferserver documentation for details.
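For the local-server case, launching the container usually looks something like the sketch below. The image tag and port are examples only; match the tag to your DeepStream version, and note that --net=host is the simple way to expose the RTSP output of rtsp-in-rtsp-out. The command is stored in a variable here so you can inspect it before running it on a machine with the NVIDIA Container Toolkit installed.

```shell
# Example tag -- replace with the -triton image matching your DeepStream release.
IMAGE="nvcr.io/nvidia/deepstream:6.1.1-triton"

# --gpus all exposes the GPU to the container; --net=host lets the
# RTSP output stream of rtsp-in-rtsp-out be reached from the host.
CMD="docker run --gpus all -it --rm --net=host $IMAGE"

echo "$CMD"   # review, then run on a host with nvidia-container-toolkit
```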
So if I want to have a remote Triton server, I also need to set up a Triton client in order to send the stream to the server, right?
So basically, if I follow what the documentation says here:
In addition to the native Triton server, gst-nvinferserver supports the Triton Inference Server running as an independent process. Communication with the server happens through gRPC. Config files to run the application in gRPC mode are located at
samples/configs/deepstream-app-triton-grpc. Follow the instructions in
samples/configs/deepstream-app-triton-grpc/README to run the samples.
I should be able to set up communication between my app and the Triton server and analyze streams via gRPC.
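For reference, pointing gst-nvinferserver at a remote Triton server is done in the nvinferserver config file (protobuf text format). A minimal sketch of the relevant section is below; the model name and URL are placeholders, and the rest of the config (preprocessing, postprocessing, etc.) is omitted:

```
infer_config {
  backend {
    triton {
      model_name: "my_model"      # placeholder: model deployed on the remote server
      version: -1                 # -1 selects the latest version
      grpc {
        url: "10.0.0.5:8001"     # placeholder: remote Triton gRPC endpoint
      }
    }
  }
}
```

With a `grpc { url: ... }` block present, the plugin acts as the gRPC client itself, so no separate Triton client application is needed on your side.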
Is this still an issue to support? Thanks
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.