Mixed predictions from Triton Inference Server when several DeepStream pipelines are connected via gRPC

• GTX 1060, RTX 2070, RTX 3060 Ti
• DeepStream container: nvcr.io/nvidia/deepstream:6.0.1-triton
• NVIDIA GPU Driver Version 470, 510
• Issue Type: bug

My pipeline contains an nvinferserver plugin that connects via gRPC to a Triton server running in a container (nvcr.io/nvidia/tritonserver:21.08-py3) and receives model predictions from it.
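For reference, an nvinferserver configuration in gRPC mode looks roughly like the sketch below. The model name, URL, and batch size are placeholders, not the exact values from my setup:

```
infer_config {
  unique_id: 5          # should be unique per GIE instance in a pipeline
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_sgie_model"   # placeholder
      version: -1                   # latest version
      grpc {
        url: "triton-host:8001"     # placeholder; Triton's gRPC port
      }
    }
  }
}
```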

When only one DeepStream pipeline is connected to the Triton server via gRPC, everything is OK, but when I run several pipelines, I periodically get mixed predictions.

What could be the reason for this behavior?

Are the tensors returned from the two models mixed?

I get mixed tensors only from the SGIE. When I replace the nvinferserver plugin with nvinfer, everything is OK.

Each pipeline has an SGIE, and every SGIE uses nvinferserver to connect to the remote Triton Server, right?
What do you mean by pipeline? Does every SGIE run the same model?

By pipeline I mean a chain of plugins (as in GStreamer).

Can you share the code where you check the output?

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.