Nvinferserver Triton randomly returning wrong results from one model

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version: 525.147.05

Hello, we are building an application that runs several classification model requests in sequence within the same pipeline, served by Triton 23.01. For instance, we have one model that checks whether a frame contains a dark cloud, another that checks whether the sky is blue, and another that classifies the weather.

The first two models use the same input size, but the third one has a different input size. We have discovered that whenever the chain includes one model whose input size differs from the others, one of the models gives wrong results. In this case, the weather model has a 299x299 input, while the other two models have 384x540 inputs.

We have done some debugging, but we cannot see why one model always gives bad results. If we remove the weather model from the chain, the other two models produce the expected results.
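For reference, here is a simplified sketch of how one of the chained nvinferserver configs is structured. The model name, gRPC URL, and field values shown are illustrative placeholders, not our exact files; each instance in the chain has its own `unique_id` and its own preprocess settings:

```
# Sketch of the weather model's nvinferserver config (299x299 input,
# third element in the chain). Values are illustrative.
infer_config {
  unique_id: 3                # distinct per chained nvinferserver instance
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "weather"   # placeholder name
      version: -1
      grpc { url: "localhost:8001" }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    normalize { scale_factor: 0.00392157 }
  }
}
```

The other two configs differ only in `unique_id`, `model_name`, and the model's own input dimensions.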

Could this be a bug in the nvinferserver plugin? How can we check whether the results returned by Triton are correct?
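One way we thought of to verify this is to query Triton directly over its KServe v2 HTTP API, bypassing DeepStream entirely, and compare the answers against what the pipeline reports. Below is a minimal sketch using only the standard library; the model name `weather`, input tensor name `input_1`, port 8000, and dims are assumptions for illustration (the real names can be read from `GET /v2/models/<name>/config` on the running server):

```python
# Sketch: send one preprocessed frame straight to Triton's KServe v2
# HTTP endpoint so its answer can be compared with the DeepStream output.
# Model name, tensor name, dims, and port are placeholder assumptions.
import json
import urllib.request

TRITON_URL = "http://localhost:8000"
MODEL = "weather"  # placeholder model name


def build_infer_request(data, shape, input_name="input_1"):
    """Build a KServe v2 inference request body for one FP32 input tensor."""
    return {
        "inputs": [{
            "name": input_name,   # must match the name in the model config
            "shape": shape,
            "datatype": "FP32",
            "data": data,         # flattened tensor values
        }]
    }


def infer(frame_chw):
    """Send one flattened CHW frame to Triton and return the parsed response."""
    body = json.dumps(build_infer_request(frame_chw, [1, 3, 299, 299])).encode()
    req = urllib.request.Request(
        f"{TRITON_URL}/v2/models/{MODEL}/infer",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # With a running Triton you would call infer(...) on a real frame;
    # here we only show the request body that would be sent.
    payload = build_infer_request([0.0] * (3 * 299 * 299), [1, 3, 299, 299])
    print(payload["inputs"][0]["shape"])
```

If the direct responses are correct for the same frames, the problem would point at the pipeline side (preprocessing or tensor-meta handling) rather than the models themselves.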

I hope someone can help me with this. Thank you!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
What is the main media pipeline? Are you using nvinferserver in CAPI or gRPC mode? Can you try the latest DS 6.4? Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.