Incoherent results when running multiple consecutive classification models

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, I’m running a DeepStream 6.2 application on a Linux GPU setup, using the DeepStream Python bindings. The pipeline runs 3 consecutive inferences with 3 different classification models served by Triton Inference Server (23.12). The problem I’m facing is that the classification results for models 2 and 3 are reasonable, while the results from the first model make no sense: its output scores don’t add up to 1 and it seems to be producing random results. You might think the problem is related to that particular model, but if I change the order of the models, models that were previously working start giving strange results, and vice versa.
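For illustration, the check here is simply that a softmax classification output should sum to ~1; a minimal sketch with placeholder values (in the real app the vector comes from the Triton response):

```python
import numpy as np

# Placeholder scores for one frame; in the real app this vector comes
# straight from the Triton classification response.
probs = np.array([0.12, 0.85, 0.03], dtype=np.float32)

# For a softmax classification head the scores should sum to ~1.0.
# Models 2 and 3 pass this check; model 1's output does not.
assert abs(probs.sum() - 1.0) < 1e-3, f"scores sum to {probs.sum():.4f}, not ~1"
print("top-1:", int(probs.argmax()), "confidence:", float(probs.max()))
```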

If I run only one model in this pipeline, all three work perfectly individually (I checked each of the 3 models on its own).

Do you have any clue what could be happening there?

Thank you.

  1. Could you share the whole media pipeline? Which sample are you testing or referring to?
  2. Are the three models all classification models? If using models 1->2->3, do you mean model 1 can’t output the right results while the other two can? Can you check whether all outputs are right when using only models 1->2?

Sure,

  1. The pipeline is: Urisrcbin, muxer, nvvidconv, inferenceserver1, inferenceserver2, inferenceserver3, nvvidconv, osd, sink (a rough sketch of how it is wired up in Python follows after this list).

  2. If using only models 1 and 2, model 2 works and model 1 doesn’t.
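For reference, a minimal sketch of how that chain might be wired up with the GStreamer/DeepStream Python bindings. The config file paths, URI, muxer settings, and the use of uridecodebin as the source element are placeholders/assumptions, and the dynamic source-to-muxer pad linking is omitted:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("triple-classifier")

def make(factory, name):
    """Create an element, add it to the pipeline, and fail loudly if missing."""
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        raise RuntimeError(f"failed to create {factory}")
    pipeline.add(elem)
    return elem

source = make("uridecodebin", "source")        # stands in for the urisrcbin above
muxer  = make("nvstreammux", "muxer")
conv1  = make("nvvideoconvert", "conv1")
infer1 = make("nvinferserver", "inferenceserver1")
infer2 = make("nvinferserver", "inferenceserver2")
infer3 = make("nvinferserver", "inferenceserver3")
conv2  = make("nvvideoconvert", "conv2")
osd    = make("nvdsosd", "osd")
sink   = make("fakesink", "sink")

source.set_property("uri", "file:///path/to/video.mp4")             # placeholder URI
muxer.set_property("batch-size", 1)
muxer.set_property("width", 1920)                                   # placeholder resolution
muxer.set_property("height", 1080)
infer1.set_property("config-file-path", "config_infer_model1.txt")  # placeholder configs
infer2.set_property("config-file-path", "config_infer_model2.txt")
infer3.set_property("config-file-path", "config_infer_model3.txt")

# uridecodebin exposes its pads dynamically, so linking source -> muxer
# happens in a pad-added callback requesting muxer's sink_0 pad (omitted here).
muxer.link(conv1)
conv1.link(infer1)
infer1.link(infer2)
infer2.link(infer3)
infer3.link(conv2)
conv2.link(osd)
osd.link(sink)
```

Each nvinferserver instance points at its own Triton config, so the only thing the three models share is the batched buffers and metadata flowing through the chain.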

Thank you.

If only using model 1, the results are right; if using models 1->2, model 1’s results become wrong. Please check why adding model 2 affects model 1. You can add a src pad probe function on inferenceserver1 and inferenceserver2 respectively, then check at which point the model’s results become wrong.
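A probe along these lines could be attached to both src pads. This is an untested sketch: it assumes the classifier metadata is attached to object meta, as in the secondary-classifier samples, and that `infer1`/`infer2` refer to the two nvinferserver elements; adjust if the classifiers run in full-frame mode and attach their meta elsewhere:

```python
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def classifier_src_probe(pad, info, user_data):
    """Print the classifier results present in the batch meta at this pad."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    element_name = pad.get_parent_element().get_name()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label = pyds.NvDsLabelInfo.cast(l_label.data)
                    print(f"{element_name}: frame={frame_meta.frame_num} "
                          f"component={cls_meta.unique_component_id} "
                          f"label={label.result_label} prob={label.result_prob:.3f}")
                    l_label = l_label.next
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach the same probe to the src pads of the first two nvinferserver elements,
# so the metadata can be compared before and after the second inference.
for elem in (infer1, infer2):
    elem.get_static_pad("src").add_probe(
        Gst.PadProbeType.BUFFER, classifier_src_probe, None)
```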

I’m logging the results gathered from Triton right after I get the response. Triton seems to be responding with bad results… but I don’t understand the reason.

Can you try the latest DeepStream 6.4?

Hi again,

I have another clue that might help you find the root cause: if all the models in the chain have the same input shape, the inference results are fine. As soon as one of the models in the chain has a different input shape, the problems start.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please refer to deepstream-test2, which does not have this issue: the first model’s input shape is 3x368x640, while the other models’ input shape is 3x224x224.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.