Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Hi, I'm running a DeepStream 6.2 application on a Linux GPU using the DeepStream Python bindings. My pipeline runs 3 consecutive inferences with 3 different classification models, all served by Triton Inference Server (23.12). The problem I'm facing is that the classification results for models 2 and 3 are reasonable, while the results from the first model make no sense: its output probabilities don't sum to 1 and appear to be random. You might think the problem is related to that model; however, if I change the order of the models, models that previously worked start giving strange results, and vice versa.
If I run only one model in the pipeline (I checked each of the 3 models this way), every model works perfectly on its own.
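As a quick sanity check (assuming you can dump the raw output tensor of the first model, e.g. by enabling `output_tensor_meta` in the nvinferserver `output_control` block), you can test whether the values even look like post-softmax probabilities, or whether they are raw logits that were never normalized. A minimal, self-contained sketch of that check:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def looks_like_probabilities(values, tol=1e-3):
    """Heuristic: post-softmax outputs are non-negative and sum to ~1."""
    return all(v >= 0.0 for v in values) and abs(sum(values) - 1.0) <= tol

# Raw logits fail the check; softmax'd values pass it.
raw = [2.0, 1.0, 0.1]
print(looks_like_probabilities(raw))           # raw logits -> False
print(looks_like_probabilities(softmax(raw)))  # probabilities -> True
```

If the first model's tensor fails this check only when the other models are present, that points at the pipeline/configuration (e.g. tensors being mixed up between the inference stages) rather than at the model itself.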
Do you have any clue what could be happening here?
Thank you.