Since the tensor output is the model output, you need to check whether your classifier model actually produces output when the issue happens. Even if the detector's output is correct, that does not mean the classifier model can classify the object correctly. The classification result is determined by the model, not by DeepStream.
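As a rough illustration (not an official sample), a pad probe on the classifier's src pad can confirm whether any tensor output or parsed classification result is attached to each object. This sketch assumes a Python pipeline with the pyds bindings, that the SGIE config sets output-tensor-meta=1, and that `sgie` is your classifier nvinfer element; those names are assumptions about your pipeline.

```python
# Minimal sketch, assuming pyds is installed and the SGIE runs with
# output-tensor-meta=1. The element name `sgie` is hypothetical.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds


def sgie_src_pad_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    # Note: the official deepstream_python_apps samples wrap these casts and
    # .next accesses in try/except StopIteration for robustness.
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)

            # 1) Raw tensor output attached by the classifier
            #    (only present when output-tensor-meta=1 in the SGIE config).
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if (user_meta.base_meta.meta_type
                        == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META):
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(
                        user_meta.user_meta_data)
                    print(f"object {obj_meta.object_id}: "
                          f"{tensor_meta.num_output_layers} SGIE output layer(s)")
                l_user = l_user.next

            # 2) Parsed classification result, if the classifier parser
            #    produced a label for this object.
            l_class = obj_meta.classifier_meta_list
            while l_class is not None:
                class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
                l_label = class_meta.label_info_list
                while l_label is not None:
                    label_info = pyds.NvDsLabelInfo.cast(l_label.data)
                    print(f"object {obj_meta.object_id}: "
                          f"label={label_info.result_label} "
                          f"prob={label_info.result_prob:.3f}")
                    l_label = l_label.next
                l_class = l_class.next

            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK


# Hypothetical attachment point on your classifier element:
# sgie.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, sgie_src_pad_probe, None)
```

If no tensor meta and no label info show up for the affected objects, the classifier model itself is not producing usable output for those crops, which is what you need to verify.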