SGIE gets wrong output tensor meta

• Hardware Platform (Jetson / GPU) 1080Ti
• DeepStream Version 6.0.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470
• Issue Type( questions, new requirements, bugs)
Hi, my pipeline is pgie → sgie → … I use NvInferServer for the pgie and NvInfer for the sgie, but the output tensor meta I get at the sgie is wrong. Concretely, the sgie's output tensor meta is a vector; suppose I get 2 vectors from 2 objects and subtract one from the other. Sometimes the result is 0, meaning the outputs are identical even though the 2 objects are different, so my pipeline doesn't work correctly. I also tried using NvInfer for the pgie, but the result is the same. What's happening here?
I look forward to getting help from everyone. Thanks very much!
Those are my config file:
config_infer_primary_detector_r50.txt (2.8 KB)
config_recognition.txt (376 Bytes)
source1_primary_recognition_kafka.txt (6.4 KB)
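For reference, the check described above (subtracting two per-object output vectors and testing whether the difference is zero) can be sketched in plain Python. The vectors and tolerance below are hypothetical stand-ins for the tensors read from the sgie's tensor meta; in a real pad probe you would copy each tensor out of the pooled buffer before storing it, since the buffer may be reused for the next object:

```python
# Compare two per-object output vectors from the sgie.
# In a real probe, copy each tensor out of the NvDsInferTensorMeta
# buffer first (e.g. np.array(..., copy=True)), because the
# underlying output buffer is pooled and may be recycled.

def max_abs_diff(vec_a, vec_b):
    """Element-wise subtraction, reduced to the largest absolute difference."""
    return max(abs(a - b) for a, b in zip(vec_a, vec_b))

def outputs_identical(vec_a, vec_b, tol=1e-6):
    """True if the two object embeddings are numerically the same."""
    return max_abs_diff(vec_a, vec_b) <= tol

# Hypothetical embeddings for two detected objects:
obj1 = [0.12, -0.48, 0.33]
obj2 = [0.12, -0.48, 0.33]   # identical -> the symptom described above
print(outputs_identical(obj1, obj2))  # prints True
```

If `outputs_identical` returns True for two genuinely different objects, the vectors were most likely read from the same (recycled) buffer rather than produced identically by the model.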

Can you check if the output of your model is reasonable?

Well, I don’t see what you mean by that, but for more details: my sgie works when I use nvinferserver, but it freezes when 3 or more objects appear, so I tried to use nvinfer instead and ran into the issue above.

Please try a larger output buffer pool size; see the Gst-nvinferserver — DeepStream 6.0.1 Release documentation.
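For a freeze once several objects are inferred per frame, the pool-size setting lives in the `extra` block of the nvinferserver config's `infer_config`. A minimal sketch (the value 12 is illustrative, not a recommendation):

```
infer_config {
  # ... model and backend settings ...
  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 12  # raise this when many objects are processed per frame
  }
}
```

If the pool is smaller than the number of outstanding per-object outputs, the element can stall waiting for a free buffer.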

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.