Hardware Platform: Jetson Orin Nano 4GB
DeepStream Version: 6.2
I used DeepStream-Yolo to quantize a YOLOv5s model to INT8, and in theory it can reach about 120 fps. However, when I run this YOLOv5 detection model on 4 RTSP streams in DeepStream, the pipeline latency keeps growing and the output images become distorted.
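To make the numbers concrete, this is the rough frame budget I am assuming (a small sketch; the 30 fps per-stream rate is an assumption and may not match my actual sources):

```python
# Rough frame-budget check for 4 RTSP streams (assumes 30 fps sources).
streams = 4
fps_per_stream = 30                      # assumed source frame rate
required_fps = streams * fps_per_stream  # 120 frames/s total, roughly the
                                         # theoretical maximum of the engine
batch_budget_ms = 1000 / fps_per_stream  # time each batch of 4 frames has
print(required_fps, batch_budget_ms)     # 120, ~33.3 ms
```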
I suspect the delay comes from model inference. I profiled the DeepStream pipeline with Nsight Systems and got the following report:
Some of my batches take about 4 ms, while others take close to 500 ms.
The figure above shows that some layers of my model take a very long time, something that never happens with single-RTSP-stream detection, and the slow layers are not always the same ones.
If the model itself were the problem, I would expect similar issues with a single RTSP stream, but in practice that delay never occurs there. I have attached the full report file above. Can you help me analyze where the problem lies?
This is my DeepStream test code, together with the ONNX model. The post-processing is generated with DeepStream-Yolo (GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.1 / 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models).
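For readers without the attachment, the batching-related part of the pipeline looks roughly like this (a minimal sketch, not the actual attached test code; the mux resolution, the push timeout, and the config file name are assumptions on my part):

```python
# Minimal sketch of the batching settings for the 4 RTSP sources
# (assumed values; the real test code is in the attachment).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux collects one frame per source into a single batched buffer.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", 4)                # one slot per RTSP stream
streammux.set_property("width", 1280)                  # assumed mux resolution
streammux.set_property("height", 720)
streammux.set_property("batched-push-timeout", 40000)  # 40 ms, in microseconds
streammux.set_property("live-source", 1)               # RTSP sources are live

# nvinfer runs the INT8 YOLOv5s engine on each 4-frame batch, using the
# DeepStream-Yolo config and its custom bounding-box parser.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_yoloV5.txt")
pgie.set_property("batch-size", 4)                     # must match the mux batch size
```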