What is the real-time performance of a large model like YOLOv5?

I previously ran the official deepstream-test sample, and the model it loads produces very smooth video output.
When I switch to a larger model like YOLOv5s, the output video becomes slow and choppy. Have you tested real-time operation with a model like YOLOv5s?
The device is a Jetson TX2 and the DeepStream version is 6.0.
Is this a problem with my configuration, or should real-time performance be achievable in theory?

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

You can first benchmark the model on its own with the "trtexec" tool. If the model itself is too heavy to run fast enough, you can skip inference on some frames by setting the "interval" parameter of nvinfer to achieve higher throughput. See Gst-nvinfer — DeepStream 6.1.1 Release documentation.
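As a rough sketch of the two suggestions above (file names and paths here are placeholders, adjust them for your setup; on Jetson, trtexec is typically installed under /usr/src/tensorrt/bin):

```shell
# Benchmark the standalone model with trtexec.
# --fp16 is usually needed to get close to real time on a TX2;
# the reported "GPU Compute Time" gives you the per-frame inference latency.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov5s.onnx \
    --fp16 \
    --iterations=100
```

If the per-frame latency is too high for your source frame rate, skip frames in the nvinfer config file, e.g.:

```
[property]
# interval=N means N frames are skipped between inferences
# (0 = infer every frame, 1 = infer every other frame, ...)
interval=1
```

Detections from the last inferred frame are reused on skipped frames, so a small interval often smooths playback with little visible loss in tracking quality.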