CenterFace inference speed drops when multiple RTSP streams are connected

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson): Jetson Nano
• DeepStream Version: 5.1
• JetPack Version: 4.5
• TensorRT Version: 7.x
Hi!
I successfully run the CenterFace model on a Jetson Nano and get correct results (13 FPS). However, when four 1080p RTSP streams are connected, the inference speed drops to 3 FPS. How can I improve my inference speed?
I am using the TensorRT model provided by deepstream_triton_model_deploy/centerface at master · NVIDIA-AI-IOT/deepstream_triton_model_deploy · GitHub.
I made my changes based on the Python example deepstream-test3. These are my configuration parameters (a simplified sketch follows).
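Roughly, this is a minimal sketch of the kind of properties I adjusted in the deepstream-test3 pipeline for four sources; the config file name and the exact values here are placeholders rather than my exact settings:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

NUM_SOURCES = 4  # four 1080p RTSP streams

# Batch all four streams into one buffer so the engine infers them together.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("batch-size", NUM_SOURCES)
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("live-source", 1)               # RTSP inputs are live sources
streammux.set_property("batched-push-timeout", 40000)  # usec; push a partial batch instead of stalling

# Primary inference element running the CenterFace TensorRT engine.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "centerface_pgie_config.txt")  # placeholder file name
pgie.set_property("batch-size", NUM_SOURCES)  # should match the batch size the engine was built for

# Rendering sink; not syncing to the clock avoids throttling the pipeline.
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink.set_property("sync", 0)
sink.set_property("qos", 0)
```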

For performance improvement, please refer to Troubleshooting — DeepStream 6.1.1 Release documentation.
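As a rough sketch of a couple of typical knobs from that guide (element names follow the Python deepstream-test3 sample; the interval value is just an example, not a recommendation for your exact setup):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Skip batches on the primary inference element: with interval=1, inference
# runs on every other batched frame, roughly halving the GPU load.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("interval", 1)

# Use a non-synchronizing sink while measuring, so rendering does not pace the pipeline.
sink = Gst.ElementFactory.make("fakesink", "fps-measure-sink")
sink.set_property("sync", 0)
sink.set_property("qos", 0)

# On Jetson, lock the clocks to maximum before measuring FPS (run from a shell):
#   sudo nvpmodel -m 0
#   sudo jetson_clocks
```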