Latency in streaming through RTSP and saving video (splitmuxsink) through a tee

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 7.1

ubuntu@ip-172-31-7-39:~/sriharsha/videos/testing$ nvidia-smi
Tue Feb  1 05:54:48 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   30C    P0    25W /  70W |   1467MiB / 15109MiB |     22%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6066      C   python3                          1335MiB |
+-----------------------------------------------------------------------------+

We have a Python DeepStream implementation that streams (with inferencing) over RTSP and saves the video with splitmuxsink.

Kindly suggest ways to reduce the latency caused by saving the videos.

Sorry for the late response. Is this still an issue that needs support? Thanks

Do you mean you need to improve the end-to-end latency of the whole pipeline? If so, please refer to Troubleshooting — DeepStream 6.3 Release documentation
