• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) N/A (dGPU, Tesla T4)
• TensorRT Version 7.1
ubuntu@ip-172-31-7-39:~/sriharsha/videos/testing$ nvidia-smi
Tue Feb 1 05:54:48 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   30C    P0    25W /  70W |   1467MiB / 15109MiB |     22%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6066      C   python3                          1335MiB |
+-----------------------------------------------------------------------------+
We have a Python DeepStream implementation that runs inference on an RTSP stream and saves the video to disk via splitmuxsink.
Could you suggest ways to reduce the latency caused by saving the videos?
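For context, the layout we are trying to achieve is roughly the following: split the stream with a `tee` right after `h264parse`, put a leaky `queue` at the head of each branch so slow disk I/O in the recording branch cannot stall the inference branch, record the already-encoded stream (no re-encode), and enable `async-finalize` on `splitmuxsink` so file finalization does not block the pipeline. This is a hypothetical sketch of the pipeline description only, not our full application; the element names and properties are standard GStreamer, but the overall arrangement and the `build_pipeline` helper are assumptions for illustration:

```python
# Hypothetical sketch: decouple recording from inference with tee + leaky
# queues so splitmuxsink disk I/O cannot add latency to the RTSP branch.
def build_pipeline(rtsp_uri: str, out_pattern: str = "chunk_%05d.mp4") -> str:
    """Return a gst-launch-1.0 style description with two decoupled branches."""
    return (
        f"rtspsrc location={rtsp_uri} latency=100 ! rtph264depay ! "
        "h264parse ! tee name=t "
        # Inference branch: decode and run the usual DeepStream elements.
        # leaky=downstream drops old buffers instead of blocking upstream.
        "t. ! queue leaky=downstream max-size-buffers=4 ! nvv4l2decoder ! "
        "nvstreammux-and-nvinfer-elements-here "  # placeholder for our branch
        # Recording branch: keep the encoded stream (no re-encode);
        # async-finalize closes finished files without blocking the pipeline.
        "t. ! queue leaky=downstream max-size-buffers=200 ! "
        f"splitmuxsink location={out_pattern} "
        "max-size-time=60000000000 async-finalize=true"
    )

print(build_pipeline("rtsp://camera.example/stream"))
```

The key idea is that the two `queue` elements after the `tee` give each branch its own streaming thread, so a slow write in `splitmuxsink` backs up only its own queue rather than the inference path.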