Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): dGPU (A40)
• DeepStream Version: 6.4-triton-multiarch
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.146.02
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
How can I see the fps of the original RTSP source URI, like the fps printed in the console log when DeepStream starts?
You can use ffplay, VLC, or other players to check the original fps. You can also use the following command line to test the original fps. Please refer to this topic.
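The command line from the linked topic is not reproduced here, but one common way to query an RTSP stream's declared frame rate is ffprobe. The URI below is a placeholder, and the ffprobe call (commented out, since it needs a live stream) is an assumption about your setup, not taken from the thread:

```shell
# Print the video stream's average frame rate as a fraction, e.g. 30000/1001.
# "rtsp://<camera-ip>/stream" is a placeholder; substitute your own URI.
# ffprobe -v error -select_streams v:0 \
#     -show_entries stream=avg_frame_rate -of csv=p=0 "rtsp://<camera-ip>/stream"

# The reported fraction can be converted to a decimal fps value with awk:
echo "30000/1001" | awk -F'/' '{printf "%.2f\n", $1/$2}'
```

Note that this reports the frame rate the stream declares, which can differ from the rate actually delivered over the network.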
Ready-made code for measuring the actual inference fps already exists; please refer to my last comment.
To measure the original RTSP fps, you can add a probe function on uridecodebin and count the number of frames per second. Please refer to this code: first create a variable to hold the frame count; each time the probe function is triggered, increment it by one, and report the count once per second.
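A minimal sketch of that counting approach in plain Python. The counter itself is self-contained; the GStreamer attachment shown in the comments assumes you can obtain a suitable pad from your own pipeline (the element and pad names there are illustrative, not prescribed by the thread):

```python
import time

class FpsCounter:
    """Counts buffers and reports fps once per measurement interval."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval   # seconds between fps reports
        self.clock = clock         # injectable clock, eases testing
        self.count = 0
        self.start = clock()

    def tick(self):
        """Call once per buffer; returns fps when an interval elapses, else None."""
        self.count += 1
        now = self.clock()
        elapsed = now - self.start
        if elapsed >= self.interval:
            fps = self.count / elapsed
            self.count = 0
            self.start = now
            return fps
        return None

# In a DeepStream/GStreamer app the counter would be driven from a buffer
# probe (Gst.Pad.add_probe is the GStreamer API; which pad to probe depends
# on your pipeline and is an assumption here):
#
#   def probe_cb(pad, info, counter):
#       fps = counter.tick()
#       if fps is not None:
#           print(f"source fps: {fps:.1f}")
#       return Gst.PadProbeReturn.OK
#
#   srcpad.add_probe(Gst.PadProbeType.BUFFER, probe_cb, FpsCounter())
```

Note that uridecodebin's src pads are created dynamically, so in practice the probe is usually attached from a pad-added callback rather than via a static pad lookup.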
When I calculate the fps on nvurisrcbin's src pad, I get the same fps as at the end of the pipeline. So how can I add a probe function on nvurisrcbin's sink side to measure the original RTSP fps?
Using "nvurisrcbin + nvinfer + fakesink", you can't get the original fps, because inference performance affects the fps of the whole pipeline. To take an extreme example: if the RTSP source is 30 fps but inference performance is poor, the pipeline may run at only 10 fps. You can get the original fps with an "nvurisrcbin + fakesink" pipeline.
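A measurement-only pipeline of that shape can be sketched with gst-launch-1.0. fpsdisplaysink is a standard GStreamer element that wraps a sink (here fakesink) and prints running fps measurements when run with `-v`; the URI is a placeholder, and this exact command is an illustration rather than one given in the thread:

```shell
# Source-only pipeline: no nvinfer in the path, so the measured fps
# reflects the RTSP source itself rather than inference throughput.
# "rtsp://<camera-ip>/stream" is a placeholder; substitute your own URI.
gst-launch-1.0 -v nvurisrcbin uri="rtsp://<camera-ip>/stream" ! \
    fpsdisplaysink video-sink=fakesink text-overlay=false sync=false
```

With `sync=false` the sink does not throttle buffers to the clock, so the printed rate tracks how fast frames actually arrive from the source.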
What if I add a tee after nvurisrcbin, keep the full pipeline in one branch, and put a fakesink with the fps probe function in the other branch to measure the original fps? Would the measurement in that branch be affected by the other branch?
Sorry for the late reply! tee will not duplicate data; it sends buffers to each branch in turn, so adding a tee does not help to measure the original fps.