FPS - Python Sample 2

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) AGX ORIN
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1

I have built a face recognition pipeline by writing over Python sample 2. I want to know how I can capture the exact FPS of the output.
I have added the two lines below in main:
fps_stream = GETFPS(1)
print(fps_stream)

However, this is printed in the terminal as:

"<common.FPS.GETFPS object at 0xffff9205b280>"

Is there a way I can show the FPS in the output video or in the terminal?

GETFPS is a class. deepstream_python_apps/apps/common/FPS.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)

Please read the code to get the correct interfaces.

I have already imported GETFPS from FPS.py in the code, but is the way I added it to sample 2 correct?

GETFPS is a class; it is wrong to print an instance this way, since that only shows the object's default repr. Please refer to the Python documentation for how to use a Python class: 9. Classes — Python 3.12.1 documentation. We have provided the source code; please use the interface in the correct way.
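To make the distinction concrete, here is a minimal sketch, a simplified stand-in for the class in common/FPS.py, assuming the update_fps()/get_fps() pairing used on master: printing the instance only shows its repr, while calling its methods is what produces a number.

import time

# Simplified stand-in for common/FPS.py's GETFPS: count frames with
# update_fps(), then read the rate with get_fps().
class GETFPS:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.start_time = time.time()
        self.frame_count = 0

    def update_fps(self):
        # Call once per frame, e.g. from a buffer pad probe.
        self.frame_count += 1

    def get_fps(self):
        # Call periodically; returns the frame rate since the last call.
        now = time.time()
        fps = self.frame_count / max(now - self.start_time, 1e-6)
        self.start_time, self.frame_count = now, 0
        return round(fps, 2)

fps_stream = GETFPS(1)
print(fps_stream)            # <__main__.GETFPS object at 0x...>, just the repr

for _ in range(10):          # pretend 10 frames arrive, one every 100 ms
    time.sleep(0.1)
    fps_stream.update_fps()
print(fps_stream.get_fps())  # ~10.0, an actual FPS number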

I tried to follow Python sample 3 for the way FPS is added; however, running the app after adding the FPS code generated errors in the log, mostly related to creating the engine file. The engines were created successfully and the pipeline was running with no issues before adding the FPS part.
Please advise what the possible reason could be.
Error by adding FPS.txt (5.0 KB)

The log shows it is an nvinfer configuration issue; it has nothing to do with GETFPS.

Yes, I do understand, but this issue didn't arise until I added the lines below, which are extracted from Python sample 3.

from common.FPS import PERF_DATA

In the pgie probe:
stream_index = "stream{0}".format(frame_meta.pad_index)
global perf_data
perf_data.update_fps(stream_index)

In main:
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, perf_data.perf_print_callback)
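For reference, the pieces above fit together roughly like this. This is only a sketch following the sample-3 pattern; the single-stream PERF_DATA(1) and the metadata walk are assumptions based on a one-source test2 pipeline.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib
import pyds
from common.FPS import PERF_DATA

perf_data = PERF_DATA(1)  # one GETFPS counter per stream; a single source is assumed

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # count this frame for its stream
        perf_data.update_fps("stream{0}".format(frame_meta.pad_index))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Then in main(), after the pipeline is built:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, perf_data.perf_print_callback)  # prints per-stream FPS every 5 s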

I tried adding the GETFPS API to the Python test2 sample deepstream_python_apps/apps/deepstream-test2/deepstream_test_2.py at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com); it works.

Please compare this file with the original sample
deepstream_test_2.py (13.6 KB)

Thank you! It worked.
I have modified my sample 2 Python app to involve:

  1. osd_sink_pad_buffer_probe
  2. sgie_sink_pad_buffer_probe

as in this repository: deepstream-facenet/deepstream_test_2.py at master · riotu-lab/deepstream-facenet · GitHub
I tried to add the same code to the sgie sink pad as in the osd one, so I can measure the FPS after recognition, and I have added in main:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, perf_data.perf_print_callback)

and

vidconvsinkpad.add_probe(Gst.PadProbeType.BUFFER, sgie_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, perf_data.perf_print_callback)

However, the stream FPS for the sgie probe always gives zero.

  1. The FPS will be the same at any pad after nvstreammux, so a second measurement at the sgie pad is redundant.
  2. "update_fps" and "get_fps" should be used as a pair: the counter that perf_print_callback reads must be the same one your probe updates (see the sketch below). Please read the sample code carefully. We have given all samples and source code; please read and understand the sample code.
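Concretely, if you still want a separate counter at the sgie pad despite point 1, each PERF_DATA instance has to be updated by the probe that reports it, otherwise its counters stay at zero. A hypothetical arrangement (the names osd_perf and sgie_perf are illustrative, not from the samples):

osd_perf = PERF_DATA(1)   # updated inside osd_sink_pad_buffer_probe
sgie_perf = PERF_DATA(1)  # updated inside sgie_sink_pad_buffer_probe

def sgie_sink_pad_buffer_probe(pad, info, u_data):
    # walk batch_meta exactly as in the osd probe, then for each frame:
    #     sgie_perf.update_fps("stream{0}".format(frame_meta.pad_index))
    return Gst.PadProbeReturn.OK

In main(), pair each counter with its own print callback:

osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, osd_perf.perf_print_callback)    # reports the osd counter
vidconvsinkpad.add_probe(Gst.PadProbeType.BUFFER, sgie_sink_pad_buffer_probe, 0)
GLib.timeout_add(5000, sgie_perf.perf_print_callback)   # reports the sgie counter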
