FPS from DeepStream seems to be throttled

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano Devkit and Jetson Xavier AGX
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version: 8.2.1

Hi there,

I am trying to understand what is happening with GETFPS, from gpubootcamp/Multi-stream_pipeline.ipynb at master · openhackathons-org/gpubootcamp · GitHub. I just changed the .mp4 source file to an RTSP stream. I tested this code on a Jetson Nano devkit and a Jetson Xavier AGX, getting the same results on both of them.
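For reference, GETFPS is essentially a per-stream frame counter driven from a buffer probe; here is a minimal sketch of the idea (simplified; the notebook's actual class differs in details):

    import time

    class GETFPS:
        """Per-stream frame counter; get_fps() is called once per frame
        from a buffer probe on the pad being measured."""
        def __init__(self, stream_id):
            self.stream_id = stream_id
            self.start_time = time.time()
            self.frame_count = 0

        def get_fps(self):
            self.frame_count += 1
            elapsed = time.time() - self.start_time
            if elapsed >= 5.0:  # report roughly every 5 seconds
                print("stream({})={:.1f} fps".format(
                    self.stream_id, self.frame_count / elapsed))
                self.start_time = time.time()
                self.frame_count = 0

Since the probe sits downstream in the pipeline, this measures the rate at which frames actually flow through that point, not the decoder or inference rate in isolation.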

Nano, RTSP 1920x1080@25fps, 1 stream: 25.0 fps
Nano, RTSP 1920x1080@25fps, 2 streams: 12.4 fps
Nano, RTSP 1920x1080@25fps, 3 streams: 8.2 fps

Nano, RTSP 1280x720@25, 1 stream: 25.0 fps
Nano, RTSP 1280x720@25, 2 streams: 12.4 fps
Nano, RTSP 1280x720@25, 3 streams: 8.2 fps

Xavier AGX, RTSP 1280x720@25, 1 stream: 25.0 fps
Xavier AGX, RTSP 1280x720@25, 2 streams: 12.4 fps
Xavier AGX, RTSP 1280x720@25, 3 streams: 8.2 fps

I expected different frame rates between the Nano and the Xavier, and also between 1280x720 and 1920x1080, since the primary detector is a resnet10.

What am I missing?

Thanks in advance,

Flávio Mello

Could you try to change the sink plugin to fakesink and test the result?

sink = make_elm_or_print_err("fakesink", "fakesink", "Sink")
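One detail worth noting: fakesink does not sync buffers to the pipeline clock by default, while udpsink (like most GstBaseSink subclasses) typically does. A sketch of the swap with those properties made explicit:

    # Dummy sink that discards every buffer it receives.
    sink = make_elm_or_print_err("fakesink", "fakesink", "Sink")
    sink.set_property("sync", False)   # fakesink default: don't pace buffers to the clock
    sink.set_property("async", False)  # don't wait for preroll on state changes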

Now I got what I expected:
Xavier AGX, RTSP 1280x720@25, stream(0)=24.8 fps
Xavier AGX, RTSP 1280x720@25, stream(0)=24.8 fps, stream(1)=25.0 fps
Xavier AGX, RTSP 1280x720@25, stream(0)=26.2 fps, stream(1)=25.2 fps, stream(2)=25.2 fps

This is the code of my sink element. The commented-out lines are from the original code; the uncommented line is yuweiw's tip:

    # Make the UDP sink
    updsink_port_num = 5400
    #sink = make_elm_or_print_err("udpsink", "udp-sink")
    #sink.set_property("host", "224.224.255.255")
    #sink.set_property("port", updsink_port_num)
    #sink.set_property("async", False)
    #sink.set_property("sync", 1)
    #sink.set_property("qos", 0)
    sink = make_elm_or_print_err("fakesink", "fakesink", "Sink")
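For context, in the original notebook this udpsink is the tail of an RTSP-out branch. A rough sketch of the usual DeepStream pattern (the element choices here are assumed, not copied from the notebook):

    # Typical RTSP-out tail: OSD output -> convert -> H.264 encode -> RTP payload -> UDP
    nvvidconv_postosd = make_elm_or_print_err("nvvideoconvert", "convertor_postosd", "Converter 2")
    caps = make_elm_or_print_err("capsfilter", "filter", "Caps filter")
    caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
    encoder = make_elm_or_print_err("nvv4l2h264enc", "encoder", "Encoder")
    rtppay = make_elm_or_print_err("rtph264pay", "rtppay", "RTP payloader")
    # the udpsink then sends RTP to port 5400, which a GstRtspServer instance re-serves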

Can you explain what I am not seeing?

Since you got what you expected, what’s your specific question?

You replaced my udpsink with a dummy sink. By doing that, the inference pipeline manages to process the 3 streams at 25 fps, which is the limit of the source streams. I think that if I increase the number of streams (from 3 to 300, for instance) the pipeline will not be able to sustain that throughput, and the fps will eventually drop to 20, 15, 10, 5 fps.

What I don’t understand is why my original sink was limiting the fps:
Xavier AGX, RTSP 1280x720@25, 1 stream: 25.0 fps
Xavier AGX, RTSP 1280x720@25, 2 streams: 12.4 fps
Xavier AGX, RTSP 1280x720@25, 3 streams: 8.2 fps

And with the dummy sink I get:
Xavier AGX, RTSP 1280x720@25, 1 stream: 25.0 fps
Xavier AGX, RTSP 1280x720@25, 2 streams: 25.0 fps
Xavier AGX, RTSP 1280x720@25, 3 streams: 25.0 fps

From the results, it can be seen that the udpsink plugin caused this difference. Could you try to set sync to false and check the result?
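In the snippet above, that means flipping the commented-out udpsink settings, e.g.:

    sink.set_property("sync", False)   # don't pace buffers to their timestamps
    sink.set_property("async", False)  # don't wait for preroll on state changes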

I tested all four combinations of sync=0/1 and async=True/False, and the results stayed the same as above.

Will there be any issues with the fps if you use a file source? The RTSP source you used may affect the fps.
Also, could you attach your code? We don’t know what exactly you’ve changed.
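If the sources are built with the notebook's uridecodebin-based helper, a file test only changes the URI scheme; the helper name and the URIs below are placeholders:

    # RTSP source vs. local file: only the uri passed to the source bin changes.
    rtsp_bin = create_source_bin(0, "rtsp://192.0.2.1:554/stream")   # placeholder URI
    file_bin = create_source_bin(1, "file:///home/user/sample.mp4")  # placeholder path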

Yes, I also tested with an .mp4 file as the source and it behaves like the RTSP source. Attached you will find the Python code test2.py (18.7 KB) (it has minor changes compared to gpubootcamp/Multi-stream_pipeline.ipynb at master · openhackathons-org/gpubootcamp · GitHub). The config files are the same.

Your DeepStream code is a little old. Please use the latest version and refer to the latest Python apps for DeepStream:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
I have tried it on my board and the fps is correct. You can use the demos in the apps path to test it. Thanks.
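For reference, the current Python apps measure per-stream fps with the common/FPS.py helper; a sketch of how deepstream-test3 wires it up (details may differ between releases):

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib
    import pyds
    from common.FPS import PERF_DATA  # helper shipped with deepstream_python_apps

    perf_data = PERF_DATA(3)  # e.g. three input streams

    def tiler_sink_pad_buffer_probe(pad, info, u_data):
        # Walk the batch metadata and bump each stream's frame counter.
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            perf_data.update_fps("stream{0}".format(frame_meta.pad_index))
            try:
                l_frame = l_frame.next
            except StopIteration:
                break
        return Gst.PadProbeReturn.OK

    # Print the aggregated per-stream fps every 5 seconds from the GLib main loop.
    GLib.timeout_add(5000, perf_data.perf_print_callback)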
