Do the deepstream_python_apps have a performance hit compared to native apps?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Nano 2G
• DeepStream Version
6.0, installed via .deb package
• JetPack Version (valid for Jetson only)
latest (via SD card image, Nov 2021)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I’ve followed deepstream_python_apps to deploy the Python apps onto a Jetson Nano 2G. From very limited testing, running deepstream-test1 and deepstream-test4 in both versions (the built-in C version and the Python version), the frame rate of the Python version is visibly lower to the eye (I still don’t know how to print out the frame rate): the playback window lags noticeably more, and meanwhile I can see this logging in the terminal console:

Frame Number = 240 Vehicle Count = 16 Person Count = 3
Frame Number = 241 Vehicle Count = 13 Person Count = 3
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Frame Number = 242 Vehicle Count = 14 Person Count = 3
Frame Number = 243 Vehicle Count = 13 Person Count = 3

  1. Is this the expected behavior?
  2. If yes, is the Python version suitable for object detection scenarios in production deployments?
  3. In my scenario, the detected objects stay in front of the camera for several seconds, so are there any major issues I should be aware of when running inference at low performance (not real time)?
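Regarding “still don’t know how to print out the frame rate”: one common approach is to count buffers inside the pad probe that deepstream_test_1.py already attaches to the OSD sink pad. Below is a minimal sketch of such a counter; the `FPSCounter` class and its `window` parameter are my own illustration, not part of the sample apps.

```python
import time

class FPSCounter:
    """Tracks frames and reports average FPS over a fixed-size window.

    Intended to be ticked once per buffer, e.g. from the
    osd_sink_pad_buffer_probe() callback in deepstream_test_1.py.
    """

    def __init__(self, window=30):
        self.window = window             # number of frames per measurement
        self.count = 0
        self.start = time.monotonic()

    def tick(self):
        """Call once per frame; returns FPS when a window completes, else None."""
        self.count += 1
        if self.count < self.window:
            return None
        elapsed = time.monotonic() - self.start or 1e-9  # guard divide-by-zero
        fps = self.count / elapsed
        self.count = 0
        self.start = time.monotonic()
        return fps
```

Inside the probe you would then call `fps = counter.tick()` and print the value whenever it is not `None`; this keeps the measurement independent of the renderer’s own clock, so it works with or without an on-screen sink.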

You shouldn’t see a big performance drop. Please check your configuration.

Thanks, I will check more.

So using Python in production is an option, correct?

Hi kesong,

Today I did a fresh install on a Jetson NX at full performance (20W, 6 cores) from the official SD card image, installed DeepStream 6 via .deb package with all default settings, and cloned the Python apps from the repo. After everything was set up, I ran:

python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

I still see the huge lagging. By my estimate the frame rate is less than 10, and the console printed:

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.

Running the native test1 looks much better, and no warnings were printed in the terminal.
Is this the expected performance?

[edit0]:
I tried with --no-display and the performance is much better. Might this be a problem with the video renderer?
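On the --no-display observation: the warnings come from the EGL renderer (nveglglessink) dropping buffers that arrive too late, so removing on-screen rendering removes that bottleneck rather than speeding up inference itself. A small sketch of how the sink could be selected, assuming a hypothetical --no-display flag (the helper name `choose_sink` is my own, not from the samples):

```python
def choose_sink(argv):
    """Return a GStreamer sink element name based on a --no-display flag.

    fakesink simply discards buffers, isolating decode + inference cost
    from rendering; nveglglessink is the on-screen EGL renderer that
    deepstream-test1 uses by default.
    """
    if "--no-display" in argv:
        return "fakesink"
    return "nveglglessink"
```

The returned name would then be passed to `Gst.ElementFactory.make()`. Setting the sink’s `sync` property to false also silences the “buffers are being dropped” warnings, at the cost of no longer pacing playback to buffer timestamps.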

What is the status here? Is there still any issue?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.