Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson Nano 2GB
• DeepStream Version
6.0, installed via the .deb package
• JetPack Version (valid for Jetson only)
latest (from the SD card image, as of Nov 2021)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I’ve followed deepstream_python_apps to deploy the Python apps on the Jetson Nano 2GB. From very limited testing, running deepstream-test1 and deepstream-test4 in both versions (the built-in C version and the Python version), the frame rate of the Python version is visibly lower to the eye (I still don’t know how to print the frame rate; a probe sketch I’m considering is shown after the log output below), as the lagging is more noticeable in the video playback window. Meanwhile, I can see this logging in the terminal console:
Frame Number = 240 Vehicle Count = 16 Person Count = 3
Frame Number = 241 Vehicle Count = 13 Person Count = 3
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Frame Number = 242 Vehicle Count = 14 Person Count = 3
Frame Number = 243 Vehicle Count = 13 Person Count = 3
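Side note on measuring the frame rate: since I could not find a built-in printout in the sample, a minimal sketch I’m considering is a buffer pad probe that counts frames. This is only an assumption on my part, not code from the samples; the class name `FpsProbe` and the 5-second interval are my own choices.

```python
# Minimal FPS-counter sketch, assuming the deepstream-test1 Python pipeline.
# The probe only counts buffers, so it can attach to any element's sink pad.
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


class FpsProbe:
    """Prints the measured frame rate every `interval` seconds."""

    def __init__(self, interval=5.0):
        self.interval = interval
        self.frames = 0
        self.start = time.time()

    def __call__(self, pad, info, u_data):
        self.frames += 1
        elapsed = time.time() - self.start
        if elapsed >= self.interval:
            print(f"Measured FPS: {self.frames / elapsed:.2f}")
            self.frames = 0
            self.start = time.time()
        return Gst.PadProbeReturn.OK


# Usage inside main(), after the pipeline is built ("osd" is the nvdsosd
# element in the sample; any element's sink pad works the same way):
# osd_sink_pad = osd.get_static_pad("sink")
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, FpsProbe(), 0)
```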
- Is this the expected behavior?
- If yes, is the Python version suitable for an object-detection scenario in a production deployment?
- In my scenario, the detected objects would stay in front of the camera for several seconds, so are there any major issues I should be aware of when running inference at low performance (not real time)?
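For reference, one experiment I’m considering for the buffer-drop warning (assuming the sink setup from deepstream-test1; I haven’t verified this is the right fix) is to disable clock synchronization on the renderer so frames are displayed as soon as they are ready instead of being dropped for arriving late:

```python
# Sketch only: disable clock sync on the EGL renderer used by deepstream-test1.
# "sync" and "qos" are standard GstBaseSink properties; sync=False makes the
# sink render buffers immediately instead of dropping late ones, at the cost
# of real-time pacing.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink.set_property("sync", False)
sink.set_property("qos", False)  # also stop the sink sending QoS events upstream
```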