Deepstream YoloV8 Performance Issues

• Hardware Platform: Jetson AGX Orin
• DeepStream 6.2
• JetPack Version: not sure
• Tegra: 35 (release), REVISION: 3.1, GCID: 32827747, BOARD: t186ref, EABI: aarch64, DATE: Sun Mar 19 15:19:21 UTC 2023
• TensorRT Version: 8.5.2-1+cuda11.4

• Issue Type (questions, new requirements, bugs)
The type of issue is the buffer drop warning:

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink:
There may be a timestamping problem, or this computer is too slow.

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing.)
By running deepstream-test3.py with YOLOv8 inference (model conversion by Marcos Luciano, model by Ultralytics), I get dropped buffers. I would like to know what I can do to try mitigating this…

  1. Run three MP4 video streams @ 720p
  2. The model in use is a YoloV8s from

wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8s.pt

MUXER_OUTPUT_WIDTH=1280
MUXER_OUTPUT_HEIGHT=720
MUXER_BATCH_TIMEOUT_USEC=4000000
TILED_OUTPUT_WIDTH=1920 #1920(640), 3840(1280)
TILED_OUTPUT_HEIGHT=360 #360(360), 720(720)
...
streammux.set_property('width', 1280)
streammux.set_property('height', 720)

Config-file from deepstream-yolo: config_infer_primary_yoloV8.txt
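As a side note on the settings above: MUXER_BATCH_TIMEOUT_USEC=4000000 means nvstreammux may wait up to 4 seconds to form a batch. A common rule of thumb (my assumption here, not something stated in this thread) is to set batched-push-timeout to roughly one frame duration in microseconds, so the muxer never stalls longer than a frame on a slow source:

```python
# Sketch (assumption, not from this thread): compute nvstreammux's
# batched-push-timeout as one frame duration in microseconds.
# batch_timeout_usec is a hypothetical helper name.

def batch_timeout_usec(fps: float) -> int:
    """Return a per-frame batching timeout in microseconds for a given FPS."""
    return int(1_000_000 / fps)

print(batch_timeout_usec(30))  # 33333 us for 30 fps sources
```

For 30 fps sources this gives 33333 µs, far below the 4000000 µs used above.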

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I found a forum thread by daneLLL

Just changing the output sink and adding async, as suggested there, does not work.

I have tried removing the printout from the pgie_src_pad_buffer_probe. That helps performance a little.
The general problem is that the image shown on screen lags a lot, and the warning below is printed:

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(3003): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstNv3dSink:nv3d-sink:
There may be a timestamping problem, or this computer is too slow.
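For intuition, here is a simplified model (my own sketch, not GStreamer's actual implementation) of why gst_base_sink_is_too_late drops buffers: a buffer whose presentation timestamp plus the allowed lateness is already behind the running clock is discarded instead of rendered, so a pipeline that falls behind keeps dropping until it catches up:

```python
# Simplified sketch (assumption, not GStreamer's exact logic) of the
# "buffer is too late" decision made by a sink. max_lateness_ms is a
# hypothetical parameter mirroring the sink's max-lateness property.

def is_too_late(buffer_pts_ms: int, clock_ms: int, max_lateness_ms: int = 20) -> bool:
    """Return True if a buffer due at buffer_pts_ms should be dropped."""
    return buffer_pts_ms + max_lateness_ms < clock_ms

# A buffer due at t=100 ms arriving when the clock reads 150 ms is dropped:
print(is_too_late(100, 150))  # True
print(is_too_late(100, 110))  # False
```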

I also tried changing the muxer size and tiled size to something smaller, but it does not have any effect. Interestingly, the speed shown on screen (with timestamp) sometimes updates only every 2 seconds. Sometimes it looks like it's running at full FPS but the warning still appears. So the results are not consistent.

Any ideas what to play with?

I compiled the C++ version of the same sample and ran it with YOLOv8. That worked without lag.
I will play around with the resolution in the config file and see if I can mess it up.

Have you measured the model performance by “trtexec” tool?
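A typical trtexec benchmark run on the exported ONNX model looks like the command assembled below. The helper function name and the ONNX filename are my own illustration; `--onnx` and `--fp16` are standard trtexec options:

```python
# Hypothetical helper that assembles a trtexec benchmarking command for an
# exported YOLOv8 ONNX model. trtexec builds a TensorRT engine and reports
# throughput/latency, which isolates model performance from the pipeline.

def trtexec_cmd(onnx_path: str, fp16: bool = True) -> list:
    cmd = ["trtexec", "--onnx={}".format(onnx_path)]
    if fp16:
        cmd.append("--fp16")  # benchmark in half precision
    return cmd

print(" ".join(trtexec_cmd("yolov8s.onnx")))
```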

If the model performance is poor, you need to optimize the model itself. It is not a DeepStream issue.

If the model performance is good enough but the DeepStream pipeline performance is not as expected, please check which part takes more resources (e.g. CPU load, GPU load, …).

If there is a high GPU load, you can use the Nsight tools to help find the bottleneck.

Hi,

No, I have not tested the performance with trtexec. I will do that.
My first assumption was that there are properties in the config file worth looking at
before looking into the model itself.

I found, for example, that I could lower the precision from FP32 to FP16. I will also try to get INT8 to work.
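For reference, precision in the nvinfer config (config_infer_primary_yoloV8.txt in this case) is selected with the network-mode key. A minimal fragment, assuming the rest of the [property] group stays as shipped:

```
[property]
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

Note that INT8 (network-mode=1) additionally requires a calibration file, so FP16 is the easier first step.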
I will also have a look at Nsight to see if that gives me some info.

Thanks for the tips, I'll keep this thread updated…

gst-nvinfer is designed to help integrate TensorRT inferencing functions. If the model cannot be optimized further, you may also use the "interval" parameter of gst-nvinfer to skip some frames (see Gst-nvinfer — DeepStream 6.3 Release documentation). This will skip the inferencing result for some frames; you can decide whether to use it according to your own requirements and tolerance. If the model is too heavy to achieve your goal, you need to compromise something.
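The interval semantics can be sketched as follows: with interval=N, one frame is inferred and the next N are skipped, so inference runs on every (N+1)th frame. The helper below is my own illustration of that behavior:

```python
# Sketch of gst-nvinfer's "interval" property semantics: with interval=N,
# inference runs on one frame and then skips the next N frames. A tracker,
# if present, typically carries detections across the skipped frames.

def inferred_frames(total_frames: int, interval: int) -> list:
    """Return the frame indices on which inference actually runs."""
    return [f for f in range(total_frames) if f % (interval + 1) == 0]

print(inferred_frames(10, 1))  # [0, 2, 4, 6, 8]
```

So interval=1 halves the inference load at the cost of detections on every other frame.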


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.