DeepStream 3.0 on Tesla P4 Performance Issue

I am using a Tesla P4 with TensorRT 5.x and DeepStream 3.0.
I am running the sample app with the following command:

deepstream-app -c configs/deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt

Initially it runs fast, but it slows down over time. In nvidia-smi I can see the Volatile GPU-Util increase to 100%.

With [sink0] set to type=2, sync=1, I see the following messages on the console:

WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2854): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstEglGlesSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
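
For reference, the relevant sink group in the sample config looks roughly like this (a sketch from memory of the DeepStream 3.0 sample configs; keys other than type and sync are assumptions and may differ in your file):

  [sink0]
  enable=1
  # type=2 selects the EGL on-screen renderer (the EglGlesSink in the pipeline above)
  type=2
  # sync=1 clocks rendering to buffer timestamps; buffers that arrive late are dropped
  sync=1
  source-id=0

Setting sync=0 is often suggested as a workaround so the sink renders as fast as possible instead of dropping late buffers, but it does not address the underlying slowdown.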

Eventually the system hangs and I can't perform any action.

I also tried another sample (the tracker app) with 4 streams; screenshots are attached. Within less than a minute, GPU-Util rose to 100% and the same warnings appeared.


I am facing the same issue and get the same warnings, and the output video is black (out.mp4 is black). I am using one GPU of an NVIDIA Tesla K80 on Ubuntu 16.04 LTS.
Please help.

There were a few things in my case:

  1. It was a GPU overheating issue (a way to check this is sketched below).
  2. When the video's FPS is higher than the processing FPS, the pipeline starts dropping frames and displays the warning.
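
To check the overheating case, watch the GPU temperature alongside utilization while the app runs. A minimal sketch using nvidia-smi (these query fields exist in stock nvidia-smi; adjust the interval to taste):

  # Print timestamp, temperature, utilization and memory use every 5 seconds
  nvidia-smi --query-gpu=timestamp,temperature.gpu,utilization.gpu,memory.used --format=csv -l 5

If the temperature keeps climbing while utilization sits at 100%, the card is probably thermal throttling, which would match the gradual slowdown described above.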