Breaking the Boundaries of Intelligent Video Analytics with DeepStream SDK 3.0

Originally published at: Breaking the Boundaries of Intelligent Video Analytics with DeepStream SDK 3.0 | NVIDIA Technical Blog

A picture is worth a thousand words and videos have thousands of pictures. Both contain incredible amounts of insights only revealed through the power of intelligent video analytics (IVA). The NVIDIA DeepStream SDK accelerates development of scalable IVA applications, making it easier for developers to build core deep learning networks instead of designing end-to-end applications…

I am running the sample (deepstream-app -c configs/deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt) on DeepStream 3.0 with a Tesla P4. Initially it processes videos quickly, but performance degrades over time. Using nvidia-smi I can see the volatile GPU utilization climbing until it reaches 100%.
With [sink0] set to type=2 (EglSink) and sync=1, I see the following message in the console log:

There may be a timestamping problem, or this computer is too slow.
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2854): gst_base_sink_is_too_late ():
/GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstEglGlesSink:sink_sub_bin_sink1:
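
One commonly suggested workaround for this particular warning (assuming the drops come from the sink enforcing presentation timestamps, not from the pipeline slowdown itself) is to disable clock synchronization on the sink in the app config, e.g.:

```
# Sketch of a [sink0] group in the deepstream-app config file;
# only sync is changed from the values described above.
[sink0]
enable=1
# type=2 selects the on-screen EGL/GLES sink (EglSink)
type=2
# sync=0 renders buffers as they arrive instead of dropping
# those that miss their timestamps (at the cost of A/V sync)
sync=0
```

Note that sync=0 only suppresses the buffer-drop symptom; it does not explain the steadily rising GPU utilization.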

Tahir,
Could you please report this issue in the DeepStream forum on devtalk.nvidia.com (https://devtalk.nvidia.com/…)?
Our support team will be able to work with you on your specific use case and offer help.

Hi all, does TensorRT Inference Server (TRTIS) support running video analytics inference with DeepStream? I have read that DeepStream takes TensorRT-optimized inference engines as input to run inference, and I would like to know whether DeepStream also works with TRTIS.