Getting message: A lot of buffers are being dropped.

Hello,

I am using a Jetson Nano with a fresh install of JetPack 4.2.1.

I took the pretrained SSD Inception v2 (2018) model from the TensorFlow model zoo repo.

I created the engine file with the TensorRT sample sampleUffSSD by serializing the engine:

trtModelStream = engine->serialize();  // trtModelStream is an nvinfer1::IHostMemory*
    std::ofstream p("./ssd_inception_v2.engine", std::ios::binary);  // write the raw engine blob in binary mode
    p.write(reinterpret_cast<const char*>(trtModelStream->data()), trtModelStream->size());
    p.close();
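
To make sure the serialized file is readable, it can be loaded back through the TensorRT runtime. This is only a rough sketch; it assumes gLogger is the sample's logger and that the plugins used at build time are already registered (e.g. via initLibNvInferPlugins):

// Sketch only: read the engine file back and deserialize it.
// Needs <fstream>, <string> and <iterator>.
std::ifstream f("./ssd_inception_v2.engine", std::ios::binary);
std::string blob((std::istreambuf_iterator<char>(f)), std::istreambuf_iterator<char>());
nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
nvinfer1::ICudaEngine* loadedEngine =
    runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);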

In the DeepStream sample “deepstream-test3”, my dstest3_pgie_config.txt is as below:

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=/usr/src/tensorrt/samples/sampleUffSSD/ssd_inception_v2_2018_pretrained.engine
#model-engine-file=/usr/src/tensorrt/samples/sampleUffSSD/ssd_inception_v2_aws_18874.engine
#labelfile-path=../../../../samples/models/Primary_Detector/ssd_coco_labels.txt
labelfile-path=../../../../samples/models/Primary_Detector/coco_labels.txt
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=MarkOutput_0
custom-lib-path=/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
parse-bbox-func-name=NvDsInferParseCustomSSD

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
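
For completeness, I launch the sample as below (the file path is only an example, not my actual clip):

./deepstream-test3-app file:///home/nvidia/sample_720p.mp4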

The problem is that when I run the sample with an MP4 video, the video runs at normal speed for a few seconds, then pauses, then runs at normal speed again. Some frames are being skipped.

Detections are okay.

To check the inference time for each frame, I took the time difference between tiler_src_pad_buffer_probe and tiler_sink_pad_buffer_probe (added by me).

I found a time difference of 14 to 25 milliseconds.
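
For reference, the probes just take timestamps around the tiler. A stripped-down sketch of the idea (not the exact code from my app; it assumes a single source with one buffer in flight at a time, and that tiler_sink_pad / tiler_src_pad are the tiler's pads obtained with gst_element_get_static_pad):

static gint64 sink_time_us = 0;

static GstPadProbeReturn
tiler_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    sink_time_us = g_get_monotonic_time ();   /* microseconds */
    return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn
tiler_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
    g_print ("per-frame latency: %ld ms\n",
             (long) ((g_get_monotonic_time () - sink_time_us) / 1000));
    return GST_PAD_PROBE_OK;
}

/* attached in main(), e.g.:
   gst_pad_add_probe (tiler_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      tiler_sink_pad_buffer_probe, NULL, NULL);
   gst_pad_add_probe (tiler_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                      tiler_src_pad_buffer_probe, NULL, NULL); */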

Hi,

Have you maximized the device performance first?
This is a complicated pipeline that includes decoding and inference.

Thanks.

Hello AstaLLL,

I maximized the device performance by running the commands below:

sudo nvpmodel -m 0
sudo jetson_clocks
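
The selected power mode can be double-checked with:

sudo nvpmodel -q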

I checked again and found the same behavior.

For some frames, the inference time goes as high as 9,000 to 10,000 ms.

Are there any configuration parameters, such as batch size, that need to be configured?

Hi,

Setting the batch size to 1 should be good.
Do you still see the frame-drop behavior after maximizing the performance?

Thanks.

Batch size is already 1.

No change in the frame-drop behaviour after maximizing the performance.

Hi,

Would you mind checking the GPU load in your environment with tegrastats first?

sudo tegrastats

If the GPU utilization already reaches 99%, it’s recommended to set a larger interval value for nvinfer first.
The parameter indicates how often inference is applied, e.g. every frame (interval=0), every other frame (interval=1), …
This will ease the GPU workload and improve the dropped-frames issue.
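
For example, running inference only on every second frame just needs this in the [property] group of dstest3_pgie_config.txt:

interval=1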

Thanks.