DeepStream with a TLT Faster-RCNN model: skipped frames and no detection boxes during real-time display

When I use DeepStream to deploy the TLT Faster-RCNN model for real-time detection and display, the video skips frames and no detection boxes are shown. The command I run is:

./deepstream-custom -c pgie_frcnn_tlt_config.txt -i /home/ubuntu/samba_file/opt/nvidia/deepstream/deepstream-5.0/samples/streams/test.h264 -d

Hi,

Could you add the following settings to your config file to check whether any detections are generated first:

For example, as in source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
gie-kitti-output-dir=/home/nvidia/
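With gie-kitti-output-dir set, DeepStream writes one KITTI-format label file per frame into that directory. A minimal sketch of checking such a file for detections (the sample line and the field layout are assumptions based on the standard KITTI detection format: class name, truncation, occlusion, alpha, then bbox left/top/right/bottom):

```python
# Sketch: parse one line of a KITTI-format label file such as those
# written by DeepStream's gie-kitti-output-dir option.
def parse_kitti_line(line):
    fields = line.split()
    cls = fields[0]                              # object class name
    left, top, right, bottom = map(float, fields[4:8])  # bbox in pixels
    return cls, (left, top, right, bottom)

# Hypothetical sample line, for illustration only.
sample = "car 0.0 0 0.0 100.0 120.0 300.0 260.0 0 0 0 0 0 0 0"
cls, bbox = parse_kitti_line(sample)
print(cls, bbox)  # → car (100.0, 120.0, 300.0, 260.0)
```

If the label files exist but are empty, the pipeline runs without producing detections; if no files appear at all, the config change did not take effect.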

Thanks.


There is no difference after I run it with those settings, and no files are written to /home/ubuntu/.

When I do not use the -d option, the out.h264 video file is generated and it contains many normal detection boxes, but when I use -d the display skips frames and freezes. Is it because the fps is too low?
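A minimal sketch of how average fps could be computed from per-frame arrival times, to sanity-check whether a low frame rate explains the stutter (the timestamps below are simulated, not taken from DeepStream):

```python
def average_fps(timestamps):
    # Mean frames-per-second from a list of frame arrival times (seconds).
    if len(timestamps) < 2:
        return 0.0
    span = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / span

# Simulated stream with frames 0.1 s apart, i.e. 10 fps.
ts = [i * 0.1 for i in range(30)]
print(round(average_fps(ts), 1))  # → 10.0
```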

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

It is possible, since running inference with a 1920x1080 input requires significant GPU resources.
Which platform are you using?
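A quick sketch for checking GPU load while the pipeline runs, on either platform (it assumes tegrastats is on PATH on Jetson devices and nvidia-smi on dGPU machines):

```shell
# Sketch: pick the right GPU-utilization tool for the platform.
if command -v tegrastats >/dev/null 2>&1; then
  tool_msg="Jetson platform: run 'sudo tegrastats' while the pipeline runs"
elif command -v nvidia-smi >/dev/null 2>&1; then
  tool_msg="dGPU platform: run 'nvidia-smi dmon -s u' while the pipeline runs"
else
  tool_msg="no NVIDIA GPU monitoring tool found on PATH"
fi
echo "$tool_msg"
```

If GPU utilization sits near 100% during playback, the display stutter is most likely an inference bottleneck rather than a decoding or rendering problem.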

Thanks.