I have already tried DeepStream SDK 4.0.1, but it didn't work. So I set batch-size to 25 in the file source30_1080p_dec_infer-resnet_tiled_display_int8.txt; then it works, but the FPS can't reach 30.
I also ran into a similar problem! I am running the latest DeepStream 4.0.1 SDK.
Everything is fine when I run: deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
I took the config file, modified it to display 1x1 instead of 4x4, and set the source to an RTSP stream coming from an IP camera. I do see frames from the IP camera, but they come in very slowly. And I get these errors:
**PERF: FPS 0 (Avg)
**PERF: 93.95 (93.95)
**PERF: 7.46 (8.57)
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstEglGlesSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
WARNING from sink_sub_bin_sink1: A lot of buffers are being dropped.
Debug info: gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/GstEglGlesSink:sink_sub_bin_sink1:
There may be a timestamping problem, or this computer is too slow.
I'm not sure what's happening. Could someone please give me some insight?
Below is a snippet of my modifications to the config file.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
I have the same issue. I also get the message "… A lot of buffers are being dropped … There may be a timestamping problem, or this computer is too slow."
However, I have noticed it only happens when I decode an IP camera over RTSP; I have tried several brands, FPS rates, and resolutions (HD, FHD). In the examples deepstream-test3-app (C++) and deepstream-test3 (Python 3) the same thing happens (with the default configuration files).
But when I run these same examples with a video file (file:///…/sample_720p.mp4 [or h264]), it works like a charm; I get 30 fps.
I get these same warnings a lot using various network sources, whether RTSP or YouTube or whatever. The streams work, maybe dropping a frame here and there, but the warning spam is more annoying than the drops. Is there any way to suppress these warnings (a property on the sink, for example)?
In the GStreamer framework there is a mechanism that synchronizes rendering to frame timestamps; it may block and trigger these warning messages. Please refer to comment #11.
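For illustration, disabling that synchronization on the sink looks like this with the Python bindings (a minimal sketch; the element and variable names here are just an example, not from the samples):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# GstBaseSink-derived sinks hold each buffer until its timestamp is due,
# and warn when buffers keep arriving too late; with sync=false the sink
# renders buffers as soon as they arrive instead.
sink = Gst.ElementFactory.make("nveglglessink", "display-sink")
sink.set_property("sync", False)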
Thanks. I was browsing the documentation on that earlier today. The problem is that if I set sync to false, it breaks playback from files and some other sources like YouTube URIs (they play as fast as they can).
Is there a way to instruct a sink to drop frames if necessary to resync, maybe with max-lateness? I haven't experimented with it yet (a sketch of what I mean follows at the end of this post). I should mention I'm using nvmultistreamtiler before my sink. My pipeline is basically [uridecodebin…] ! nvstreammux ! nvmultistreamtiler ! nvoverlaysink
I mention this because I am wondering: does nvmultistreamtiler sync in order to join the frames? If so, how, and what happens if the timestamps between the streams begin to drift? I want to add Raspberry Pi sources to test as well, and that board lacks an RTC, leading to sync issues with clients. I haven't tested yet, but I can imagine having some problems.
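Something like this is what I have in mind for the sink (an untested sketch; sync, max-lateness, and qos are standard GstBaseSink properties, but I don't know yet whether they help here):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# keep sync enabled so files still play at the right speed, but let the
# sink drop frames that arrive late instead of stalling the pipeline
sink = Gst.ElementFactory.make("nvoverlaysink", "sink")
sink.set_property("sync", True)
sink.set_property("max-lateness", 20 * Gst.MSECOND)  # drop buffers >20 ms late
sink.set_property("qos", True)  # emit QoS events so upstream can adapt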
Thanks for replying. I forgot to mention that when streaming over RTSP I only got 1 or 2 FPS maximum. But after adding streammux.set_property('live-source', 1) in the file deepstream-test3.py, everything works correctly.
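For anyone else hitting this, the change amounts to one extra property on the muxer in deepstream-test3.py (a sketch; the variable names mirror that sample, and I hard-code a single camera here):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
number_sources = 1  # one RTSP camera in my test

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
streammux.set_property("live-source", 1)  # the fix: treat inputs as live
streammux.set_property("batch-size", number_sources)
streammux.set_property("batched-push-timeout", 4000000)  # usec
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)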
I encountered the same problem when I run deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt.
Following the information mentioned here, I set sync=0 and get about 25 fps, less than 30 fps.
My development environment is:
Intel(R) Core™ i7-7800X CPU @ 3.50GHz
GeForce RTX 2080 Ti x 2
Ubuntu 18.04
Driver 418.87.00
CUDA 10.1
TensorRT 6.0.1
cuDNN 7
DeepStream 4.0.2 (deepstream-app 4.0.2)
The GPU utilization is only 18%. Where could the problem be?
When I enable sink2 for RTSP streaming, the FPS drops to about 5 or below and the GPU utilization is only 3%.
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:17:00.0 Off | N/A |
| 47% 49C P2 57W / 250W | 3333MiB / 10989MiB | 3% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:65:00.0 On | N/A |
| 39% 37C P8 26W / 250W | 1583MiB / 10986MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 22006 C python3 631MiB |
| 0 29399 C deepstream-app 2564MiB |
| 1 1212 G /usr/lib/xorg/Xorg 40MiB |
| 1 1523 G /usr/bin/gnome-shell 59MiB |
| 1 2581 G /usr/lib/xorg/Xorg 675MiB |
| 1 2732 G /usr/bin/gnome-shell 371MiB |
| 1 3163 G ...AAAAAAAAAAAAAAgAAAAAAAAA --shared-files 294MiB |
| 1 22561 G ...uest-channel-token=12432704521303380480 127MiB |
+-----------------------------------------------------------------------------+
Here is my config file:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=5
columns=6
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=15
#drop-frame-interval=2
#num-extra-surfaces=15
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=15
#drop-frame-interval=2
#num-extra-surfaces=15
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=30
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=30
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[tests]
file-loop=0
I haven't looked at the pipeline of the DeepStream reference app, but in my case I am going to put my network/file sources/sinks in their own threads by using various queue elements before and after my inference bin.
I am thinking a multiqueue before my stream muxer and queues before my network/file sinks. I don't know if you can do that with the reference app without modification. I will update if the approach helps with this issue.
Edit: it seems uridecodebin (or rather decodebin) creates a multiqueue as it is, so that part wouldn't help much. I'm going to try it on the sink side.
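Roughly what I plan to try on the sink side (a sketch only, not verified to fix the warnings; the tiler and sink stand in for the elements from my pipeline above):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("test")

# stand-ins for the real elements in my pipeline
tiler = Gst.ElementFactory.make("nvmultistreamtiler", "tiler")
sink = Gst.ElementFactory.make("nvoverlaysink", "sink")

# the queue gives the sink its own streaming thread; leaky=2 drops the
# oldest buffers instead of back-pressuring the tiler when the sink lags
queue = Gst.ElementFactory.make("queue", "sink-queue")
queue.set_property("leaky", 2)
queue.set_property("max-size-buffers", 4)

for el in (tiler, queue, sink):
    pipeline.add(el)
tiler.link(queue)
queue.link(sink)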