The problem is I get 4 streams, each averaging 30 fps, 120 fps combined.
I want to run inference on only a single sample video (how do I change the number of samples?).
Only one GPU is being utilised, and only at about 15%; the fps doesn't go over 30.
I tried setting the sinks to FakeSink and EglSink and disabled the tiled display. How do I maximise the fps, possibly to 1000+?
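With file sources, the ~30 fps ceiling usually comes from the sink synchronising buffers to the pipeline clock, which paces decode to real time even with a fake sink. A minimal sketch of the relevant deepstream-app config groups, assuming the reference deepstream-app and illustrative values:

[application]
# print per-stream fps every 5 seconds
enable-perf-measurement=1
perf-measurement-interval-sec=5

[sink0]
enable=1
# type=1 selects FakeSink (no rendering)
type=1
# sync=0 stops pacing buffers to real time, letting the pipeline
# run as fast as decode and inference allow
sync=0
gpu-id=0

With sync=0, the reported fps is bounded by decode and inference throughput rather than by the source frame rate.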
Thank you, I had followed that, but it only bumped the fps up by 3 to 4. So a single stream caps out right around 30 fps, and a powerful GPU instead lets us run multiple streams, say 30 or 40, according to the computation power.
Can you please let me know:
1. Is there any script to run at 1000 fps (inside DeepStream)?
2. Is there a YOLO inference script with a prebuilt model?
3. How do I run inference simultaneously on 30 or 40 streams?
1. Is there any script to run at 1000 fps (inside DeepStream)?
2. Is there a YOLO inference script with a prebuilt model?
[amycao] You can run with multiple streams to reach your GPU's computation capacity.
3. How do I run inference simultaneously on 30 or 40 streams?
[amycao] Set multiple streams in the config file.
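For reference, "multiple streams" here maps to a source group in the deepstream-app config. A minimal sketch, assuming the reference deepstream-app config format (the URI is a placeholder):

[source0]
enable=1
# type=3 is the multi-URI source; num-sources is only honoured for this type
type=3
uri=file:///path/to/sample_1080p_h264.mp4
# decode 30 instances of the same file in parallel
num-sources=30
gpu-id=0
# 0 = device memory for decoder output buffers
cudadec-memtype=0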
Changing num-sources doesn't change the number of streams; it is still 4. I want to know which line causes the code to run 4 streams or 30 streams.
I think batch-size is where the number of streams is reflected, but file1 has 30 and file2 has 4, which doesn't make much sense. Playing around with the batch size in both the source and config files did not give any fruitful result.
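A possible explanation, assuming the stock sample configs (the two groups below are hypothetical): the total stream count is the sum over all enabled [sourceN] groups, and num-sources is only honoured in type=3 (multi-URI) groups, so editing it anywhere else has no effect. A file like this would always produce 4 + 1 = 5 streams:

[source0]
enable=1
# type=3: contributes num-sources streams
type=3
uri=file:///path/to/sample.mp4
num-sources=4

[source1]
enable=1
# type=2 (single URI): always contributes exactly 1 stream; num-sources is ignored
type=2
uri=file:///path/to/other.mp4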
[amycao] You can run with multiple streams to reach your GPU's computation capacity.
[amycao] Set multiple streams in the config file.
I can't see any multi-streams option in the config file.
Yes, you also need to change the batch-size in the pgie, and the streammux batch-size, to the number of sources. Setting batch-size to the number of sources in the nvinfer element will let the GPU run the inference computation simultaneously.
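Putting this together, three numbers generally have to agree: the stream count, the streammux batch size, and the pgie batch size. A sketch for 30 streams, assuming the deepstream-app config format (the URI is a placeholder):

[source0]
enable=1
type=3
uri=file:///path/to/sample.mp4
# 30 streams in total
num-sources=30

[streammux]
# one muxed batch = one frame from each of the 30 streams
batch-size=30

[primary-gie]
# nvinfer then runs the whole 30-frame batch through the network at once
batch-size=30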
[tiled-display]
enable=0
rows=1
columns=6
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[streammux]
gpu-id=1
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=60
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
attach-sys-ts-as-ntp=1
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=1
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=60
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=1
config-file=config_infer_primary.txt
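Note that batch-size also lives inside the nvinfer config file referenced above (config_infer_primary.txt), and the engine filename encodes the batch size it was built for (b30 here, while the group above asks for 60, which suggests a mismatch worth checking). A sketch of the matching [property] group, assuming the stock config_infer_primary.txt layout:

[property]
gpu-id=1
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
# keep this in line with the streammux and pgie batch sizes; if a pre-built
# engine does not match, nvinfer rebuilds it from the model files at startup
batch-size=30
# 1 = int8 precision
network-mode=1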
Sorry, but I still don't understand. When I look for nvinfer there are multiple files with the same name; do I have to edit one of these files? What are pgie and sgie? I am kind of lost.