Hi,
I am experimenting with DeepStream inference on multiple cameras using a YOLOv8s model on a Jetson Nano.
When I use one camera/stream, I get 25 fps.
When I add another camera/stream (testing with 2 streams), I get 21 fps for the 1st camera and 5.5 fps for the 2nd camera.
I do not clearly understand why the 1st camera gets a higher fps while the 2nd camera's rate drops so low. Could you please help me understand this difference between the two streams' fps in DeepStream?
I could not save the fps output, but it looks like this (for 2 streams):
**PERF: 21.24 (21.11) 5.10 (5.5)
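A simple way to keep these numbers next time (a sketch, assuming the stock deepstream-test5-app binary with enable-perf-measurement=1 set in the [application] section; the config file name is a placeholder) would be to tee the app's stdout to a file:

deepstream-test5-app -c test5_config.txt 2>&1 | tee perf.log
# pull out the per-stream fps lines afterwards
grep PERF perf.log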
• Hardware Platform (Jetson / GPU) - Jetson Nano Dev Kit 4GB
• DeepStream Version - 6.0.1
• JetPack Version (valid for Jetson only) - 4.6.1
• TensorRT Version - 8.2.1.8
• CUDA Version - 10.2.300
• cuDNN Version - 8.2.1.32
Thanks.
How did you run your case? With which app? What is the configuration of the important elements in your DeepStream pipeline (nvstreammux, nvinfer, sink, …)?
What kind of camera are you working with? What are the camera video's properties (FPS, resolution, format, …)?
DeepStream is an SDK, not an application. Please provide details of your implementation.
@Fiona.Chen, I used the deepstream-test5 app:
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:admin123_@192.168.20.95:554
#num-sources=2
gpu-id=0
nvbuf-memory-type=0
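For the 2-stream test there is a matching second source section; it is identical apart from the URI, so here is just a sketch (the second camera's address is a placeholder):

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#placeholder - the 2nd camera's RTSP address
uri=rtsp://<user>:<password>@<camera2-ip>:554
gpu-id=0
nvbuf-memory-type=0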
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_redis_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=0.0.0.0;6379;deepstream_topic
topic=deepstream_topic
#Optional:
msg-broker-config=./configs/cfg_redis.txt
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt
msg-conv-frame-interval=1
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1
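As an aside on the values above: batched-push-timeout=40000 usec is exactly one frame interval at the cameras' 25 fps (1/25 s = 40 ms = 40000 usec), so the muxer waits at most one frame time for a full batch before pushing a partial one. For live RTSP cameras the commonly documented pattern (a sketch, not my running config) looks like:

[streammux]
##RTSP cameras are live sources
live-source=1
##one batch slot per connected stream (2 here)
batch-size=2
##wait at most one frame interval (1/25 s = 40000 usec) for a full batch
batched-push-timeout=40000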
[primary-gie]
enable=1
gpu-id=0
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
#model-engine-file=../../../../../samples/models/Primary_Detector/model_b1_gpu0_fp16.engine
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/yolov8n_320_b1_redis.pt.onnx_b1_gpu0_fp16.engine
#../model_b1_gpu0_fp16.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary_yoloV8.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
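One detail worth spelling out: the engine file name above ends in b1_gpu0_fp16.engine, i.e. it was serialized for max batch 1, which matches batch-size=1 here. A multi-stream variant would pair a larger batch-size with an engine built for that batch; a sketch (the _b2_ engine file name is hypothetical - nvinfer would rebuild the engine from the ONNX model if the named file does not exist yet):

[primary-gie]
#batch both streams into one inference call
batch-size=2
#hypothetical name following the same <onnx>_b<batch>_gpu0_fp16 pattern
model-engine-file=/opt/nvidia/deepstream/deepstream-6.0/samples/models/Primary_Detector/yolov8n_320_b1_redis.pt.onnx_b2_gpu0_fp16.engine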
Both cameras have the same properties: the frame rate is 25 fps, the resolution is 1920×1080, and the codec is H.265.
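(For reference, those stream properties can be double-checked with gst-discoverer-1.0, pointing it at the camera URI from the config above:)

gst-discoverer-1.0 "rtsp://admin:admin123_@192.168.20.95:554"
#prints the stream's resolution, frame rate and codec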
Thanks for writing back.