DeepStream Yolo Multiple Streams


I would like to know how I can run YOLO on multiple streams, like the example video in the DeepStream SDK. Currently I can run it on a sample video or a single camera, but I want to run the same model on multiple streams.

I am using AGX Xavier.

Is there any documentation that can help?

Change the [source] group in deepstream_app_config_yoloVxxx.txt.
Refer to 'samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt'

 #Type - 1=CameraV4L2 2=URI 3=MultiURI
 # (0): memtype_device   - Memory type Device
 # (1): memtype_pinned   - Memory type Host Pinned
 # (2): memtype_unified  - Memory type Unified
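
For reference, a minimal multi-stream source group might look like the sketch below. The URI and stream count are placeholders, not values from this thread; check the sample config referenced above for the full set of options:

```
[source0]
enable=1
# Type 3 = MultiURI: the same URI is decoded num-sources times,
# which is a quick way to simulate multiple input streams
type=3
uri=file:///path/to/sample_1080p.mp4
num-sources=4
gpu-id=0
# cudadec-memtype: 0=device, 1=pinned, 2=unified (see comments above)
cudadec-memtype=0
```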

Hello ChrisDing, thank you for your answer.
I managed to run YOLOv3 on DeepStream for 4 CSI camera streams.
However, the FPS dropped to about 7.5 fps for each stream.
Is there any solution to boost the FPS to 30 fps for YOLO on the 4 streams?

  1. Set inference “interval”

  2. Adjust batch size

  3. Enable int8

  4. Drop frames in decoder.

  5. Boost the Jetson Xavier clocks

    $ nvpmodel -m 0

    $ jetson_clocks
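
As a sketch of steps 1-3 above, the relevant knobs live in the nvinfer config file. The parameter values below are illustrative, not tuned recommendations, and the calibration-file name is the one shipped with the objectDetector_Yolo sample (verify against your own setup):

```
# config_infer_primary_yoloV3.txt
[property]
# 1. interval: number of frames to skip between inferences
#    (interval=1 runs inference on every 2nd frame)
interval=1
# 2. batch-size: batch all 4 streams into one inference call
batch-size=4
# 3. network-mode: 0=FP32, 1=INT8, 2=FP16; INT8 needs a calibration table
network-mode=1
int8-calib-file=yolov3-calibration.table.trt5.1
```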

I’m working on a similar problem and my experience is quite comparable: a custom YOLOv3 model with INT8 calibration achieving 28 fps on 1 stream and 7.5 fps on 4 streams. I’ve been working with the sample (/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo/) on Xavier and got custom YOLO models to work by following the instructions specified in:

What’s the procedure to adjust the batch size? Should I change the batch_size parameter in yolov3.cfg (which is used by config_infer_primary_yoloV3.txt to build the TensorRT engine), or in deepstream_app_config_yoloV3.txt, or both?
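
For context, here is my understanding (not an authoritative answer) of where batch size appears across the configs; the values are placeholders for a 4-stream setup:

```
# deepstream_app_config_yoloV3.txt
[streammux]
batch-size=4        # frames muxed together before inference

[primary-gie]
batch-size=4        # should match the inference engine's batch size

# config_infer_primary_yoloV3.txt
[property]
batch-size=4        # batch size the TensorRT engine is built with
```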

Can we change the input image shape? For example, for the PeopleNet model, could we input images of shape 224x120 instead of 960x544? Correct me if I am wrong, but I believe that would leave ample space in memory and reduce compute per frame, hence leading to an increase in FPS.
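
For a Darknet-style YOLO model (as opposed to PeopleNet), the input resolution is set in the [net] section of yolov3.cfg; as far as I know the dimensions must be multiples of 32, and the values below are just an example of a smaller input:

```
# yolov3.cfg
[net]
# Smaller input = less compute per frame (usually higher FPS, lower accuracy).
# Width and height must be multiples of 32.
width=416
height=416
channels=3
```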

Hi 1.umairjavaid,

Please open a new topic with more details of your issue. Thanks.