Multi-pipeline vs Batch Processing

DeepStream Version - 7.0
Docker Image - nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
GPU - NVIDIA A100-SXM4-40GB
NVIDIA GPU Driver - 535.183.01

I have a question. Let’s say I have 2 deployment systems (both with detection and tracking plugins enabled) running on just one GPU card (assuming the card can process that many frames together):

  1. One pipeline processing 100 distinct RTSP streams with an inference batch size of 100
  2. 100 distinct pipelines, each processing 1 RTSP stream with an inference batch size of 1

What would the GPU compute and memory utilization be in each case, compared qualitatively?
Is system 1 expected to perform better than system 2?
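For reference, with the `deepstream-app` reference application, system 1 roughly corresponds to a single config along these lines (the URI, model config path, and resolution values below are placeholders, not taken from the original post):

```ini
# Sketch of system 1: one deepstream-app instance, 100 RTSP sources,
# inference batched at 100. URIs and config paths are illustrative.
[source0]
enable=1
type=4                  # 4 = RTSP source
uri=rtsp://camera-host/stream
num-sources=100         # enumerate the real per-camera URIs in practice

[streammux]
batch-size=100          # mux all 100 streams into one batch
batched-push-timeout=40000

[primary-gie]
enable=1
batch-size=100          # detector engine built for batch size 100
config-file=config_infer_primary.txt

[tracker]
enable=1
```

System 2 would instead launch 100 separate instances, each with `num-sources=1` and `batch-size=1` in both `[streammux]` and `[primary-gie]`.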

Please check your detection model’s performance at batch size 1 and at batch size 100.

System 1 means running one detection model with batch size 100.

System 2 means running 100 detection models with batch size 1 each.

It depends on your model; you need to measure the performance of one batch-size-100 detection model against 100 batch-size-1 detection models.
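One way to take that measurement, assuming the model is available as ONNX, is to benchmark it with TensorRT’s `trtexec` at both batch sizes. The file name `model.onnx`, the input binding name `input`, and the 3x544x960 input shape below are placeholders; substitute your actual detector and its input dimensions, and compare the reported throughput (qps) and GPU memory of the two runs:

```shell
# Throughput of the detector at batch size 1
# (what each of the 100 single-stream pipelines would run)
trtexec --onnx=model.onnx --shapes=input:1x3x544x960 --fp16

# Throughput of the detector at batch size 100
# (what the single 100-stream pipeline would run)
trtexec --onnx=model.onnx --shapes=input:100x3x544x960 --fp16
```

If the batch-100 run delivers well over 100x the frames per second of a single batch-1 engine time-sliced 100 ways, batching (system 1) is the better use of the GPU.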
