Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
A10 dGPU on x86 platform, Ubuntu 20.04
• DeepStream Version
6.1
• JetPack Version (valid for Jetson only)
N/A (x86 dGPU)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 515.65.01
• Issue Type (questions, new requirements, bugs)
Questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Using the NVIDIA DeepStream 6.1 framework, we have created a pipeline to perform some benchmarking:
RTSP stream -> decode -> batching -> detect (deepstream_yolo, YOLOv2-tiny) -> classify (ResNet18) -> fakesink
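For reference, a minimal gst-launch sketch of one instance (the URI, resolution, and config-file names below are placeholders, not our exact setup, and the classifier config is assumed to set process-mode=2 for secondary inference):

```
gst-launch-1.0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 live-source=1 ! \
  nvinfer config-file-path=config_infer_primary_yoloV2_tiny.txt ! \
  nvinfer config-file-path=config_infer_secondary_resnet18.txt ! \
  fakesink sync=false \
  uridecodebin uri="rtsp://<server>/<stream>" ! m.sink_0
```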
While running multiple instances of this pipeline, we can spawn a maximum of 22 or 23 due to the dGPU memory limit of 24 GiB.
With 23 instances, nvidia-smi reports 56% GPU utilization and 15% memory utilization.
It looks like the GPU has the headroom to run more instances, but due to the memory limit, instances launched after the 23rd fail with an out-of-memory exception.
Using nvidia-smi, we also observed that each pipeline instance takes up 984 MiB.
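The per-instance figure was read from nvidia-smi's per-process accounting, e.g.:

```
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
```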
- Is there a way to increase the memory available to the A10 dGPU, e.g. via swap or additional physical memory?
- Can we tune any parameters particular to the above pipeline configuration so that each instance allocates less memory? (See the sketch after this list for the kind of settings we mean.)
- Even though each pipeline takes ~1 GiB, the overall memory utilization reported when 20+ instances are running is very low; is there a way to configure DeepStream so that instances allocate memory on demand?
- In one of the DeepStream sample applications (Triton), a configuration attribute "tf_gpu_memory_fraction" is used; can we use a similar configuration with the YOLO models?
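For context, these are the kinds of settings we are asking about. A sketch of the relevant config groups, with illustrative values (not our exact configuration):

```
# deepstream-app config, [streammux] group
[streammux]
batch-size=1
buffer-pool-size=4        # size of nvstreammux output buffer pool
nvbuf-memory-type=0

# nvinfer model config, [property] group (one per model)
[property]
batch-size=1
workspace-size=512        # TensorRT builder workspace, in MiB
```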
Please advise whether there is a better way of handling memory allocation in this pipeline.
nvbuf-memory-type=0 is being used in all of the configuration groups where it applies.