Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) A4000 & T4
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) 515.65.01
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
We are running a Python app for inference; our observations are below.
We are using the Python DeepStream gRPC client. With 10 or more sources in a single DeepStream pipeline, we see frames being dropped even though GPU headroom is still available: GPU utilization stays at around 50% and does not increase.
If we instead limit each pipeline to at most 8 sources and run multiple pipelines in the same DeepStream container, the GPU is utilized up to its maximum capacity and we see no frame drops until the GPU is exhausted. A simplified sketch of how we split the sources across pipelines is shown below.
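For reference, this is a minimal sketch (not our full application) of the splitting logic: the source list is chunked into groups of at most 8, and one pipeline is built per group inside the same process/container. The inference element, its config path, and the RTSP URIs are placeholders for illustration, assuming nvinferserver in gRPC mode.

```python
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

MAX_SOURCES_PER_PIPELINE = 8
INFER_CONFIG = "config_infer_grpc.txt"  # placeholder nvinferserver (gRPC) config

def build_pipeline(uris):
    """One pipeline: N x uridecodebin -> nvstreammux -> nvinferserver -> fakesink."""
    desc = (
        f"nvstreammux name=mux batch-size={len(uris)} width=1920 height=1080 "
        f"batched-push-timeout=40000 ! "
        f"nvinferserver config-file-path={INFER_CONFIG} ! fakesink sync=false"
    )
    for i, uri in enumerate(uris):
        desc += f" uridecodebin uri={uri} ! mux.sink_{i}"
    return Gst.parse_launch(desc)

def main(uris):
    Gst.init(None)
    loop = GLib.MainLoop()
    pipelines = []
    # Split the full source list into chunks of at most MAX_SOURCES_PER_PIPELINE
    # and start one pipeline per chunk in the same container.
    for start in range(0, len(uris), MAX_SOURCES_PER_PIPELINE):
        chunk = uris[start:start + MAX_SOURCES_PER_PIPELINE]
        p = build_pipeline(chunk)
        p.set_state(Gst.State.PLAYING)
        pipelines.append(p)
    try:
        loop.run()
    finally:
        for p in pipelines:
            p.set_state(Gst.State.NULL)

if __name__ == "__main__":
    # e.g. python3 split_pipelines.py rtsp://cam1 rtsp://cam2 ...
    main(sys.argv[1:])
```

With a single pipeline of 10+ sources (i.e. MAX_SOURCES_PER_PIPELINE raised above 10) we see the ~50% GPU utilization ceiling and frame drops; with the 8-source split above, GPU utilization scales up as expected.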
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)