• Hardware Platform (Jetson / GPU): GPU (Tesla T4)
• DeepStream Version: 5.0
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 460.32
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used and other details for reproducing.)
Run deepstream-app with a YOLOv4 model on 12 RTSP camera streams.
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hi,
I am using a Tesla T4, running DeepStream with YOLOv4 (80 classes).
Camera streams: 12 RTSP (1080p @ 25 FPS).
Batch-size = 12 for both streammux and primary-gie.
I want to check how much load the T4 can handle.
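For reference, the relevant groups of my deepstream-app config look roughly like this (a minimal sketch; the RTSP URI and the nvinfer config file name are placeholders, and [source1] through [source11] repeat the same pattern):

```
# One of the 12 RTSP sources; [source1]..[source11] follow the same pattern
[source0]
enable=1
# type=4 selects an RTSP source in deepstream-app
type=4
uri=rtsp://<camera-0-address>/stream
gpu-id=0

[streammux]
gpu-id=0
live-source=1
batch-size=12
# ~40 ms, one frame period at 25 FPS
batched-push-timeout=40000
width=1920
height=1080

[primary-gie]
enable=1
gpu-id=0
batch-size=12
# YOLOv4 nvinfer config (placeholder file name)
config-file=config_infer_primary_yoloV4.txt
```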
Here are a few observations:
- When I use "interval=0" in the [primary-gie] group, it does not run well: FPS drops to 2 and lots of frame glitches appear. I am not sure whether the model is too heavy and the volume of incoming frames is causing a buffering issue, or whether it is something else.
- When I update to "interval=12" in the [primary-gie] group (skipping 12 batches between inference runs; see the sketch after this list), it runs well: output FPS = 25, with some bbox trailing on detections (because of the batch skip). Decoder load is in the range of 35-40% and GPU load is in the range of 50-60%.
- After about 10 minutes of running, nvidia-smi reports a temperature >= 85 degrees. Then all of a sudden the GPU goes to 100%, the decoder drops to 0%, and glitches appear. After that the temperature suddenly drops below 85 degrees and everything runs fine for a few seconds. This cycle repeats every 10-15 seconds.
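The only change between the two runs above is the interval line in the [primary-gie] group (sketch; everything else stays as posted earlier):

```
[primary-gie]
# interval = number of consecutive batches to skip between inference runs:
# interval=0 infers on every batch; interval=12 infers on every 13th batch,
# i.e. roughly twice per second per stream at 25 FPS
interval=12
```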
I am not sure whether this is related to temperature. What is the optimum operating temperature for a T4? I have kept the system in a well-maintained, cool environment.
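To narrow this down on the next run, I plan to log temperature, clocks and throttle flags alongside the per-engine utilization (standard nvidia-smi usage; the T4's own slowdown/shutdown thresholds can also be read from the board instead of guessing an "optimum"):

```
# Per-second power/temperature and SM/mem/enc/dec utilization
nvidia-smi dmon -s pu

# Log temperature, SM clock and thermal-throttle flags every 5 seconds
nvidia-smi --query-gpu=timestamp,temperature.gpu,clocks.sm,clocks_throttle_reasons.sw_thermal_slowdown,clocks_throttle_reasons.hw_thermal_slowdown --format=csv -l 5

# Read the board's reported slowdown/shutdown temperature thresholds
nvidia-smi -q -d TEMPERATURE
```

If the sw_thermal_slowdown or hw_thermal_slowdown flag turns Active right when the glitches start, that would point to thermal throttling rather than a pipeline issue.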
Thanks.