I ran into a problem using DeepStream for multi-camera video analysis on a Jetson Xavier NX.
When I run multi-camera analysis, GPU usage climbs to 99% and the debug video from the cameras becomes very choppy and laggy.
By multi-camera I mean 2 or 3 RTSP cameras.
With a single camera (1 RTSP stream), GPU usage averages about 40%-60%.
The official specification says the Xavier NX can handle analysis of 16 camera video streams, so why is GPU usage so high with only 2 or 3 RTSP cameras?
Some details of my program:
The nvinfer part includes a pgie and a tracker. The pgie is the PeopleNet model (ResNet34 backbone) from the TLT pretrained models, and the tracker is libnvds_mot_klt.
Screenshots taken after running the program are attached.
By default we run ResNet10 to demonstrate 16-source input. Please try running deepstream-app with this config file:
In your case, it looks like the model is too heavy and the GPU is overloaded with multiple sources. We suggest adjusting the interval setting in [primary-gie]. Please refer to the documentation:
DeepStream Reference Application - deepstream-app — DeepStream 5.1 Release documentation
Frequently Asked Questions — DeepStream 5.1 Release documentation
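As a sketch of the suggested change: in a deepstream-app config, the interval key in [primary-gie] tells nvinfer to skip that many frames between inference runs, letting the tracker carry detections across the skipped frames. The file path and other values below are placeholders for illustration, not taken from this thread:

```ini
# Hypothetical [primary-gie] section of a deepstream-app config file.
# interval=2 means nvinfer runs inference on every 3rd frame;
# the tracker propagates detections on the skipped frames.
[primary-gie]
enable=1
gpu-id=0
# Placeholder path - point this at your own nvinfer config file
config-file=config_infer_primary.txt
batch-size=2
interval=2
```

Raising interval trades detection freshness for GPU headroom, which is usually the first knob to turn when a heavy model overloads the GPU with multiple sources.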
Hi DaneLLL, thanks a lot for your advice.
I replaced the nvinfer model in my program from ResNet34 (the TLT-pretrained PeopleNet model) to ResNet10. GPU usage dropped to 30%-50% with 2 cameras, and the latency of both cameras is lower than before and now acceptable.
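For reference, swapping the pgie model comes down to editing the nvinfer config that [primary-gie] points at. This is a minimal sketch; all file paths and the class count are placeholders assumed from the standard DeepStream samples, not copied from the thread:

```ini
# Hypothetical nvinfer [property] section after swapping models.
[property]
gpu-id=0
# Before: TLT-pretrained PeopleNet (ResNet34 backbone), loaded via
#   tlt-encoded-model=... and tlt-model-key=...
# After: the lighter ResNet10 detector shipped with DeepStream samples
model-file=resnet10.caffemodel
proto-file=resnet10.prototxt
labelfile-path=labels.txt
batch-size=2
# 4 classes in the sample ResNet10 detector (Vehicle, Bicycle, Person, Roadsign)
num-detected-classes=4
```

A smaller backbone cuts per-frame inference cost roughly in proportion to its compute, which matches the GPU usage drop observed above.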
BTW, I hope the next-gen Jetson Xavier will have a much more powerful GPU (10 times today's).
We do have the Jetson AGX Xavier, which delivers 32 TOPS of AI performance; you can find more information here: AI-Powered Autonomous Machines at Scale | NVIDIA Jetson AGX Xavier
If you need more compute power, the EGX platform can fulfill your requirement (10x AI performance); more information: EGX Platform for Accelerated Computing | NVIDIA
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.