Xavier NX memory usage when loading multiple models

The program is rewritten based on the deepstream-test1 sample app. When it runs without loading a model, the memory footprint is 1.4 GB; when it loads the yolov5l (class=2) model, the footprint is 2.5 GB; when it loads both the yolov5l (class=2) and yolov5l (class=80) models, the footprint is 3.0 GB. Why does the second model add only 0.5 GB instead of the 1.1 GB (2.5 − 1.4) that the first model added?
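For context on where numbers like these come from, here is a minimal sketch of one way to measure a process's resident memory; it assumes Linux's /proc interface (available on Jetson) and that you point it at the running deepstream-app's PID:

```python
# Minimal sketch (assumption: Linux /proc is available, as on Jetson).
# Reports the resident set size (VmRSS) of a process by PID, which is
# one way to obtain per-process figures like 1.4 GB / 2.5 GB / 3.0 GB.
import sys

def rss_gib(pid: int) -> float:
    """Resident memory of `pid` in GiB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / (1024 ** 2)  # value is in kB
    raise RuntimeError("VmRSS not found")

if __name__ == "__main__":
    print(f"{rss_gib(int(sys.argv[1])):.2f} GiB resident")
```

Run it, for example, as `python3 rss.py $(pidof deepstream-test1-app)` after each model-loading stage.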

**• Hardware Platform (Jetson / GPU)** Nano
**• DeepStream Version** 5.0
**• JetPack Version (valid for Jetson only)** 4.4

Hi Ray,
@RayZhang
Could you help check the sizes of the two ONNX/PyTorch models (yolov5l-class2 and yolov5l-class80)?

Haoran

yolov5l-class2 ONNX model size: 178 MB
yolov5l-class80 ONNX model size: 180 MB
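(For reference, a quick sketch of how such on-disk sizes can be checked; the file names here are assumptions for illustration:)

```python
# Quick check of on-disk model sizes; paths are hypothetical examples.
import os

for path in ("yolov5l-class2.onnx", "yolov5l-class80.onnx"):
    print(f"{path}: {os.path.getsize(path) / 1024**2:.0f} MB")
```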

@RayZhang
Actually, you can see that each model is only about 200 MB on disk, but DeepStream needs far more memory than that. When DeepStream initializes a model, a lot of additional memory is allocated by DeepStream and TensorRT beyond the model weights and the hidden-layer activations. Much of it is allocated when the first model is loaded rather than when DeepStream itself starts, so if you initialize two models in one deepstream-app, the second model does not need to repeat those one-time allocations. That is why the second model adds only about 0.5 GB.
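You can see the same effect outside DeepStream with a minimal sketch that deserializes two TensorRT engines in one process and prints resident memory after each. The .engine file names below are assumptions; you would need to build them beforehand (e.g. with trtexec):

```python
# Sketch: the first TensorRT engine pays the one-time cost (CUDA
# context, cuDNN/cuBLAS libraries, plugins); the second adds far less.
# Engine paths are hypothetical; build serialized engines beforehand.
import tensorrt as trt

def rss_gib() -> float:
    """Resident memory of this process in GiB, from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / (1024 ** 2)  # value is in kB
    return 0.0

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)
print(f"before any engine: {rss_gib():.2f} GiB")

engines = []
for path in ("yolov5l_class2.engine", "yolov5l_class80.engine"):
    with open(path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    engines.append((engine, engine.create_execution_context()))
    print(f"after {path}: {rss_gib():.2f} GiB")
```

On Jetson, the first deserialization typically pulls in the CUDA context and the cuDNN/cuBLAS libraries, so it costs far more than the second; that same one-time cost explains the DeepStream numbers above.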