Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) : Jetson Orin NX
• DeepStream Version : 6.3.0
• JetPack Version (valid for Jetson only) : 5.1.2
• TensorRT Version : 8.5.2.2-1+cuda11.4
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) : bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) : I’ll attach the sample code.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I’m developing an application that uses DeepStream to build a pipeline for video inference.
I noticed that GPU memory usage kept increasing as I repeatedly removed and recreated the pipeline; after a few iterations the application crashed with a message that GPU usage had reached 99%.
I narrowed the problem down and concluded that the nvinfer element was not releasing its memory when destroyed.
Attached is sample code that simply creates an nvinfer element, sets it to PLAYING, then sets it to NULL and unrefs it, repeating the cycle. With the timer firing every 5 seconds, I see roughly a 10 MB increase per create/destroy cycle.
I assume nvinfer is not cleaning up correctly, but I’d like to know if there is a step I’m missing to free the memory.
CMakeLists.txt (879 Bytes)
dsserver_pgie_config.txt (3.5 KB)
The .cpp file failed to upload, so I’m posting the code inline here.
//
// Created by moony on 24. 1. 31.
//

//#include <spdlog/spdlog.h>
#include <iostream>
#include <gst/gst.h>

GstElement *nvinfer = nullptr;
int count = 0;

// Alternates between creating an nvinfer element (even iterations) and
// destroying it (odd iterations) so memory growth per cycle can be observed.
gboolean createAndDestroy(gpointer /*ptr*/) {
    // spdlog::debug("{}", __func__);
    std::cout << __func__ << std::endl;
    if (count % 2 == 0) {
        nvinfer = gst_element_factory_make("nvinfer", "primary-nvinference-engine");
        g_object_set(G_OBJECT(nvinfer),
                     "config-file-path",
                     "/tmp/jetson-streaming/infer-strm-svr/gpu-leak-only-element/dsserver_pgie_config.txt",
                     nullptr);
        gst_element_set_state(nvinfer, GST_STATE_PLAYING);
    } else {
        gst_element_set_state(nvinfer, GST_STATE_NULL);
        gst_object_unref(nvinfer);
        nvinfer = nullptr;
    }
    count++;
    return TRUE; // keep the timeout source active
}

int main(int argc, char *argv[]) {
    // spdlog::set_level(spdlog::level::debug);
    gst_init(&argc, &argv);
    auto mainLoop = g_main_loop_new(nullptr, FALSE);
    g_timeout_add_seconds(5, (GSourceFunc)createAndDestroy, nullptr);
    g_main_loop_run(mainLoop);
    return 0;
}
The value of the config-file-path property set on nvinfer, as well as the model file paths referenced inside that config file, should be adjusted to your environment.
The GPU memory increase was observed with jtop.