Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.0
We are trying to capture an event that occurs in the live stream as a 5-second video, using a dynamic pipeline with nvv4l2h264enc, nvvideoconvert and a file sink.
In the main pipeline we add a tee, to which a recording branch is dynamically attached when we call the start_recording function.
In the start_recording function we add and link the recording pipeline elements (queue → nvvideoconvert → capsfilter → nvv4l2h264enc → h264parse → qtmux → filesink), then call g_timeout_add_seconds to set a 5-second timer that stops the recording via timeout_cb_0 (the timer callback).
In the timer callback we add an IDLE probe on the tee src pad and unlink the dynamic pipeline elements in unlink_cb (the idle probe’s callback).
In the idle probe callback we send an EOS event, then remove the pipeline elements, set them to the NULL state and unref them.
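A rough sketch of the flow described above (this is not the poster’s actual code; element names and callback names follow the post, the main pipeline and tee are assumed to exist, and GStreamer is imported lazily inside the functions so the pure helper stays importable without PyGObject):

```python
# Sketch of the dynamic recording branch: start, 5-second timer, idle-probe teardown.
import time

def make_record_filename(prefix="event", ts=None):
    """Build an output filename for one recording session (pure helper)."""
    ts = int(time.time()) if ts is None else int(ts)
    return f"{prefix}_{ts}.mp4"

def start_recording(pipeline, tee, duration_sec=5):
    from gi.repository import Gst, GLib  # deferred import

    # queue -> nvvideoconvert -> capsfilter -> nvv4l2h264enc
    #   -> h264parse -> qtmux -> filesink
    names = ["queue", "nvvideoconvert", "capsfilter",
             "nvv4l2h264enc", "h264parse", "qtmux", "filesink"]
    elems = [Gst.ElementFactory.make(n, None) for n in names]
    elems[2].set_property(
        "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
    elems[-1].set_property("location", make_record_filename())
    for e in elems:
        pipeline.add(e)
    for a, b in zip(elems, elems[1:]):
        a.link(b)
    tee_src = tee.get_request_pad("src_%u")
    tee_src.link(elems[0].get_static_pad("sink"))
    for e in elems:
        e.sync_state_with_parent()

    # 5-second timer that triggers the teardown, as in the post.
    GLib.timeout_add_seconds(duration_sec, timeout_cb_0,
                             (tee, tee_src, elems, pipeline))
    return tee_src

def timeout_cb_0(data):
    from gi.repository import Gst
    _, tee_src, _, _ = data
    # Tear the branch down only when the pad is idle.
    tee_src.add_probe(Gst.PadProbeType.IDLE, unlink_cb, data)
    return False  # one-shot timer

def unlink_cb(pad, info, data):
    from gi.repository import Gst
    tee, tee_src, elems, pipeline = data
    # Unlink from the tee, then push EOS into the branch so qtmux can
    # finalize the file. Note: production code should wait for the EOS
    # message from this branch on the bus before the NULL transition;
    # tearing down before EOS has drained can leak buffers.
    tee_src.unlink(elems[0].get_static_pad("sink"))
    elems[0].get_static_pad("sink").send_event(Gst.Event.new_eos())
    for e in elems:
        e.set_state(Gst.State.NULL)
        pipeline.remove(e)
    tee.release_request_pad(tee_src)
    return Gst.PadProbeReturn.REMOVE
```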
Unfortunately, we noticed that memory use increases after saving each video. Do you have any advice on how to avoid this memory leak?
What kind of memory leak is it? Can you confirm where the leak comes from? Is the leak related to any DeepStream plugin? Have you tried deepstream-app with smart recording, and is there any memory leak with deepstream-app?
It is hard to check for memory leaks by eye.
Can you try the tool Valgrind (https://valgrind.org/)? We have found it a good tool for identifying where a memory leak comes from.
The following is a sample command line:
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --track-origins=yes --log-file=valgrind-out_2.log <your-application>
When I was using DS 4.0.2 with the same logic, I also had memory leaks. I found a number of ways to work around them, in the following order:
1. I created a Python wrapper that used the subprocess module to start my DeepStream app and monitor system memory. When available memory dropped below 10%, I restarted the DeepStream app. Restarts are fairly quick once the nvinfer engine files have already been generated.
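That first workaround can be sketched with the standard library alone. The app path and the 10% threshold below are placeholders, and available memory is read from /proc/meminfo (Linux only):

```python
# Watchdog sketch: restart a DeepStream app when available memory drops
# below a threshold. Linux-only (reads /proc/meminfo).
import subprocess
import time

def mem_available_pct():
    """Return MemAvailable as a percentage of MemTotal."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.strip().split()[0])  # value is in kB
    return 100.0 * fields["MemAvailable"] / fields["MemTotal"]

def run_with_restarts(cmd, threshold_pct=10.0, poll_sec=5.0):
    """Run `cmd`; restart it whenever available memory falls below threshold."""
    proc = subprocess.Popen(cmd)
    try:
        while True:
            time.sleep(poll_sec)
            if proc.poll() is not None or mem_available_pct() < threshold_pct:
                proc.terminate()
                proc.wait(timeout=10)
                # Restart is quick: the nvinfer engine file is already cached.
                proc = subprocess.Popen(cmd)
    finally:
        proc.terminate()

# Example (path is hypothetical):
# run_with_restarts(["./deepstream-my-app", "config.txt"])
```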
2. Avoid the source of the leaks by re-encoding h264/h265 full time and putting your tee after that point, where you dynamically attach a file muxer and filesink. Alternatively, put your recording tee before you decode your source; that way you don’t need to re-encode and therefore won’t hit the memory leaks.
3. Switch to smart record so you can remove all the dynamic pipeline manipulation and simplify your code.
We need to record more than one video at the same time, which is not supported by Smart Record.
Regarding the second solution, we tried the following pipeline:
nvstreammux → nvinfer → nvtracker → nvvideoconvert → capsfilter(video/x-raw, format=RGBA) → dsexample → nvvideoconvert → capsfilter(video/x-raw(memory:NVMM), format=(string)I420) → nvv4l2h264enc → h264parse → tee → queue → fakesink
but we get a Segmentation fault (core dumped).
I’ve tried on dGPU and on a Jetson NX board with the same pipeline, using the configs under the /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app folder, and there is no error.
To record a period of video, you only need to change the state of the encoder → muxer → filesink branch and send an EOS event. You don’t need to add/remove pads and elements during playback.
It is possible to record overlapping videos from the same source, e.g. by using a tee linked to two encoder → muxer → filesink bins: you can control the state and events of each bin separately. When the start/stop recording durations overlap, the recorded videos overlap. This is also how DeepStream smart recording works.
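A minimal Python/GStreamer sketch of that arrangement, assuming a running pipeline with a tee on the raw video path (the element choices and the sessions_overlap helper are illustrative, not from the thread; GStreamer is imported lazily inside the functions):

```python
# Sketch: independent encoder->muxer->filesink bins fed from one tee,
# each started and stopped on its own so their recordings can overlap.
def sessions_overlap(a, b):
    """True if two (start, end) recording windows overlap (pure helper)."""
    return a[0] < b[1] and b[0] < a[1]

def make_record_bin(index):
    from gi.repository import Gst  # deferred import
    bin_ = Gst.Bin.new(f"record_bin_{index}")
    names = ["queue", "nvv4l2h264enc", "h264parse", "qtmux", "filesink"]
    elems = [Gst.ElementFactory.make(n, None) for n in names]
    elems[-1].set_property("location", f"record_{index}.mp4")
    for e in elems:
        bin_.add(e)
    for a, b in zip(elems, elems[1:]):
        a.link(b)
    # Ghost the queue's sink pad so the bin can be linked from the tee.
    bin_.add_pad(Gst.GhostPad.new("sink", elems[0].get_static_pad("sink")))
    return bin_

def start_record_bin(pipeline, tee, index):
    bin_ = make_record_bin(index)
    pipeline.add(bin_)
    pad = tee.get_request_pad("src_%u")
    pad.link(bin_.get_static_pad("sink"))
    bin_.sync_state_with_parent()
    return pad, bin_

def stop_record_bin(pipeline, tee, pad, bin_):
    from gi.repository import Gst
    # Unlink, push EOS so qtmux writes its headers, then tear down.
    pad.unlink(bin_.get_static_pad("sink"))
    bin_.get_static_pad("sink").send_event(Gst.Event.new_eos())
    # Production code should wait for this bin's EOS message on the bus
    # before the NULL transition, or the file may be truncated.
    bin_.set_state(Gst.State.NULL)
    pipeline.remove(bin_)
    tee.release_request_pad(pad)
```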
You can control the “smart-rec-video-cache”, “smart-rec-duration” and “smart-rec-start-time” parameters, together with “NvDsSRStart” and “NvDsSRStop”, for every smart recording bin to make the videos overlap or not.
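For reference, a sketch of how those parameters appear in a deepstream-app source group (key names follow the DeepStream 5.0 smart-record documentation; the URI and all values are placeholders, so treat this as an illustration rather than a tested config):

```ini
# Sketch of a deepstream-app source group with smart record enabled.
[source0]
enable=1
type=4
uri=rtsp://<camera-uri>
# 1 = start/stop via cloud messages; 2 = cloud and local events
smart-record=2
smart-rec-dir-path=/tmp/recordings
smart-rec-file-prefix=event
# seconds of video kept cached ahead of an event trigger
smart-rec-video-cache=10
# seconds before the current time at which a session starts
smart-rec-start-time=2
# recording length in seconds
smart-rec-duration=5
# 0 = mp4, 1 = mkv
smart-rec-container=0
```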
As I understand from the picture, the number of smart recording bins corresponds to how many overlapping videos I want to save at the same time. Is that right? If so, it will not solve my problem, since I’m recording 5 seconds of an unlimited number of events occurring in the same source and I don’t know how many events will happen at the same time. Is there a dynamic way to generate a smart recording bin when multiple events are being recorded?
The answer to your first question is yes.
As for your requirement of recording a random number of overlapping videos at random times, there is no easy way to do this.