How to dynamically add/remove a filesink

Yeah, we’re waiting for DS 5.0 to go GA and get its kinks worked out before we migrate our deployed DS 4.0.2 app, but this is a great starting point in the meantime - we’ll give it a shot! Thanks!

Hi jasonpgf2a, we’ve been busy and haven’t had a chance to take a closer look at this until now. Given that we want to record the same output shown on the tiled display (all sources together), where would we put the tee element in this case?

thanks!

After the tiler.

Gotcha. So, similarly, we’ll have one permanent fakesink branch on that tee (since we don’t have anything else that runs all the time), and then whenever we need to record, we just hook onto this tee?

That’s right…
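
A minimal sketch of that arrangement, assuming the usual DeepStream element names (the display branch and most properties are omitted; this is not the exact code from this thread):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("ds-pipeline")

# ... upstream elements (streammux, pgie, ...) would feed the tiler.
tiler = Gst.ElementFactory.make("nvmultistreamtiler", "tiler")
tee = Gst.ElementFactory.make("tee", "record-tee")

# Permanent branch so the tee always has a running consumer,
# even when no recording is active.
fake_queue = Gst.ElementFactory.make("queue", "fake-queue")
fakesink = Gst.ElementFactory.make("fakesink", "fake-sink")
fakesink.set_property("sync", False)

for element in (tiler, tee, fake_queue, fakesink):
    pipeline.add(element)

tiler.link(tee)           # the tee sits directly after the tiler
tee.link(fake_queue)      # tee -> queue -> fakesink (always present)
fake_queue.link(fakesink)
# The display branch (e.g. nvvideoconvert ! nvdsosd ! sink) and the dynamic
# recording branch hook onto the same tee when needed.
```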

Hi @jasonpgf2a, I’m using deepstream-4.0.2 and hit the same problem when periodically adding or deleting sources in the DeepStream pipeline at runtime. After 40+ adds and 40+ deletes there is a CUDA failure (memory leak). Have you solved the problem, and do you know whether DeepStream 5.0 solves the memory leak? Thank you!

Hi @weiweifu, I solved the problem by just having a python control script.
In the python script I launch my deepstream app and then continually monitor free memory on the system. When it gets down below 10% I just restart the deepstream app.

If you have engine files configured then the restart is very quick.
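
A rough sketch of that kind of watchdog, for illustration only (the deepstream-app command line and the psutil dependency are assumptions, not the poster’s actual code):

```python
import subprocess
import time
import psutil

DS_CMD = ["./deepstream-app", "-c", "app_config.txt"]  # hypothetical command
MIN_FREE_PCT = 10.0  # restart once free memory drops below this

def launch():
    return subprocess.Popen(DS_CMD)

proc = launch()
while True:
    time.sleep(5)
    free_pct = 100.0 - psutil.virtual_memory().percent
    if free_pct < MIN_FREE_PCT or proc.poll() is not None:
        # Free memory is too low (or the app exited): restart it.
        if proc.poll() is None:
            proc.terminate()
            try:
                proc.wait(timeout=30)
            except subprocess.TimeoutExpired:
                proc.kill()
        proc = launch()
```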

I have also noticed that in DS5.0dp the memory leaks are far less severe. I still have a sneaking suspicion that there is a small leak, but it’s small enough not to worry about for now. ;-)
I have not done a detailed analysis with valgrind - don’t have a desktop Ubuntu system to play with at the moment.

Thank you so much! It really helps a lot! Have a good day!

Thank you for sharing your knowledge!

I did it and it worked.
But I am also trying to save the audio and for some reason the branch is hanging.

Hi jasonpgf2a, we got the dynamic tee working based on your recommendations here. However, today, the message forwarding of the EOS stopped working - have you seen this, and do you have any idea what might be preventing the bus callback from being called?

Also, without moving to DS 5.0 and using the smart recording feature, do you have any suggestions on how to achieve some type of buffering so the dynamic recording actually includes a few seconds from before the moment it gets triggered?

Thanks again for all your help!

Re: EOS not working - I haven’t seen that problem on DS4 or 5.0.

Re: the smart record buffer. Print out a pipeline graph and you will see how NVIDIA do it. It looks to be just a normal GStreamer queue with appropriate settings.
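
For illustration, one way to approximate a few seconds of pre-trigger buffering with a plain GStreamer queue; the exact properties NVIDIA use should be confirmed from the pipeline graph, so treat these settings as assumptions:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PRE_RECORD_SECONDS = 5  # illustrative value

buffer_queue = Gst.ElementFactory.make("queue", "pre-record-queue")
# Let the queue hold up to ~N seconds of data...
buffer_queue.set_property("max-size-time", PRE_RECORD_SECONDS * Gst.SECOND)
buffer_queue.set_property("max-size-buffers", 0)  # no buffer-count limit
buffer_queue.set_property("max-size-bytes", 0)    # no byte limit
# ...and drop the oldest buffers instead of blocking once it is full,
# so it always contains roughly the last N seconds.
buffer_queue.set_property("leaky", 2)             # 2 = downstream (old buffers)
# For the queue to actually accumulate, its downstream side has to be held
# back (e.g. a blocked pad) until the recording branch is attached.
```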

Thanks once again for your feedback!

We got past the EOS issue - the code had been tweaked in a way that changed where the message forwarding was set, which caused the issue. However, now we’re running into this issue when re-initiating a new recording after the initial one got triggered and persisted to an mp4 file:

Cuda failure: status=700
nvbufsurface: Error(-1) in releasing cuda memory

Have you run across these errors before? Basically, a detection triggers a recording and it completes successfully (the mp4 is playable), and then another detection triggers a recording, but now the errors above come up.

I haven’t seen that error either. Are you unlinking the pipeline - setting the state and then removing all the dynamic elements?

Yes, we set state of each element, remove them and then de-reference the objects.

It was the way the dynamic elements were being created and cleaned up that caused the issue - it’s been resolved.
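
For reference, a sketch of a typical teardown order for such a dynamic branch (element names are illustrative, and the EOS handling that finalises the mp4 is assumed to have happened already):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def remove_recording_branch(pipeline, tee, tee_src_pad, branch_elements):
    """Detach and dispose of a tee -> queue -> encoder -> ... -> filesink branch."""
    # 1. Detach from the tee so no new data flows into the branch.
    tee_src_pad.unlink(branch_elements[0].get_static_pad("sink"))
    tee.release_request_pad(tee_src_pad)

    # 2. Stop the branch elements before touching them further.
    for element in branch_elements:
        element.set_state(Gst.State.NULL)

    # 3. Remove them from the pipeline; this drops the pipeline's reference,
    #    so dropping your own references then releases the elements.
    for element in branch_elements:
        pipeline.remove(element)
```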

Another question on this - when the video streaming source ends abruptly (e.g. the camera stops streaming, or the end of a video file in testing), the encoder throws an error and we’re not able to finalize the recording and clean up properly. How did you manage to handle this gracefully if it happens while you’re in the middle of recording? With the above approach, it bombs out with encoder errors on the bus (after the fact) and the file is not playable.

Because deepstream 4 had pretty bad memory leaks when dynamically adding/removing encoders I created a wrapper python program that would use the subprocess module to launch the deepstream app and monitor it. It would check available memory and whenever it got below 10% the python controller app would restart the deepstream app.

Likewise if the deepstream app crashes for any reason the python app will know and can restart it.

What I have found with deepstream 5.0 is that if 1 source of 4 (for example) dies, a warning message is issued but the pipeline continues to run. If another camera dies, same thing - the pipeline continues to run. But when that last camera dies, the deepstream app exits. My python controller will attempt to restart it. If this fails 3 times in quick succession I send an error message to the user (my apps are controlled by a mobile phone app).

So because of this I have never bothered with looking into runtime source addition/deletion. If I were to add a new source, my python controller simply stops the deepstream app, configures the new source, then starts the deepstream app. It works out very simple and does not convolute your deepstream app with 20 additional probes and all the nonsense required for runtime source addition/removal! ;-)
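
A sketch of that “restart, and give up after three quick failures” behaviour; notify_user() and the command line are hypothetical placeholders, not the poster’s actual code:

```python
import subprocess
import time

DS_CMD = ["./deepstream-app", "-c", "app_config.txt"]  # hypothetical command
MAX_QUICK_FAILURES = 3
QUICK_FAILURE_WINDOW = 60  # seconds: exits faster than this count as "quick"

def notify_user(message):
    print("ALERT:", message)  # placeholder for e.g. a push to the mobile app

failures = 0
while failures < MAX_QUICK_FAILURES:
    started = time.monotonic()
    proc = subprocess.Popen(DS_CMD)
    proc.wait()  # blocks until the deepstream app exits or crashes
    if time.monotonic() - started < QUICK_FAILURE_WINDOW:
        failures += 1
    else:
        failures = 0  # it ran for a while; reset the counter

notify_user("deepstream app failed %d times in quick succession" % failures)
```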

I have just tested this by unplugging the cameras. When an RTSP camera fails in real life it may not be so graceful, but I’ve never seen it happen yet. I have been using Hikvision cameras.

Hi jasonpgf2a, I have a question.

I’m doing work based on your sample source.

According to this topic, memory leaks occur when elements are dynamically removed or added.

I have to use version 4.0 for internal reasons, but can’t I solve the memory leak without migrating to 5.0? Or is there another solution?

Sorry, not sure what you mean? Do you have to use 4 or 5? If 5, I would just use smart record.
If 4, then you’ll have to manipulate the pipeline dynamically yourself.

Look at the previous comments.

I have to proceed with development based on version 4.0, so I wonder if you have found a solution to that memory leak.

No, sorry. There is no solution. NVIDIA mentioned to me at the time that the issue was in the v4l2 code, which NVIDIA is not in control of.

I would still use a wrapper program as it allows you to monitor the deepstream app and restart it if there are issues.

So if you are not dynamically adding and removing the recording elements too much it might not be such a big issue either.

You could also test keeping the encoder running full time and only dynamically adding/removing the muxer and filesink.
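
A sketch of that idea, assuming a permanent tee -> queue -> nvvideoconvert -> nvv4l2h264enc -> h264parse branch where only the muxer/filesink pair is swapped per recording (element names and the omitted EOS handling are assumptions):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def start_recording(pipeline, parser, filename):
    # The encoder branch up to `parser` (h264parse) stays in the pipeline
    # permanently; only the muxer and filesink are created per recording.
    muxer = Gst.ElementFactory.make("qtmux", None)
    sink = Gst.ElementFactory.make("filesink", None)
    sink.set_property("location", filename)
    pipeline.add(muxer)
    pipeline.add(sink)
    parser.link(muxer)
    muxer.link(sink)
    muxer.sync_state_with_parent()
    sink.sync_state_with_parent()
    return muxer, sink

def stop_recording(pipeline, parser, muxer, sink):
    # An EOS should be pushed through the muxer before this, so the mp4
    # gets a proper index; that step is omitted here for brevity.
    parser.unlink(muxer)
    for element in (sink, muxer):
        element.set_state(Gst.State.NULL)
        pipeline.remove(element)
```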