• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• TensorRT Version: 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04
• Issue Type (questions, new requirements, bugs): questions
Hello,
I would like to ask for help in constructing a pipeline.
The main difficulty is that the input is a multifilesrc element, and another process moves files into the folder that multifilesrc reads from (this is an external constraint; we sadly cannot change it). The input has a variable frame rate between 7 and 30 FPS, but from the code we do not know whether we are getting 7 FPS or 30 FPS images (though the timestamps are available through the image metadata).
We have come up with two courses of action, but both are unclear at the moment:
Run the input at the same FPS as the camera: this would work if DeepStream somehow has a way of handling this variable FPS and can determine the needed framerate. Is there any way to do this?
Run the input at the maximum FPS possible and, since we have the timestamps, handle timing ourselves. The problem is that when we run out of images in the input folder, we get an EOS event. We can catch it and, instead of terminating the pipeline, pause it for a few seconds and then resume (this is implemented at the moment). But then the output video feed we generate has the same sporadic behavior: it runs at maximum FPS, then misses a few seconds, and so on, making it largely useless for debugging/feedback purposes. What would be a good way of solving this issue and making the output feed continuous?
Framerate handling in GStreamer can be a bit tricky. You need to specify the framerate in the caps, but if you set a 30 fps framerate and your element only produces 10 fps, the pipeline won't fail. However, for nvstreammux it is important that all sink pads have the same framerate to ensure smooth operation. Here are my suggestions (a pipeline sketch illustrating them follows the list):
1. Set the multifilesrc branch to match the maximum framerate expected on the video branch. This avoids delays waiting for a buffer when the video is running at the maximum framerate.
2. Add dropping queues to both branches to prevent over-buffering: queue max-size-buffers=1 leaky=downstream.
3. To avoid multifilesrc reaching EOS, set the loop=true property.
4. Optionally, add videorate to both branches to ensure a constant framerate. This element will duplicate buffers to match the framerate specified in the output caps.
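A minimal sketch of the multifilesrc branch with these suggestions applied (the frame location pattern, JPEG input format, resolution, and nvstreammux settings are assumptions; fakesink stands in for the rest of the DeepStream branch; adjust everything to your setup):

gst-launch-1.0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! fakesink \
  multifilesrc location=frames/%05d.jpg loop=true caps="image/jpeg,framerate=30/1" ! \
  jpegdec ! videorate ! video/x-raw,framerate=30/1 ! \
  queue max-size-buffers=1 leaky=downstream ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0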
I have some questions regarding your answers:
2. I cannot drop frames, as each of them is important, so this does not seem feasible.
3. All frames are to be used only once (we will delete them afterwards), so looping is sadly not useful.
4. Adding videorate to the output branch sounds good, but when we pause the pipeline because the input hit EOS and we need to wait for new input images, my understanding is that the whole pipeline stops as the Gst.State.NULL state change propagates through the chain (I set this state while waiting after an EOS).
Is there any way to decouple the output branch/feed from this pipeline, and add a large enough queue so that even when we pause to wait for new input, the output can still take the elements in the queue, oversample them with videorate (in case of 7 fps input), and provide a continuous stream?
Does this mean that we should read the data in a way to make sure that we do not run out of input frames (so we manipulate the input framerate with respect to the remaining input frames)?
Will the rest of the pipeline run with such arbitrary, ever-changing FPS?
I agree that using appsrc in your case and handling all the read logic for your specific scenario seems like the best approach to gain more control over each of the buffers you are sending.
Regarding the discussion with Miguel, is there any way to decouple the output branch/feed from this pipeline, and add a large enough queue so that even when we pause to wait for new input, the output can still take the elements in the queue, oversample them with videorate (in case of 7 fps input), and provide a continuous stream? [I think this would be the ideal solution.]
Yes, there is a way. You could achieve this with interpipes, queues, and videorate:
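As a rough sketch of what that could look like split into three pipelines (the file pattern, caps, inference config path, and sinks below are assumptions; interpipesink/interpipesrc come from the gst-interpipe plugin):

# 1) Reader pipeline: publishes decoded frames and does not forward EOS
gst-launch-1.0 multifilesrc location=frames/%05d.jpg caps="image/jpeg,framerate=7/1" ! \
  jpegdec ! queue ! interpipesink name=cam forward-eos=false sync=false

# 2) DeepStream pipeline: consumes the shared frames
gst-launch-1.0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer.txt ! fakesink \
  interpipesrc listen-to=cam is-live=true format=time ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM)' ! mux.sink_0

# 3) Display/feedback pipeline: upsamples to a constant framerate with videorate
gst-launch-1.0 interpipesrc listen-to=cam is-live=true format=time ! \
  queue ! videorate ! video/x-raw,framerate=30/1 ! \
  videoconvert ! autovideosink sync=false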
This setup will work at 7 fps or 30 fps, and the DeepStream and video pipelines won't receive EOS when multifilesrc ends. The only issue is that you would need to manually monitor the multifilesrc pipeline for EOS and restart or pause the pipelines until more files are available.
Another option, which is likely simpler than building the appsrc logic from scratch, is to modify multifilesrc to include your custom logic. You can add a timeout to wait for new images before sending EOS, and then use videorate on the display side to maintain 30fps by duplicating buffers.
Ahh, this interpipesink/interpipesrc pair looks exactly like what we’d need, thank you!
We have already been doing this, so this is not a problem.
How can I modify multifilesrc? At the moment I have only attached an event probe to its output pad, and I use that to intercept EOS events, wait for new files, and restart the pipeline.
Check out the version that matches the GStreamer version on your board.
(Optional) Modify meson.build to build only gst-plugins-good and multifilesrc.
Modify the file subprojects/gst-plugins-good/gst/multifile/gstmultifilesrc.c to add your changes.
(Optional) Rename the plugin so that the original plugin remains available. This is a simple replace-all of multi_file_src, multifilesrc, MULTI_FILE_SRC, and MultiFileSrc with a new name. A sketch of the full sequence follows.
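A possible sequence for these steps (a sketch only; the version tag and build options are assumptions, so match them to what gst-launch-1.0 --version reports on your system):

# Get the GStreamer mono-repository and check out the installed version
git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git
cd gstreamer
git checkout 1.20.3   # use the tag matching your installed GStreamer

# Edit subprojects/gst-plugins-good/gst/multifile/gstmultifilesrc.c here,
# e.g. wait/retry for a while before pushing EOS when no new file is found

# Build only the "good" plugin set to keep the build small
meson setup build -Dauto_features=disabled -Dgood=enabled
ninja -C build

# Make GStreamer pick up the freshly built plugin
export GST_PLUGIN_PATH=$PWD/build/subprojects/gst-plugins-good/gst/multifile:$GST_PLUGIN_PATH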
Thank you very much for your detailed and kind help, Miguel!
I am going to accept your interpipe-related answer as the solution and go ahead with the development.
Hi @miguel.taylor,
I am sorry for the late reply, but could you tell me:
Why do we even need the capssetter element? I was under the impression that the videorate by itself will upsample the stream for the display sub-pipeline.
Why did you put the capssetter before the DS part of the pipeline?
Why don’t we need queues after each interpipesrc?
Also, we have been starting to use appsrc as you have recommended.
This is optional. I’m not sure if the change in framerate in multifilesrc will trigger a caps event, and currently, in DeepStream, nvstreammux crashes if it receives a caps event during execution. If the pipeline works without it, you can remove it.
Same as before, it’s a safeguard to catch any caps event before it reaches DeepStream.
GStreamer element pads will queue buffers up to a certain point, even without an explicit queue. The main purpose of the queue I added was to decouple multifilesrc from the rest of the pipeline. You can always add more queues if you like; queues typically improve performance by increasing parallelism in the pipeline, and they rarely cause issues since they are simple. The only trade-off is that they tend to increase buffer latency.
I have tried moving these elements around, omitting the capssetter, and using allow-renegotiation=true and false for interpipesrc, but it does not work. I have three test results: sometimes I get error -5 (data stream error), sometimes -4 (negotiation error), and sometimes -4 in the first pipeline, with the first pipeline working correctly afterwards…
Do you have any advice on how to continue, what to look for and how to debug this?
Thank you!
EDIT:
As you can see, I do not need the first pipeline to run at a fixed FPS; I just want the second pipeline to start duplicating frames and throttling the input so that it runs at a fixed 20 fps.
The error -5 can be triggered by a caps change before DeepStream.
Some changes you can try on your pipeline (a sketch with these changes applied follows the list):
The capssetter is not needed there, you can remove it.
Add queues (default configuration) before and after videorate for stability.
You may be missing an nvvideoconvert between videorate and nvh264enc, which could be triggering the negotiation error. It could also be elsewhere in the pipeline, but I can’t tell without seeing the full pipeline.
Try with allow-renegotiation=false in interpipesrc.
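Put together, the display/encode branch could look roughly like this (a sketch only; the interpipesrc name, the 20 fps target, and the muxer/sink are assumptions based on the discussion above):

gst-launch-1.0 interpipesrc listen-to=cam is-live=true format=time allow-renegotiation=false ! \
  queue ! videorate ! video/x-raw,framerate=20/1 ! queue ! \
  nvvideoconvert ! nvh264enc ! h264parse ! matroskamux ! \
  filesink location=debug_output.mkv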
Would it help if I could somehow directly send you the pipeline?
Also, is there a way of reliably debugging pipelines that reach this level of complexity, and upon adding something new, they stop working with a quite cryptic message, like “-5, Internal data stream error”? I’d like to learn how to do that instead of always nagging you for help :D
Yes, having the full pipeline on my side for quick testing would be helpful. Pipeline debugging typically requires some trial and error, and I can suggest the changes I usually make when troubleshooting these kinds of issues. However, without access to the full pipeline, I can’t guarantee that it will work.
First things first, we usually extract the pipeline from the application to debug it using gst-launch-1.0. If you’re already using gst-launch, you can ignore this, but if not, I highly recommend moving your pipeline to the gst-launch syntax for debugging.
After that, the first step would be to enable GST_DEBUG at the ‘WARNING’ level to check if any element is reporting an error that can be fixed.
GST_DEBUG=2 gst-launch-1.0 ...
We have a wiki page that provides an in-depth guide on GStreamer debugging if you’re interested in more advanced tools: