I want to use the TX1 as a MediaBox application with the following functionalities:
Camera Preview on HDMI
Camera Video Recording on File
Recorded video playback
Streaming over Ethernet and WiFi.
Currently I am using GStreamer 1.0 (v1.2.4) pipelines to do all of this. I can run a pipeline for an individual function, or a pipeline combining two or more functionalities.
But my requirement is to switch any of these functionalities on or off at runtime, triggered by an external event.
For that, I have to add or remove pipeline elements dynamically.
I searched the net for the GStreamer SDK, Python GStreamer, Qt GStreamer, and OpenCV with GStreamer.
Which of these would be easiest to learn and most suitable for my application? Or is there any other, easier way to build a dynamic pipeline driven by external events?
I looked at a similar use case (input preview, streaming, recording) about half a year ago, so I can give you a few pointers to what I found out; hopefully it is useful to you too.
Generally I found the written documentation for GStreamer sometimes hard to process, because there are so many different use cases and software versions; this contrasts with the communication over the mailing lists, which is at times cut short. But for this topic there is actually some good documentation: [0]
There is also a nice repository of videos from GStreamer conferences: [1]
Including videos about dynamic pipelines, e.g.: [2]
And there is a nice blog post from a GStreamer developer: [3]
In general I found that working with dynamic pipelines requires listening with pad probes and calling a callback function at the right time to modify the pipeline. The general idea is that you must make sure you don’t modify a part of the pipeline while data is being exchanged in that element. [3] There are EVENT pad probes and (blocking) BUFFER pad probes that can be used, but it is better to use the EVENT pad probes (see [4]).
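As a rough sketch of that pattern (in Python with the PyGObject bindings; the element names and the use of an IDLE probe are my own illustrative assumptions for brevity, not taken from the posts above — the EVENT-probe variant mentioned there follows the same shape):

```python
# Sketch of the pattern from [3]/[4]: block a pad with a probe, then do the
# actual pipeline surgery inside the callback once the pad is idle.
# Requires PyGObject + GStreamer; the element names are illustrative.

PIPELINE_DESC = "videotestsrc name=src ! queue name=q ! fakesink name=sink"

def run():
    import gi  # imported inside run() so the sketch reads standalone
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.parse_launch(PIPELINE_DESC)

    def on_idle(pad, info):
        # The pad is blocked/idle here, so no data is flowing through it:
        # it is now safe to unlink downstream, swap elements, and relink.
        return Gst.PadProbeReturn.REMOVE  # drop the probe and unblock

    queue = pipeline.get_by_name("q")
    queue.get_static_pad("src").add_probe(Gst.PadProbeType.IDLE, on_idle)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()
```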
If you don’t want to implement all that on your own, you might consider Interpipes, but I don’t have any experience with that (see [5] / [6]).
Indeed, GStreamer dynamic pipelines are a very tricky business. If you don’t get it right, you can start getting odd effects, or even worse, a complete stall of the whole pipeline.
At RidgeRun we have been working with these kinds of requirements for a long time now. To achieve dynamic pipeline handling, we created an open-source project which you can give a try. It’s called GstInterpipe, and it achieves precisely that: independent pipeline control. It was designed with simple usage in mind.
Give it a try and don’t hesitate to ask if you need help.
I’m very interested in interpipes as well, but is there any example code, or more tutorials than the wiki?
I’m writing an application for changing between two camera streams (plus maybe an option to change resolution).
But can you give an example of how to use it in a GStreamer pipeline?
For example, a camera preview pipeline is running. Can I use this plugin to switch recording on/off on a user event?
I think the problem is that when you use two separate gst-launch commands, they run in separate “contexts” and don’t know about each other. Therefore you must somehow add both pipelines to the same superordinate pipeline.
To do this you could either launch the two pipelines using the GStreamer API (e.g. in C or Python) and make sure to add them to the same overall Pipeline. Simplified example in Python:
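(The original snippet did not survive in this thread; the following is a reconstruction of the idea, with illustrative element names, using `gst_parse_bin_from_description()` so the parsed parts come back as plain bins.)

```python
# Reconstruction of the idea: parse the two partial pipelines as bins and
# add both to one top-level pipeline, so they share a single context.
# Requires PyGObject + GStreamer + the interpipe plugin; names illustrative.

SENDER_DESC = "videotestsrc ! interpipesink name=camera"
RECEIVER_DESC = "interpipesrc listen-to=camera ! autovideosink"

def run():
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    pipeline = Gst.Pipeline.new("top-level")

    # gst_parse_bin_from_description() returns a plain GstBin, which can be
    # added to a pipeline directly (unlike gst_parse_launch(), which builds
    # a top-level pipeline of its own).
    sender = Gst.parse_bin_from_description(SENDER_DESC, False)
    receiver = Gst.parse_bin_from_description(RECEIVER_DESC, False)
    pipeline.add(sender)
    pipeline.add(receiver)

    # Note: no linking between the two bins; interpipesink and interpipesrc
    # find each other by the "camera" name at runtime.
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()
```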
Or alternatively, you can use GStreamer Daemon (from RidgeRun as well [1]) to launch the two pipelines.
For example, this is how I can use interpipes with GStreamer Daemon:
You need to first run the GStreamer Daemon in the background.
This is the command I use on my installation (maybe you don’t need the LD_LIBRARY_PATH environment variable):
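The exact commands did not survive here; as an illustration of the gstd workflow (the `gst-client` verbs `pipeline_create`/`pipeline_play`/`pipeline_stop`/`pipeline_delete` come from the GStreamer Daemon project; the pipeline names and descriptions are hypothetical):

```shell
# Start the daemon in the background; clients then create/control pipelines.
gstd &

# Publish a test stream on an interpipe named "camera".
gst-client pipeline_create cam videotestsrc ! interpipesink name=camera
gst-client pipeline_play cam

# An independent display pipeline listening to that stream.
gst-client pipeline_create disp interpipesrc listen-to=camera ! xvimagesink sync=false
gst-client pipeline_play disp

# On an external event, tear down just one pipeline; the other keeps running.
gst-client pipeline_stop disp
gst-client pipeline_delete disp
```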
I am also trying to launch the two pipelines using the GStreamer API in C.
I am new to the GStreamer API approach. I have taken the gst-sdk example for gstreamer-1.0.
Below is my code:
#include <gst/gst.h>

int main (int argc, char *argv[]) {
  GstElement *pipeline, *pipe1, *pipe2;
  GstBus *bus;
  GstMessage *msg;
  GstStateChangeReturn ret;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Create the two partial pipelines; interpipesink publishes the stream
   * under the name "camera" for the interpipesrc to listen to */
  pipe1 = gst_parse_launch ("videotestsrc ! interpipesink name=camera", NULL);
  pipe2 = gst_parse_launch ("interpipesrc name=src listen-to=camera ! autovideosink", NULL);

  /* Create the empty pipeline */
  pipeline = gst_pipeline_new ("test-pipeline");

  if (!pipeline || !pipe1 || !pipe2) {
    g_printerr ("Not all elements could be created.\n");
    return -1;
  }

  /* Build the pipeline */
  gst_bin_add_many (GST_BIN (pipeline), pipe1, pipe2, NULL);
  if (gst_element_link (pipe1, pipe2) != TRUE) {
    g_printerr ("Elements could not be linked.\n");
    gst_object_unref (pipeline);
    return -1;
  }

  /* Start playing */
  ret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
  if (ret == GST_STATE_CHANGE_FAILURE) {
    g_printerr ("Unable to set the pipeline to the playing state.\n");
    gst_object_unref (pipeline);
    return -1;
  }

  /* Wait until error or EOS */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  /* Parse message */
  if (msg != NULL) {
    GError *err;
    gchar *debug_info;

    switch (GST_MESSAGE_TYPE (msg)) {
      case GST_MESSAGE_ERROR:
        gst_message_parse_error (msg, &err, &debug_info);
        g_printerr ("Error received from element %s: %s\n",
            GST_OBJECT_NAME (msg->src), err->message);
        g_printerr ("Debugging information: %s\n",
            debug_info ? debug_info : "none");
        g_clear_error (&err);
        g_free (debug_info);
        break;
      case GST_MESSAGE_EOS:
        g_print ("End-Of-Stream reached.\n");
        break;
      default:
        /* We should not reach here because we only asked for ERRORs and EOS */
        g_printerr ("Unexpected message received.\n");
        break;
    }
    gst_message_unref (msg);
  }

  /* Free resources */
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
For the GStreamer Daemon (gstd) approach:
I ran into an “Out of memory” problem where it stopped after, I think, 10-20 seconds. But I could work around that by using “xvimagesink sync=false” as the video sink. (In order to use xvimagesink, you first need to run “export DISPLAY=:0”.)
Other than that, the pipeline should not stop after 25 seconds. It is not time limited.
Could you try these commands:
About the approach with GStreamer API:
The way I understand interpipes, you do not need to link the two pipelines. Only add them to the overall pipeline and run all of them (pipe1, pipe2, pipeline). This is exactly the advantage of using interpipes: you can just add a new pipeline and connect it with interpipesrc and interpipesink; there is no need to link them.
Thanks for all your replies; they saved me a lot of time.
Yes, gstd is not time limited.
If I add 1080p resolution with autovideosink, it does not stop at 25 seconds.
It may have been a memory issue with the previous pipeline.
I think you are on the right path, what you are looking for is adding bins to a pipeline: Bins
In your case pipe1 and pipe2 are bins, which you want to add to the pipeline. You don’t need the function “gst_bin_add_many()”, since it is for adding elements to a bin, not for adding bins to a pipeline. You don’t need to do this, since the elements were already created, added, and linked inside pipe1 and pipe2 by the “gst_parse_launch()” calls.
Hey, I just found that there are some C examples in the files that you used to compile interpipes. Have a look at “gst-interpipe/tests/check/gst/”.
A relatively simple example is “test_set_caps.c”. It looks like you don’t even need to add the different bins (pipe1, pipe2) to an overall pipeline. Just create them with “gst_parse_launch()” and run them with “gst_element_set_state()”.
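A sketch of that approach (in Python rather than C, and with illustrative pipeline descriptions; the key point is that the two pipelines are never added to a common parent):

```python
# Sketch: two completely separate pipelines, connected only through the
# interpipe name, each started independently.
# Requires PyGObject + GStreamer + the interpipe plugin; names illustrative.

CAPTURE_DESC = "videotestsrc ! interpipesink name=camera"
DISPLAY_DESC = "interpipesrc listen-to=camera ! autovideosink"

def run():
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)
    capture = Gst.parse_launch(CAPTURE_DESC)
    display = Gst.parse_launch(DISPLAY_DESC)

    # Start each pipeline on its own; interpipesrc locates the matching
    # interpipesink by its "listen-to" name at runtime.
    capture.set_state(Gst.State.PLAYING)
    display.set_state(Gst.State.PLAYING)

    # Turning the display branch off later is just a state change:
    #   display.set_state(Gst.State.NULL)
    GLib.MainLoop().run()
```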
Glad to hear you got the test pipelines running. I’m going to add some examples to the project and documentation so users can use them as a reference. The code in tests/check/ is more oriented to test a specific feature and hence may not always be the best application code.
Anyway, now that you have this example working, were you able to get your application running? Did you encounter any other problems?
This only shows one frame and then gets stuck.
So how do I play two separate pipelines in a GStreamer C application?
As I am new to this C application approach, I am facing these difficulties.
But the problem is that the first snapshot is taken OK. If I try to take a snapshot a second time, the image file comes out empty.
So how do I restart m_pipeline1 so that it takes and stores a snapshot every time?