I am currently using a GStreamer pipeline that receives RTSP streams from two different sources. I create a streammux element to batch both streams together and then pass the batched buffer to the pgie element for inference. I want to save an image of each frame before it reaches the pgie element. How should I move forward with this?
Hello @shashank.gupta1,
It’s hard to propose something more complete without knowing your pipeline; however, I would suggest something along the lines of tee → videorate → multifilesink. The tee lets you split your pipeline into your current branch and a new branch that stores files, the videorate reduces the framerate to 1 frame every 10 seconds, and the multifilesink writes the individual images to separate files.
You might also need to add extra elements such as video converters or encoders if you need to change the buffer format before storing.
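For illustration, here is a minimal, self-contained Python sketch of that idea. The videotestsrc front end, autovideosink branch, and JPEG encoder are placeholders (assumptions, not taken from your pipeline); in your case the tee would hang off your existing decode/streammux chain, and you may need nvvideoconvert instead of videoconvert to copy buffers out of NVMM memory before encoding:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Branch 1 keeps the normal processing/display path; branch 2 drops the rate to
# one frame every 10 seconds, encodes it to JPEG and writes numbered files.
pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! tee name=t "
    "t. ! queue ! autovideosink "
    "t. ! queue ! videorate ! video/x-raw,framerate=1/10 ! "
    "videoconvert ! jpegenc ! multifilesink location=frame_%05d.jpg"
)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```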
Would it be possible for you to share your pipeline and a bit more detail on your requirements? We could help you with a pipeline suggestion tailored to your solution.
regards,
Andrew
Embedded Software Engineer at ProventusNova
Hi,
A possible solution is to link to appsink like:
... ! tee ! queue ! nvvideoconvert ! video/x-raw,format=I420 ! appsink
Then, in the appsink callback, decide whether to encode or drop each frame (a minimal sketch follows the links below). Please refer to the samples:
How to turn off auto-capture when gstreamer pipeline runs? only on-demand - #5 by DaneLLL
Starvation (?) of gstreamer threads - #12 by DaneLLL
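For reference, a minimal standalone sketch of that appsink approach. The videotestsrc source, videoconvert, and the keep-every-300th-frame rule are placeholder assumptions; in the real pipeline this branch would hang off the existing tee and use nvvideoconvert as in the snippet above, since the DeepStream buffers live in NVMM memory:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
frame_count = 0

def on_new_sample(sink):
    """Pull each I420 frame and decide here whether to keep or drop it."""
    global frame_count
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    frame_count += 1
    if frame_count % 300 == 0:  # e.g. keep roughly one frame every 10 s at 30 fps
        ok, mapinfo = buf.map(Gst.MapFlags.READ)
        if ok:
            # Raw I420 dump for simplicity; you could JPEG-encode here instead
            with open(f"frame_{frame_count}.i420", "wb") as f:
                f.write(mapinfo.data)
            buf.unmap(mapinfo)
    return Gst.FlowReturn.OK

pipeline = Gst.parse_launch(
    "videotestsrc is-live=true ! videoconvert ! video/x-raw,format=I420 ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)
pipeline.get_by_name("sink").connect("new-sample", on_new_sample)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```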
This is what my pipeline flow looks like:
streammux.link(pgie)
pgie.link(nvstreamdemux)
demuxsrcpad.link(queue_tracker_sinkpad)
queue_tracker.link(tracker)
tracker.link(nvvidconv)
nvvidconv.link(capsfilter1)
capsfilter1.link(nvosd)
nvosd.link(nvvidconv1)
nvvidconv1.link(capsfilter)
capsfilter.link(tee)
# RTSP Branch
tee.link(queue_rtsp)
queue_rtsp.link(encoder_rtsp)
encoder_rtsp.link(h264parser_rtsp)
h264parser_rtsp.link(rtppay)
rtppay.link(queue_udpsink)
queue_udpsink.link(udpsink)
# file branch
tee.link(queue_file)
queue_file.link(encoder_file)
encoder_file.link(h264parser_file)
h264parser_file.link(muxer)
I am saving an mp4 video periodically in my pipeline, and all of this is working fine. Now I want to save an image of a frame from each of the 2 RTSP streams I am receiving, before it goes through inference in the pgie element. I want to use the .get_static_pad() method and build a solution around that.
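For context, a hedged sketch of what that could look like: a buffer probe attached with .get_static_pad() on the pgie sink pad (i.e. right before inference), following the pattern used in the deepstream-imagedata-multistream sample. Note that pyds.get_nvds_buf_surface() only works on RGBA buffers, so you may need an nvvideoconvert plus an RGBA capsfilter upstream of the probe point, and on Jetson with recent DeepStream versions you also need pyds.unmap_nvds_buf_surface() after copying. Treat this as a starting point, not a drop-in solution:

```python
import numpy as np
import cv2
import pyds
from gi.repository import Gst

def save_frame_probe(pad, info, u_data):
    """Copy each batched frame out of the buffer and write it as a JPEG."""
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Requires RGBA buffers at this pad (see note above); in practice you
        # would also only save selected frames rather than every frame.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame = cv2.cvtColor(np.array(n_frame, copy=True, order="C"), cv2.COLOR_RGBA2BGR)
        cv2.imwrite(
            f"source{frame_meta.pad_index}_frame{frame_meta.frame_num}.jpg", frame
        )
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach the probe before inference happens
pgie_sink_pad = pgie.get_static_pad("sink")
pgie_sink_pad.add_probe(Gst.PadProbeType.BUFFER, save_frame_probe, 0)
```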
Hello @shashank.gupta1,
Thanks for sharing extra details.
OK, now I see: you are manually creating and linking the elements.
I assume you used a DeepStream sample app as a base.
Would it be possible for you to share the complete code to your app?
regards,
Andrew
Embedded Software Engineer at ProventusNova
Thank you for your reply. I was able to get the frame data from a probe on the nvosd sink pad and save the image.
Hello @shashank.gupta1,
That is great, thanks for keeping us posted.
Please do not hesitate to reach out if you require further assistance.
regards,
Andrew
Embedded Software Engineer at ProventusNova