How to make a GStreamer pipeline work using gi.repository in Python 3 on a headless server?

Please provide complete information as applicable to your setup.

• **Hardware Platform (Jetson / GPU)** Jetson
• **DeepStream Version** 6.0
• **JetPack Version (valid for Jetson only)** 4.6.3
• **TensorRT Version** 8.2.1.32
• **NVIDIA GPU Driver Version (valid for GPU only)**
• **Issue Type (questions, new requirements, bugs)** question
• **How to reproduce the issue?** (This is for bugs. Include which sample app is used, the configuration file contents, the command line used and other details for reproducing.)
• **Requirement details** (This is for new requirements. Include the module name, i.e. for which plugin or sample application, and the function description.) GLib, Gst

I’m trying to create a GStreamer pipeline using gi.repository from Python 3 on a Jetson Nano connected over SSH. I want to do it this way instead of using the deepstream-app or gst-launch-1.0 applications because in the future I want to use NVIDIA modules and run a DNN to detect objects in the video stream, and I think this approach will make it easier to extract the metadata from the DNN inference.

Looking around I found this pipeline:

v4l2src device=/dev/video0 ! video/x-raw ! videoconvert ! x264enc ! h264parse config-interval=3 ! qtmux ! filesink location=video.mp4

It works perfectly and creates a .mp4 file from the video of my USB webcam when I run it with the gst-launch-1.0 application or with Gst.parse_launch() from Gst in gi.repository.
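For reference, running that same description from Python only takes handing the string to Gst.parse_launch(); a minimal sketch of what I do, with the same pipeline string as above:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Build the whole pipeline from the gst-launch-1.0 description string
pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! video/x-raw ! videoconvert ! "
    "x264enc ! h264parse config-interval=3 ! qtmux ! filesink location=video.mp4"
)
pipeline.set_state(Gst.State.PLAYING)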

But when I try to build the pipeline by creating each element individually and linking them in Python code, I can’t make it work. It creates a .mp4 file of around 1 MB, although I can’t play it. The Video app on Windows 10 returns the error 0xc00d36c4, and VLC shows nothing. It may be something related to the muxer element (qtmux or mp4mux), because I have seen another example that links the sink of the muxer to the source of the previous element in the pipeline. It has something to do with dynamic pads, but I don’t really understand when they are necessary.
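For context, qtmux/mp4mux do not expose always-present sink pads; they use request pads (pad template video_%u). Element.link() can normally request one automatically, but linking the muxer explicitly, as in the example I mentioned, would look roughly like this (a sketch, assuming the parser and muxer elements created below):

# Request a video sink pad from the muxer (pad template "video_%u" on qtmux/mp4mux)
mux_sink = muxer.get_request_pad("video_%u")
parser_src = parser.get_static_pad("src")
# Pad.link() returns Gst.PadLinkReturn.OK on success
if parser_src.link(mux_sink) != Gst.PadLinkReturn.OK:
    raise RuntimeError("Could not link parser to muxer")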

I will leave parts of my code here so you can guide me to where the problem is.

The imports.

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
import time
import pyds

The creation of the pipeline and its elements.

Gst.init(None)
player = Gst.Pipeline.new("player")
print("Pipeline created")

v4l2Source = Gst.ElementFactory.make("v4l2src", "v4l2Source")
v4l2Source.set_property("device", "/dev/video0")
print("USB cam source created")

caps = Gst.Caps.from_string("video/x-raw")
filter = Gst.ElementFactory.make("capsfilter", "filter")
filter.set_property("caps", caps)

videoconvert = Gst.ElementFactory.make("videoconvert", "converter")

encoder = Gst.ElementFactory.make("x264enc", "venc")

parser = Gst.ElementFactory.make("h264parse", "parser")
parser.set_property("config-interval", 3)

muxer = Gst.ElementFactory.make("mp4mux", "muxer")

filesink = Gst.ElementFactory.make("filesink", "sinker")
filesink.set_property("location", "pvideo.mp4")

print("All elements created")

for i, ele in enumerate([v4l2Source, filter, videoconvert, encoder, parser, muxer, filesink]):
    print(f"Element {i} added to the pipeline")
    player.add(ele)
print("All elements added to the pipeline")

The linking of every element. The error is probably in this part. As I mentioned before, the pipeline works if executed with gst-launch-1.0 or Gst.parse_launch().

v4l2Source.link(filter)
filter.link(videoconvert)
videoconvert.link(encoder)
encoder.link(parser)
parser.link(muxer)
muxer.link(filesink)
print("All elements linked")
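As an aside, Element.link() returns False on failure, so checking the return values would show immediately whether any of these links fail; a sketch using the same element names:

elements = [v4l2Source, filter, videoconvert, encoder, parser, muxer, filesink]
for src, dst in zip(elements, elements[1:]):
    # link() returns False when the elements could not be connected
    if not src.link(dst):
        raise RuntimeError(f"Failed to link {src.get_name()} to {dst.get_name()}")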

And finally, starting and stopping the pipeline. It records around 20 seconds of footage before stopping.

player.set_state(Gst.State.PLAYING)
print("Recording")
time.sleep(20)
print("Sending EOS")
player.send_event(Gst.Event.new_eos())
print("Waiting for EOS")

Could you refer to our source code below?
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

I have read them. My code is based on deepstream_test_1_usb.py, but there aren’t many examples that use a filesink element to create a .mp4 file.

My objective is to save the result of that .py file into a .mp4 file instead of showing it on the display, but the code above always produces corrupt .mp4 files.

You can refer to our FAQ below to save the mp4 file with Python:
https://forums.developer.nvidia.com/t/deepstream-sdk-faq/80236/30

I found the problem.
It seems I need to wait for the EOS message from the bus before closing the application. That’s why it sometimes created a .mp4 file with data, but the file was corrupted.

Following the code in this post, you just need to add the following block at the end of the code and it should work.

bus = player.get_bus()
if bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS) is None:
    raise RuntimeError("Pipeline did not receive EOS message within the timeout period")

if player.set_state(Gst.State.NULL) == Gst.StateChangeReturn.FAILURE:
    raise ValueError("Unable to stop the pipeline")
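For reference, the deepstream_python_apps samples handle the same shutdown with a GLib.MainLoop and a bus watch instead of timed_pop_filtered(); a minimal sketch of that pattern, assuming the same player pipeline as above:

def on_message(bus, message, loop):
    # Quit the main loop on EOS or error so the pipeline can be stopped safely
    if message.type in (Gst.MessageType.EOS, Gst.MessageType.ERROR):
        loop.quit()

loop = GLib.MainLoop()
bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message, loop)
player.send_event(Gst.Event.new_eos())
loop.run()  # blocks until on_message() calls loop.quit()
player.set_state(Gst.State.NULL)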
