Display video from a jetson-inference GStreamer pipeline in a PyQt5 GUI

I solved this problem.

This task has two parts:

  1. Displaying video from a GStreamer pipeline in a Qt window
  2. Transferring video from jetson-inference to an external GStreamer pipeline

Solution:

  1. This code plays a GStreamer test video (videotestsrc) in a 640x480 window.

     import sys
     from threading import Thread
    
     from PyQt5.QtCore import *
     from PyQt5.QtGui import *
     from PyQt5.QtWidgets import *
    
     import jetson.utils  # not used in this minimal example; needed for part 2
    
     import gi
     gi.require_version('Gst', '1.0')
     gi.require_version('GstVideo', '1.0')
     from gi.repository import Gst, GObject, GstVideo
     GObject.threads_init()  # a no-op on modern PyGObject; kept for older versions
     Gst.init(None)
    
    
     class MainWindow(QMainWindow):   
         def __init__(self):
             QMainWindow.__init__(self)
             self.setAttribute(Qt.WA_AcceptTouchEvents, True)
             self.setGeometry(0,0,640,480)
             self.videowidget = VideoWidget(parent=self)
            
     class VideoWidget(QWidget):   
         def __init__(self, parent):
             QWidget.__init__(self, parent)  # call the actual base class, not QMainWindow
             self.windowId = self.winId()
             self.setGeometry(0,0,640,480)
    
         def setup_pipeline(self):           
             # Build the pipeline directly from its textual description
             pipeline_str = "videotestsrc ! video/x-raw,width=640,height=480 ! videoconvert ! xvimagesink"
             self.pipeline = Gst.parse_launch(pipeline_str)
             bus = self.pipeline.get_bus()
             bus.add_signal_watch()
             bus.enable_sync_message_emission()
             bus.connect('sync-message::element', self.on_sync_message)
      
         def on_sync_message(self, bus, msg):
             message_name = msg.get_structure().get_name()
             print(message_name)
             if message_name == 'prepare-window-handle':
                 win_id = self.windowId
                 assert win_id
                 imagesink = msg.src
                 imagesink.set_window_handle(win_id)
                 
         def start_pipeline(self):
             self.pipeline.set_state(Gst.State.PLAYING)
    
     app = QApplication([])
     window = MainWindow()
     window.videowidget.setup_pipeline()
     window.videowidget.start_pipeline()
     window.show()
     sys.exit(app.exec_())
    

2.1 To link two pipelines within one program, you can use the intervideosink/intervideosrc GStreamer elements (udpsink/udpsrc might also work, but I didn’t check). As far as I understand, passing frames through intervideosink/intervideosrc does not cost extra encoding/decoding resources, unlike UDP or RTSP streaming.

Accordingly, the pipeline string in the previous example becomes:

    intervideosrc channel=v0 ! xvimagesink

Pay attention to the “channel” property: it pairs a particular intervideosink with its intervideosrc and can contain letters and digits.
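The pairing can be sanity-checked without Qt or jetson-inference at all. Note that intervideosink/intervideosrc only share frames within a single process, so both halves go into one gst-launch-1.0 invocation (a sketch; xvimagesink assumes an X display is available):

```shell
# Producer branch: test pattern -> intervideosink on channel "v0".
# Consumer branch: intervideosrc on the same channel -> on-screen window.
# Both branches must run in the same process for intervideo* to connect.
gst-launch-1.0 \
    videotestsrc ! video/x-raw,width=640,height=480 ! intervideosink channel=v0 \
    intervideosrc channel=v0 ! videoconvert ! xvimagesink
```

If the two channel names don’t match, the link is silently broken and the consumer typically shows only blank frames, which makes mismatched channels easy to spot.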

2.2. In turn, jetson-inference has to create an output pipeline that ends in intervideosink. Stock jetson-inference, however, only supports the following output streams: RTP stream, video file, image file, image sequence, and OpenGL window.

I modified the source code and added another output type, “intervideo://0”, which creates a pipeline ending in intervideosink.

Creating the video output then looks like this:

    jetson.utils.videoOutput('intervideo://0', argv=['--headless'])
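On the jetson-inference side, the producing loop is then the usual capture/render cycle. This is only a sketch: it assumes the patched build from 2.2 (stock jetson-inference will reject the “intervideo://” URI), and 'csi://0' is a placeholder for whatever camera input you actually use:

```python
import jetson.utils

# Input: placeholder camera URI -- replace with your real source.
camera = jetson.utils.videoSource('csi://0')

# Output: the custom "intervideo://0" type added in 2.2; --headless suppresses
# the OpenGL window, since the Qt widget does the displaying.
output = jetson.utils.videoOutput('intervideo://0', argv=['--headless'])

while camera.IsStreaming() and output.IsStreaming():
    img = camera.Capture()   # CUDA image from the camera
    output.Render(img)       # pushed into the intervideosink pipeline
```

The Qt window from part 1 then consumes these frames by swapping its videotestsrc pipeline string for the intervideosrc one from 2.1, using whatever channel name the patch assigns to “0”.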