Display video from jetson-inference GStreamer pipeline in a PyQt5 GUI

I want to add a GUI to my program based on jetson-inference. The GUI should do the following: display the video in full-screen mode, draw control buttons over the video, and open a settings menu. I plan to use PyQt5 for this.

How can I stream video from jetson-inference into a PyQt5 window (preferably without copying frames from GPU memory to CPU memory)? As far as I understand, the gstDecoder module creates a GStreamer pipeline ending in "...appsink name=mysink", and the display is then handled by the gstDisplay module, which creates its own window.

If this is not possible with PyQt5, are there other ways to do this?

I am using a Jetson Nano with JetPack 4.4 and Python 3.6.

Thanks in advance for your answers!

Hi,
you could send the camera images to Qt5 using TCP and Cap'n Proto.
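
For what it's worth, the sending side of that idea could look roughly like this minimal sketch. It uses plain length-prefixed TCP framing in place of a Cap'n Proto schema, the camera URI, address, and port are placeholders, and note that it does copy each frame from GPU to CPU memory via cudaToNumpy:

    import socket
    import struct

    import jetson.utils

    # capture one frame and push it over TCP with a simple length-prefixed
    # header; a Cap'n Proto schema could replace this ad-hoc framing
    camera = jetson.utils.videoSource('csi://0')            # placeholder input URI
    img = camera.Capture()

    frame = jetson.utils.cudaToNumpy(img)                   # map frame into CPU-accessible memory
    jetson.utils.cudaDeviceSynchronize()                    # ensure the GPU is done with the frame
    payload = frame.tobytes()                               # this is the GPU-to-CPU copy

    sock = socket.create_connection(('127.0.0.1', 5000))    # placeholder address/port
    sock.sendall(struct.pack('<III', img.width, img.height, len(payload)))
    sock.sendall(payload)
    sock.close()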

Hi,

jetson-inference uses the GStreamer framework, so the usual approach for displaying a GStreamer pipeline with PyQt should also work for jetson-inference.

For example:

Thanks.

I solved this problem.

This task has two parts:

  1. Displaying video from a GStreamer pipeline in a Qt window
  2. Transferring video from jetson-inference to an external GStreamer pipeline

Solution:

  1. This code plays a test GStreamer video (videotestsrc) in a 640x480 window.

     import sys

     from PyQt5.QtCore import Qt
     from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget

     import gi
     gi.require_version('Gst', '1.0')
     gi.require_version('GstVideo', '1.0')
     from gi.repository import Gst, GObject, GstVideo

     GObject.threads_init()
     Gst.init(None)


     class MainWindow(QMainWindow):
         def __init__(self):
             QMainWindow.__init__(self)
             self.setAttribute(Qt.WA_AcceptTouchEvents, True)
             self.setGeometry(0, 0, 640, 480)
             self.videowidget = VideoWidget(parent=self)


     class VideoWidget(QWidget):
         def __init__(self, parent):
             # base-class call must match QWidget (not QMainWindow)
             QWidget.__init__(self, parent)
             # native window handle that the GStreamer sink will render into
             self.windowId = int(self.winId())
             self.setGeometry(0, 0, 640, 480)

         def setup_pipeline(self):
             desc = "videotestsrc ! video/x-raw,width=640,height=480 ! videoconvert ! xvimagesink"
             self.pipeline = Gst.parse_launch(desc)
             bus = self.pipeline.get_bus()
             bus.add_signal_watch()
             bus.enable_sync_message_emission()
             # sync messages arrive before the sink opens its own window,
             # giving us a chance to hand it the Qt widget's handle
             bus.connect('sync-message::element', self.on_sync_message)

         def on_sync_message(self, bus, msg):
             message_name = msg.get_structure().get_name()
             print(message_name)
             if message_name == 'prepare-window-handle':
                 # redirect the video sink's output into the Qt widget
                 assert self.windowId
                 imagesink = msg.src
                 imagesink.set_window_handle(self.windowId)

         def start_pipeline(self):
             self.pipeline.set_state(Gst.State.PLAYING)


     app = QApplication([])
     window = MainWindow()
     window.videowidget.setup_pipeline()
     window.videowidget.start_pipeline()
     window.show()
     sys.exit(app.exec_())
    

2.1. To link two pipelines within one program, you can use the intervideosink/intervideosrc GStreamer elements (udpsink/udpsrc might work as well, but I didn't test them). As far as I understand, passing frames through intervideosink/intervideosrc does not spend extra resources on encoding/decoding, unlike UDP or RTSP.

Accordingly, the pipeline in the previous example should be like this:

intervideosrc channel=v0 ! xvimagesink

Note the "channel" parameter: it ties a specific intervideosink to a specific intervideosrc, and its value can contain letters and digits.
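
As a sketch, only the pipeline description inside setup_pipeline() from part 1 needs to change. The added videoconvert is my precaution in case the sink rejects the incoming format; it is not in the original pipeline above:

    # inside VideoWidget from the first example
    def setup_pipeline(self):
        # intervideosrc picks up frames published on channel v0 by jetson-inference
        desc = "intervideosrc channel=v0 ! videoconvert ! xvimagesink"
        self.pipeline = Gst.parse_launch(desc)
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect('sync-message::element', self.on_sync_message)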

2.2. In turn, jetson-inference has to create an output pipeline containing an intervideosink element. However, the original version of jetson-inference only offers the following output streams: RTP stream, video file, image file, image sequence, and OpenGL window.

I modified the source code and added another output type, "intervideo://0", which creates a pipeline with an intervideosink.

Video output looks like this:

jetson.utils.videoOutput('intervideo://0', argv=['--headless'])
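
Put together, the producing side can be as small as this sketch (the csi://0 input URI is just an example; this assumes the modified build described below):

    import jetson.utils

    # capture from any supported input and publish the frames on intervideo
    # channel 0; --headless stops jetson-inference from opening its own window
    camera = jetson.utils.videoSource('csi://0')   # example input URI
    output = jetson.utils.videoOutput('intervideo://0', argv=['--headless'])

    while camera.IsStreaming():
        img = camera.Capture()
        output.Render(img)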

Modified jetson-inference code attached.

Added:

  1. INTERVIDEO output stream.

Syntax:

jetson.utils.videoOutput('intervideo://N', argv=['--headless'])

where N is the channel number (an int). Note: the channel name is generated as vN, so for example channel 0 produces "intervideosink channel=v0".

  2. Additional parameters for the RTSP input stream: latency, drop-on-latency, buffer-mode.

In my experience, the value of the “latency” parameter is important for working with real-time video.

Syntax:

Where IP is the IP address of the RTSP stream and input-codec is the codec (for RTSP, usually h264/h265). These parameters are required.

Value format: latency - int, drop-on-latency - bool, buffer-mode - int. These parameters are optional; see the GStreamer documentation for details.
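
As an illustration of the required part only, a stock RTSP input already looks like the sketch below (the address, port, and path are placeholders). The exact flags for the mod's latency/drop-on-latency/buffer-mode options are defined in the attached sources:

    import jetson.utils

    # placeholder stream address; --input-codec is required for RTSP input
    camera = jetson.utils.videoSource('rtsp://192.168.1.10:8554/stream',
                                      argv=['--input-codec=h264'])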

Changes were made to the following files:

gstDecoder.cpp
gstEncoder.cpp
videoOptions.cpp
videoOptions.h
videoOutput.cpp
URI.cpp

Changes are marked with comments reading "CNVS project modification…".

Installation procedure: replace the files, then build and install the library according to the original manual.

JI_CNVS_project_mod.zip (42.8 KB)
