I want to add a GUI to my program based on jetson-inference. The GUI should do the following: display the video in full-screen mode, overlay control buttons on the video, and open a settings menu. I plan to use PyQt5 for this.
How can I stream video from jetson-inference into a PyQt5 window (preferably without copying frames from GPU memory to CPU memory)? As far as I understand, the gstDecoder module creates a GStreamer pipeline that ends in "...appsink name=mysink", and display is then handled by the gstDisplay module, which creates its own window.
If this is not possible with PyQt5, are there other ways to do this?
2.1. To link two pipelines within one program, you can use the GStreamer intervideosink/intervideosrc elements (udpsink/udpsrc might also work, but I haven't checked). As far as I understand, passing frames through intervideosink/intervideosrc does not spend extra resources on encoding/decoding, unlike UDP or RTSP streaming.
Accordingly, the display pipeline from the previous example should look like this:
intervideosrc channel=v0 ! xvimagesink
Pay attention to the "channel" parameter: it links a specific pair of pipelines together, and its value can contain letters and digits.
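To make the pairing explicit, here is a minimal sketch that builds a matched sender/receiver pair of pipeline descriptions sharing one channel name. The helper names (build_sender, build_receiver) and the sender-side elements are my own illustration, not part of jetson-inference; only the channel-matching rule comes from the text above.

```python
# Sketch: two GStreamer pipeline descriptions are linked only when their
# intervideosink and intervideosrc elements use the SAME "channel" value.

def build_sender(channel: str) -> str:
    # Producer side: pushes frames into the named inter-pipeline channel.
    # appsrc here is illustrative; any source element would do.
    return f"appsrc name=mysrc ! videoconvert ! intervideosink channel={channel}"

def build_receiver(channel: str) -> str:
    # Consumer side: pulls frames from the same channel and displays them.
    return f"intervideosrc channel={channel} ! videoconvert ! xvimagesink"

if __name__ == "__main__":
    # Both halves must agree on the channel name, e.g. "v0".
    print(build_sender("v0"))
    print(build_receiver("v0"))
```

If the channel names differ, the two pipelines simply never connect; there is no error, the receiver just shows nothing.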
2.2. In turn, jetson-inference should create an output pipeline that ends with an intervideosink element. The stock version of jetson-inference, however, only offers the following output types: RTP stream, video file, image file, image sequence, and OpenGL window.
I modified the source code and added another type, “intervideo://0”, which creates a pipeline with intervideosink.
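With that modification in place, usage from Python might look like the sketch below. The "intervideo://0" URI scheme is the custom addition described above, not part of upstream jetson-inference; the mapping of the URI's "0" to a channel name is my assumption.

```python
# Sketch: how the custom "intervideo://0" output URI could be interpreted.
# The scheme selects the intervideosink-based pipeline; the netloc ("0")
# could serve as (or map to) the intervideosink channel name.
from urllib.parse import urlparse

uri = "intervideo://0"
parsed = urlparse(uri)
print(parsed.scheme, parsed.netloc)  # scheme picks the sink type

# On a Jetson with the patched jetson-inference, the render loop would
# follow the usual videoSource/videoOutput pattern (untested sketch):
#   import jetson_utils
#   source = jetson_utils.videoSource("csi://0")
#   output = jetson_utils.videoOutput("intervideo://0")
#   while source.IsStreaming():
#       img = source.Capture()
#       output.Render(img)
```

The PyQt5 side would then run its own pipeline starting with "intervideosrc channel=..." to receive these frames.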