Hello, I am a beginner with GStreamer and the Jetson NX. Here is my question.
- I want to turn an H.265 byte stream into RGB frames (such as OpenCV's format) to run AprilTag detection. First I used NVDEC (nvv4l2decoder) to decode the H.265, but after writing the command-line pipeline I did not know how to get the resulting frames. So instead I decode, re-encode to H.264, and send it as RTP to a udpsink, hoping to read the UDP video in OpenCV. The command is as follows:
gst-launch-1.0 fdsrc fd=0 ! 'video/x-h265,width=1920,height=1080,framerate=30/1,stream-format=(string)byte-stream' ! h265parse config-interval=1 ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM),width=960,height=540,framerate=30/1' ! nvv4l2h264enc bitrate=1000000 control-rate=1 preset-level=0 qp-range=15,30:5,20:-1,-1 ! rtph264pay pt=96 config-interval=1 ! udpsink host=127.0.0.1 port=5005
The first question is: can I grab the frame right after decoding and feed it into OpenCV? How?
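For example, I wondered whether a pipeline ending in appsink could be opened with cv2.VideoCapture, roughly like below. This is an untested sketch: it assumes my OpenCV build has GStreamer support, and the filesrc with "test.h265" is just a hypothetical placeholder for my real byte-stream source.

import cv2

# Sketch only: decode H.265 with NVDEC and hand BGR frames to OpenCV.
# "test.h265" is a hypothetical placeholder for my actual stream source.
pipeline = (
    "filesrc location=test.h265 ! h265parse ! nvv4l2decoder ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()  # frame should be an HxWx3 BGR numpy array
    if not ok:
        break
    # ... run AprilTag detection on frame here ...
cap.release()

Would something like this work, or is appsink the wrong way to pull the decoded frames?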
Since I now have the H.264 stream over UDP, I use the following Python/OpenCV code to try to grab the frames:
import cv2
import numpy as np
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

Gst.init(None)

# pipeline_str = 'udpsrc port=5005 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! nvv4l2decoder ! video/x-raw, format=BGRx ! videoconvert ! appsink sync=false'
pipeline_str = 'udpsrc port=5005 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! nvegltransform ! nveglglessink name=mysink sync=false'
pipeline = Gst.parse_launch(pipeline_str)
def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")
    buf = sample.get_buffer()
    caps = sample.get_caps()
    height = caps.get_structure(0).get_value('height')
    width = caps.get_structure(0).get_value('width')
    img_array = buf.extract_dup(0, buf.get_size())
    # assumes 4 bytes per pixel (e.g. BGRx); keep only the first 3 channels
    img = np.frombuffer(img_array, dtype=np.uint8).reshape((height, width, 4))[:, :, :3]
    cv2.imshow("Video", img)
    cv2.waitKey(1)
    return Gst.FlowReturn.OK
app_sink = pipeline.get_by_name('mysink')
app_sink.set_property("emit-signals", True)
app_sink.set_property("max-buffers", 1)
app_sink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
But it seems nveglglessink cannot deliver the video frames to my callback (it is a display sink, so it probably does not have the "emit-signals"/"max-buffers" properties or the "new-sample" signal at all).
So the second question is: how can I get the video frames out of nveglglessink?
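My current guess is that I should go back to the commented-out appsink pipeline above instead of nveglglessink, roughly like this. This is an untested sketch; it assumes nvvidconv can convert the decoder's NVMM output to BGRx in system memory.

import cv2
import numpy as np
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

Gst.init(None)

# Same UDP source as before, but ending in appsink so Python receives the frames.
pipeline = Gst.parse_launch(
    'udpsrc port=5005 caps="application/x-rtp, media=(string)video, '
    'clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! '
    'rtph264depay ! h264parse ! nvv4l2decoder ! '
    'nvvidconv ! video/x-raw,format=BGRx ! '
    'appsink name=mysink emit-signals=true max-buffers=1 drop=true sync=false')

def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")
    buf = sample.get_buffer()
    s = sample.get_caps().get_structure(0)
    height, width = s.get_value('height'), s.get_value('width')
    data = buf.extract_dup(0, buf.get_size())
    # BGRx -> drop the padding channel to get BGR for OpenCV / AprilTag
    frame = np.frombuffer(data, dtype=np.uint8).reshape((height, width, 4))[:, :, :3]
    cv2.imshow("Video", frame)
    cv2.waitKey(1)
    return Gst.FlowReturn.OK

pipeline.get_by_name('mysink').connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()

Is that the right direction?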
Thank you very much, and I hope to get a reply.