GStreamer on NX

Hello, I am a beginner with GStreamer and the NX. Here is my question.

I want to transform an H.265 byte stream into RGB frames (such as the format used in OpenCV) to do AprilTag detection. First I use NVDEC to decode the H.265, but after writing the command-line pipeline I don't know how to get the resulting frames. So instead I decode and then re-encode to an RTP H.264 udpsink, hoping to read the UDP video in OpenCV. The pipeline is as follows:

gst-launch-1.0 fdsrc fd=0 ! "video/x-h265,width=1920,height=1080,framerate=30/1,stream-format=(string)byte-stream" ! h265parse config-interval=1 ! nvv4l2decoder ! nvvidconv ! "video/x-raw(memory:NVMM),width=960,height=540,framerate=30/1" ! nvv4l2h264enc bitrate=1000000 control-rate=1 preset-level=0 qp-range=15,30:5,20:-1,-1 ! rtph264pay pt=96 config-interval=1 ! udpsink host=127.0.0.1 port=5005
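
(For reference, a plain software receive pipeline like the one below can be used to check that the sender produces a usable RTP stream. This is only a sketch; avdec_h264 and autovideosink are generic GStreamer elements assumed to be available, not the NVIDIA path.)

gst-launch-1.0 udpsrc port=5005 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false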

My first question is: can I get the frames directly after decoding and pass them into OpenCV? How?

Since I already have the H.264 UDP stream, I use the following Python/OpenCV code to try to get the frames.

import cv2
import numpy as np
import gi
gi.require_version('Gst', '1.0')
from gi.repository import GObject, Gst
Gst.init(None)

#pipeline_str = 'udpsrc port=5005 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! nvv4l2decoder ! video/x-raw, format=BGRx ! videoconvert ! appsink sync=false'

pipeline_str = 'udpsrc port=5005 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! nvegltransform ! nveglglessink name=mysink sync=false'

pipeline = Gst.parse_launch(pipeline_str)

def on_new_sample(appsink):
    sample = appsink.emit("pull-sample")
    buf = sample.get_buffer()
    caps = sample.get_caps()
    height = caps.get_structure(0).get_value('height')
    width = caps.get_structure(0).get_value('width')
    # Copy the raw BGRx buffer and drop the padding channel to get BGR
    img_array = buf.extract_dup(0, buf.get_size())
    img = np.frombuffer(img_array, dtype=np.uint8).reshape((height, width, 4))[:, :, :3]
    cv2.imshow("Video", img)
    cv2.waitKey(1)
    return Gst.FlowReturn.OK

app_sink = pipeline.get_by_name('mysink')
app_sink.set_property("emit-signals", True)
app_sink.set_property("max-buffers", 1)
app_sink.connect("new-sample", on_new_sample)

pipeline.set_state(Gst.State.PLAYING)

GObject.MainLoop().run()

But it seems the nveglglessink element cannot deliver the video frame this way.
So my second question is: how can I get the video frame out of nveglglessink?

Thank you very much, and I hope to get a reply.

I changed the logic to use OpenCV to get the frames directly.

pipeline_description = "fdsrc fd=0 ! video/x-h265,width=1920,height=1080,framerate=30/1,stream-format=(string)byte-stream ! h265parse config-interval=1 ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink sync=false"

video = cv2.VideoCapture(pipeline_description, cv2.CAP_GSTREAMER)
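
(A minimal read loop on top of this capture, just as a sketch of how the frames get consumed; run_apriltag_detection is a placeholder for whatever detector is actually used.)

while True:
    ret, frame = video.read()  # frame is a BGR numpy array delivered by appsink
    if not ret:
        break
    # run_apriltag_detection(frame)  # placeholder for the actual detection step
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()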

After that, I can get the frames correctly. But the CPU usage shown by top is about 40% on the NX while the script is running. Is there any better way to get the frames using only the GPU? Thanks.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

If you use OpenCV, the CPU load will indeed increase. Could you try to use nvinfer to do the detection?
You can refer to the link below; it gives higher processing efficiency and runs the inference on the GPU.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
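
(A rough sketch of what such a DeepStream pipeline can look like, not a tested command; the file name and the nvinfer config file path are placeholders.)

gst-launch-1.0 filesrc location=sample.h265 ! h265parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=config_infer_primary.txt ! nvvideoconvert ! nvdsosd ! nveglglessink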

If you want to get the frame data yourself, you can refer to the code below. You need to implement the probe callback yourself.
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py#L70
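
(A rough sketch of the probe pattern used in that sample, assuming the pyds bindings are installed and the buffer has been converted to RGBA upstream as in the sample; the pad to attach to and the detection call are placeholders.)

import pyds
from gi.repository import Gst

def frame_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Walk the batch metadata attached by nvstreammux
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the decoded surface as a numpy array (RGBA in the sample)
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # detect_apriltags(frame)  # placeholder for the detection step
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

# Attach the probe to a pad downstream of nvstreammux, e.g. the OSD sink pad:
# osd_sink_pad.add_probe(Gst.PadProbeType.BUFFER, frame_probe, 0)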

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.