Hi, I plan to build an object detection application on a Jetson Orin Nano 8GB with an RTSP camera. I don't have an RTSP camera yet, so I am testing with an H.264 MP4 video. I want to pull raw frames for deep learning and machine vision analysis, which is why I haven't tried DeepStream.
My pipeline looks like:
gst-launch-1.0 filesrc location=/home/mic-711on/trim2.mp4 ! qtdemux ! queue ! h264parse ! nvv4l2decoder ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)BGRx' ! nvvidconv ! 'video/x-raw' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink emit-signals=True
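For completeness, this is roughly how I build the same pipeline from Python and fetch the appsink (a minimal sketch of my test script; I added name=sink myself so the element can be looked up):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy

Gst.init(None)

# Same pipeline as the gst-launch line above, as a parse_launch string.
pipeline = Gst.parse_launch(
    'filesrc location=/home/mic-711on/trim2.mp4 ! qtdemux ! queue ! '
    'h264parse ! nvv4l2decoder ! nvvidconv ! '
    'video/x-raw(memory:NVMM), format=(string)BGRx ! nvvidconv ! '
    'video/x-raw ! videoconvert ! video/x-raw, format=(string)BGR ! '
    'appsink name=sink emit-signals=True')
appsink = pipeline.get_by_name('sink')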
Here are the callback function definitions:
def gst_to_opencv(sample):
    # Pull the raw buffer and its caps out of the appsink sample.
    buf = sample.get_buffer()
    caps = sample.get_caps()
    print(caps.get_structure(0).get_value('format'))
    print(caps.get_structure(0).get_value('height'))
    print(caps.get_structure(0).get_value('width'))
    print(buf.get_size())
    # Copy the BGR bytes into a (height, width, 3) numpy array.
    arr = numpy.ndarray(
        (caps.get_structure(0).get_value('height'),
         caps.get_structure(0).get_value('width'),
         3),
        buffer=buf.extract_dup(0, buf.get_size()),
        dtype=numpy.uint8)
    return arr
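While cleaning this up I noticed that extract_dup() copies the whole frame. A variant I am experimenting with maps the buffer read-only instead, which saves one copy (just a sketch under the same BGR caps and the same no-row-padding assumption as above; gst_to_opencv_mapped is my own name). The second callback, new_buffer, follows below.

def gst_to_opencv_mapped(sample):
    # Map the buffer read-only instead of copying it with extract_dup().
    buf = sample.get_buffer()
    structure = sample.get_caps().get_structure(0)
    height = structure.get_value('height')
    width = structure.get_value('width')
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if not ok:
        return None
    try:
        # One copy into an array we own, then release the mapping.
        arr = numpy.frombuffer(mapinfo.data, dtype=numpy.uint8)
        arr = arr.reshape((height, width, 3)).copy()
    finally:
        buf.unmap(mapinfo)
    return arr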
def new_buffer(sink, data):
    global image_arr
    sample = sink.emit("pull-sample")
    # buf = sample.get_buffer()
    # print("Timestamp:", buf.pts)
    arr = gst_to_opencv(sample)
    image_arr = arr
    return Gst.FlowReturn.OK
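And the glue at the bottom of the script, so the whole thing is reproducible (simplified from what I actually run; the window name 'frame' is arbitrary):

import cv2

image_arr = None

appsink.connect('new-sample', new_buffer, None)
pipeline.set_state(Gst.State.PLAYING)

# Show the most recent frame until EOS, an error, or 'q' is pressed.
bus = pipeline.get_bus()
while True:
    msg = bus.timed_pop_filtered(
        10 * Gst.MSECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR)
    if image_arr is not None:
        cv2.imshow('frame', image_arr)
        if cv2.waitKey(1) == ord('q'):
            break
    if msg is not None:
        break

pipeline.set_state(Gst.State.NULL)
cv2.destroyAllWindows()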
Now cv2.imshow runs properly, and the next step will be a YOLOv5-based application on top of it. The pipeline works, but it looks ugly, and I assume running nvvidconv twice plus videoconvert wastes a lot of processing. Could anyone suggest a more efficient solution?
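For reference, the leanest variant I could come up with on my own keeps a single nvvidconv going straight to system-memory BGRx and drops videoconvert entirely, stripping the padding byte in numpy instead (untested guesswork on my side, so please correct me if nvvidconv cannot do this):

# Single nvvidconv: decoder output -> system-memory BGRx, no videoconvert.
pipeline = Gst.parse_launch(
    'filesrc location=/home/mic-711on/trim2.mp4 ! qtdemux ! queue ! '
    'h264parse ! nvv4l2decoder ! nvvidconv ! '
    'video/x-raw, format=(string)BGRx ! '
    'appsink name=sink emit-signals=True')

# The callback would then read 4 bytes per pixel and drop the fourth:
#   arr = numpy.ndarray((height, width, 4), buffer=..., dtype=numpy.uint8)
#   bgr = arr[:, :, :3]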