I want to resize an image array with OpenCV + GStreamer using the hardware scaler of the Jetson Nano. Is this possible from Python?
Resizing an image array with OpenCV on the CPU is slow, so I want to resize the array in memory with the Jetson Nano's hardware scaler via OpenCV + GStreamer in Python.
In other words: feed an image array into OpenCV + GStreamer, resize it there, and get the result back as an image array.
You can use nvvidconv. The following python code resizes 1920x1080 to 640x480:
import cv2

def read_cam():
    cap = cv2.VideoCapture(
        "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, "
        "format=(string)NV12, framerate=(fraction)30/1 ! "
        "nvvidconv ! video/x-raw, width=(int)640, height=(int)480, format=(string)BGRx ! "
        "videoconvert ! appsink",
        cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            cv2.imshow('demo', img)
            cv2.waitKey(10)
    else:
        print("camera open failed")
    cv2.destroyAllWindows()

if __name__ == '__main__':
    read_cam()
I don't think this solves my problem. My goal: I do multi-stream decoding with GStreamer + OpenCV using the Jetson Nano's hardware decoder at 1920x1080, much like your code but for RTSP streams. So I get the 1080p frames of each stream in RAM as image arrays, and I then need to resize them to 500x500. I could use OpenCV's resize method, but that function is slow with multiple streams; resizing a batch of frames at the same time drives up CPU usage. So instead of resizing with OpenCV, I want to send the captured frames (already in RAM as image arrays) back to the Jetson Nano hardware for resizing, because I need both the 1920x1080 and the 500x500 resolutions.
Please share what you've tried that led you to this conclusion.
You might also try this for checking:
gst-launch-1.0 videotestsrc ! video/x-raw, width=1920, height=1080 ! nvvidconv ! 'video/x-raw(memory:NVMM), width=500, height=500' ! nvvidconv ! video/x-raw ! videoconvert ! xvimagesink
I use GStreamer + OpenCV with the hardware decoder like this:
gstream_elemets = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! '
    'omxh264dec ! '
    'video/x-raw(memory:NVMM), format=(string)NV12 ! '
    'nvvidconv ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! video/x-raw, format=(string)BGR ! '
    'appsink')
cap = cv2.VideoCapture(gstream_elemets, cv2.CAP_GSTREAMER)
then I capture the frames like this:
ret, frame = cap.read()
This frame has 1920x1080 resolution. I want to resize the image to 500x500, and one solution is:
frame_resized = cv2.resize(frame, (500,500))
but this solution is not good on an edge device because this function is slow.
I want to resize the frame with the Jetson Nano hardware, not with the CPU.
And because I need both 500x500 and 1920x1080, I have to keep both resolutions.
The conversion for your case with OpenCV on the CPU should take about 2 ms. Can't you afford that?
Be aware that imshow/waitKey in the loop may not be very efficient on Jetson.
You'd better use a cv2.VideoWriter with a GStreamer pipeline to fpsdisplaysink with video-sink=xvimagesink.
If you really want to use HW conversion, you could create 2 virtual nodes with v4l2loopback and use tee in the GStreamer pipeline after H264 decoding into NVMM memory: one branch converts with nvvidconv into 1080p BGRx -> videoconvert -> RGB and sends to the first v4l2loopback device, then the second branch resizes with nvvidconv into 500x500 BGRx -> videoconvert -> RGB and sends to the second v4l2sink.
You would then open 2 VideoCaptures from OpenCV with the V4L2 API, giving their dev node numbers, to read both high-res and low-res frames.
Not sure how much these 2 virtual sources would be synchronized, nor if the CPU overhead wouldn’t be worse.
Thanks so much.
Could you please modify the code as you explained? I'm new to this work and don't know what to do. Thanks.