Camera capture resolution and display resolution

Hi everybody, I was trying to capture a 4K image with an Arducam IMX477, and I found something in the sample code.
There are two image sizes in the GStreamer pipeline parameters, capture size and display size, and when I use camera.read(), the returned image size is the display size.
I'm wondering how the image size can be something like 1280x720 when the capture size is 4032x3040.
Will the captured image be resized to the display size?
Is there any way to remove this scaling and make the process faster?

Sample code:

def gstreamer_pipeline(
    capture_width=4032,
    capture_height=3040,
    display_width=640,
    display_height=360,
    framerate=30,
    flip_method=0,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )
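
For reference, here is how this pipeline is typically used (a minimal sketch, assuming an OpenCV build with GStreamer support):

import cv2

# Frames returned by read() have the display size (640x360 here), because
# nvvidconv rescales from the capture size before handing buffers to appsink.
camera = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
ret, frame = camera.read()
if ret:
    print(frame.shape)  # (360, 640, 3): display size, not 4032x3040
camera.release()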

You can modify capture_width/capture_height depending on the sensor mode you want to try.
You can get the sensor modes with v4l2-ctl --list-formats-ext.

Yes, I'm aware that I can change them, but I don't understand the concept of capture size vs. display size, especially when I set the capture size to a 4K resolution and the display size to an HD resolution (1280x720). What happens to the image quality when these two parameters are different?

I suppose configuring the sizes could have some impact on image quality.

@abolfazl_asari

Obviously, rescaling to a smaller resolution loses information. How this is done may depend on the rescaling factors and on the interpolation-method property of nvvidconv.
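
For example, you can select the scaler's interpolation in the pipeline string. A sketch, replacing the nvvidconv element in the function above; the numeric value assumed here (1 for bilinear) may differ between L4T releases, so list the valid values with gst-inspect-1.0 nvvidconv:

# In gstreamer_pipeline() above, select the scaler interpolation explicitly;
# 1 is assumed to mean bilinear (verify with gst-inspect-1.0 nvvidconv).
"nvvidconv flip-method=%d interpolation-method=1 ! "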

The main purpose of such downscaling is reducing the pixel rate, because the conversion to BGR with videoconvert is done on the CPU. At 4032x3040@30 fps that is about 368 Mpixels/s, versus about 6.9 Mpixels/s at 640x360@30 fps. With a TX1, a 4K resolution @30 fps may not work with such a high pixel rate.

Most OpenCV color algorithms expect BGR format, but if you're creating your own processing, it is possible to read YUV formats, or, with recent OpenCV versions (4.5.4 and later), to read frames in BGRx or RGBA format, so you would no longer need videoconvert and may achieve reading 4K@30fps.
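
A minimal sketch of such a capture, assuming OpenCV 4.5.4 or later built with the GStreamer backend; note that videoconvert is gone and the frames come back with 4 channels:

import cv2

# Capture BGRx directly from nvvidconv, skipping the CPU videoconvert stage.
# The sizes must match a sensor mode reported by v4l2-ctl --list-formats-ext.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=4032, height=3040, "
    "format=NV12, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! appsink drop=true"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
if ret:
    print(frame.shape)  # expected (3040, 4032, 4): 4-channel BGRx frames
cap.release()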

If you need BGR, you may try OpenCV's cudaimgproc module for converting RGBA into BGR format on the GPU, but this implies copies to/from GPU memory.
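
For instance (a sketch, assuming an OpenCV build with the CUDA modules enabled; the zero-filled array stands in for a captured RGBA frame):

import cv2
import numpy as np

# RGBA -> BGR on the GPU via cudaimgproc; the upload()/download() calls are
# the host<->device copies mentioned above.
frame = np.zeros((3040, 4032, 4), np.uint8)  # stand-in for a captured RGBA frame
gpu_rgba = cv2.cuda_GpuMat()
gpu_rgba.upload(frame)
gpu_bgr = cv2.cuda.cvtColor(gpu_rgba, cv2.COLOR_RGBA2BGR)
bgr = gpu_bgr.download()  # 3-channel BGR array on the host
print(bgr.shape)  # (3040, 4032, 3)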

Also, be aware that your own processing at such a resolution may be the bottleneck in the main loop. You may start with a low resolution and framerate and then try increasing them while monitoring resource usage with tegrastats.

Thanks for your response.
From what you said, as I understand it, the conversion from RGBA to BGR in OpenCV is the bottleneck that prevents the process from reaching 30 fps at 4K resolution.
Is there any way to avoid this conversion and read the images in a raw format or something else?
I want to send these images from my Jetson to another PC via some protocol (like ZMQ or sockets), and I have two main problems:

  1. I can't reach 30 fps at 4K resolution (even without encoding the images);
    I get at most 13 fps (at 2464x3280) even without an imshow call.
  2. Encoding the data to send it with ZMQ (Python API) slows the process down, which lowers the framerate to 5 or 6 frames per second.

Did you try GStreamer RTSP?

As said above, if using an OpenCV version from 4.5.4 on, you may capture in RGBA so you won't need videoconvert.
imshow() should work. However, OpenCV's VideoWriter with the GStreamer backend only supports 1 or 3 channels (as of OpenCV 4.6.0), so encoding into H264 may not be available without a format conversion.
Format conversion with OpenCV may also be possible; you may try it and measure.
I'm not sure OpenCV is the best choice for your case. Better to describe your use case and what you want to achieve for better advice.
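
For reference, a sketch of pushing BGR frames into a GStreamer VideoWriter for hardware H264 encoding and RTP output, assuming an OpenCV build with GStreamer; the destination IP/port are placeholders, and the frames must be 3-channel BGR because of the limitation above:

import cv2
import numpy as np

# When the "filename" is a GStreamer pipeline, pass fourcc=0 and
# cv2.CAP_GSTREAMER; OpenCV feeds BGR frames into appsrc.
w, h, fps = 1920, 1080, 30
out = cv2.VideoWriter(
    "appsrc ! videoconvert ! "
    "nvvidconv ! video/x-raw(memory:NVMM) ! "
    "omxh264enc control-rate=2 bitrate=4000000 ! "
    "video/x-h264, stream-format=byte-stream ! "
    "rtph264pay mtu=1400 ! udpsink host=10.42.0.1 port=5000",
    cv2.CAP_GSTREAMER, 0, fps, (w, h), True,
)
frame = np.zeros((h, w, 3), np.uint8)  # stand-in for a real BGR frame
out.write(frame)
out.release()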

I used RTP to send the camera feed from the Jetson to another Linux system, and it worked very well when sending from a terminal and receiving in a terminal.
But when I want to send from a terminal and receive in OpenCV, I get confused about which elements to use.
This code is the receiving part of the main OpenCV code, but there is no data to catch.

gst_line2 = 'udpsrc host=10.42.0.1 port=5000 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! appsink'
video_capture = cv2.VideoCapture(gst_line2, cv2.CAP_GSTREAMER)

This is the command I used for sending data on the Jetson Nano:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! video/x-h264, stream-format=byte-stream ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false

It works very well, because I can receive the data with this command in a terminal:

gst-launch-1.0 udpsrc port=5000 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! queue ! avdec_h264 ! xvimagesink sync=false async=false -e

How can I get this to work?

Thank you.
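
A receiver sketch that should match the sender command above. Two details differ from the failing gst_line2, assuming a standard GStreamer build: udpsrc has no host property (it takes address=, and for a unicast stream you can simply omit it to listen on all interfaces), and OpenCV's appsink typically needs explicit BGR caps after videoconvert:

import cv2

# Receiver matching the sender above. Differences from the failing pipeline:
# no "host=" on udpsrc (not a udpsrc property) and explicit BGR caps before
# appsink so OpenCV gets 3-channel frames.
gst_recv = (
    'udpsrc port=5000 caps="application/x-rtp, media=(string)video, '
    'clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! '
    "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! "
    "video/x-raw, format=BGR ! appsink drop=true"
)
video_capture = cv2.VideoCapture(gst_recv, cv2.CAP_GSTREAMER)
while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:
        break
    cv2.imshow("RTP stream", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
video_capture.release()
cv2.destroyAllWindows()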
