Hi everybody, I was trying to capture 4K images with an Arducam IMX477,
and I found something in the sample code.
There are two image sizes in the GStreamer parameters, capture size and display size, and when I use camera.read() the returned image size is the display size.
I am wondering how the image size can be something like 1280x720 when the capture size is 4032x3040.
Will the captured image be resized to the display size?
Is there any way to remove this scaling and make the process faster?
Yes, I'm aware that I can change them, but I don't understand the concept of capture size vs display size, especially when I set the capture size to 4K resolution and the display size to HD resolution (1280x720). What happens to the image quality when these two parameters are different?
Obviously, rescaling to a smaller resolution loses information. How this is done may depend on the rescaling factors and the interpolation-method property of nvvidconv.
The main purpose of such downscaling is reducing the pixel rate, because the conversion into BGR using videoconvert is done on the CPU. With a TX1, a 4K resolution @30 fps may not work with such a high pixel rate.
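For reference, the two sizes live at different points in the pipeline: the caps after nvarguscamerasrc are the capture size, and the caps after nvvidconv are the display size that camera.read() returns. A minimal sketch of such a pipeline string, assuming the usual nvarguscamerasrc/nvvidconv sample layout (the exact sample code may differ):

```python
def gst_pipeline(capture_w=4032, capture_h=3040,
                 display_w=1280, display_h=720, fps=30):
    """Capture at full sensor size, let nvvidconv downscale to the
    display size, then videoconvert produces BGR for OpenCV."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_w}, height={capture_h}, "
        f"framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw, width={display_w}, height={display_h}, "
        f"format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )

# cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER) would then return
# display-sized frames, which is why camera.read() gives e.g. 1280x720.
```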
Most OpenCV color algorithms expect BGR format, but if you're creating your own processing, it is possible to read YUV formats, or, in recent OpenCV versions (since 4.5.4), to read frames in BGRx or RGBA format, so you would no longer need videoconvert and may achieve reading 4K@30fps.
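A sketch of that idea, assuming an OpenCV build (≥ 4.5.4) with GStreamer support; the 3840x2160 size here is illustrative:

```python
def gst_bgrx_pipeline(w=3840, h=2160, fps=30):
    """4K capture delivered as BGRx straight from nvvidconv: no CPU-side
    videoconvert stage, so OpenCV >= 4.5.4 can read the 4-channel frames
    directly from appsink."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={w}, height={h}, framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw, format=BGRx ! appsink drop=1"
    )

# Usage (needs OpenCV built with GStreamer):
#   cap = cv2.VideoCapture(gst_bgrx_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()   # 4-channel BGRx frames
```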
If you need BGR, you may try using OpenCV cudaimgproc for converting RGBA into BGR format on the GPU, but this implies copies to/from the GPU.
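The conversion itself is just dropping alpha and reversing the channel order. A minimal sketch of the mapping with NumPy, with the GPU path shown only in comments since it needs a CUDA-enabled OpenCV build (which I'm assuming here, not confirming):

```python
import numpy as np

def rgba_to_bgr_cpu(frame):
    """Drop the alpha channel and reverse RGB order: BGR = RGBA[..., 2::-1]."""
    return frame[..., 2::-1].copy()

# GPU variant with OpenCV cudaimgproc (requires a CUDA-enabled build,
# and implies host<->device copies around the conversion):
#   gpu = cv2.cuda_GpuMat(); gpu.upload(frame)
#   bgr = cv2.cuda.cvtColor(gpu, cv2.COLOR_RGBA2BGR).download()

rgba = np.zeros((2, 2, 4), dtype=np.uint8)
rgba[..., 0] = 10  # R
rgba[..., 1] = 20  # G
rgba[..., 2] = 30  # B
bgr = rgba_to_bgr_cpu(rgba)  # channel 0 is now B (=30), channel 2 is R (=10)
```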
Also, be aware that processing at such a resolution may itself be the bottleneck in the main loop. You may start with a low resolution and framerate and then try increasing them while monitoring resource usage with tegrastats.
Thanks for your response.
According to what you said and what I understand, the conversion of the image from RGBA to BGR in OpenCV is the bottleneck that will prevent the process from reaching 30 fps at 4K resolution.
Is there any way to avoid this conversion and read the images in a raw format or something else?
I want to send these images from my Jetson to another PC via some protocol (like ZMQ or sockets), and I have two main problems:
I can't reach 30 fps at 4K resolution (even without encoding the images);
I get a maximum of 13 fps (at 2464x3280 resolution) without the imshow command.
Encoding the data to send with ZMQ (Python API) slows the process down, which affects the fps and decreases it to 5 or 6 frames per second.
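One way to keep the Python-side sending overhead low is to avoid any per-frame encoding and just ship the raw frame bytes with a small fixed header describing the geometry. A minimal sketch using only the standard library (the names and header layout here are mine, not from the thread):

```python
import socket
import struct

HEADER = struct.Struct("!HHB")  # width, height, channels, network byte order

def send_frame(sock, frame_bytes, w, h, c):
    """Prefix the raw buffer with its geometry so the receiver can
    rebuild the array without any per-frame encoding step."""
    sock.sendall(HEADER.pack(w, h, c) + frame_bytes)

def recv_frame(sock):
    """Read one header, then exactly w*h*c payload bytes."""
    hdr = sock.recv(HEADER.size, socket.MSG_WAITALL)
    w, h, c = HEADER.unpack(hdr)
    buf = sock.recv(w * h * c, socket.MSG_WAITALL)
    return w, h, c, buf

# Round-trip over a local socket pair (a real setup would use TCP
# between the Jetson and the PC; a numpy frame would be frame.tobytes()):
a, b = socket.socketpair()
send_frame(a, b"\x00" * (4 * 3 * 3), 4, 3, 3)
w, h, c, buf = recv_frame(b)
```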
As said above, if using an OpenCV version from 4.5.4 onward, you may capture in RGBA so you won't need videoconvert.
imshow() should work. However, OpenCV's VideoWriter with the GStreamer backend only supports 1 or 3 channels (as of OpenCV 4.6.0), so encoding into H264 may not be available without format conversion.
Format conversion with OpenCV may also be possible; you may try it and measure.
Not sure if using OpenCV is the best choice for your case. It would be better to describe your use case and what you want to achieve to get better advice.
I used RTP to send the camera feed from the Jetson to another Linux system, and it worked very well when sending the data from a terminal and receiving it in a terminal.
But when I want to send from the terminal and receive in OpenCV, I get confused about which elements to use.
This code is part of the main code for receiving data in OpenCV, but there is no data to catch.
This is the command I used for sending data on the Jetson Nano:
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! video/x-h264, stream-format=byte-stream ! rtph264pay mtu=1400 ! udpsink host=$CLIENT_IP port=5000 sync=false async=false
And it works very well, because I can receive the data with this command in a terminal.
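For the OpenCV side, the receiving pipeline has to end in an appsink (delivering BGR for the default VideoCapture path), and udpsrc needs explicit caps because RTP streams are not self-describing. A sketch of a receiver string matching the gst-launch sender above, assuming a software decoder (avdec_h264) on the receiving PC; payload=96 is rtph264pay's default:

```python
def rtp_receiver_pipeline(port=5000):
    """Receiver for the H264/RTP stream produced by the gst-launch
    sender command above."""
    return (
        f"udpsrc port={port} caps=\"application/x-rtp, media=video, "
        f"encoding-name=H264, payload=96\" ! "
        f"rtph264depay ! h264parse ! avdec_h264 ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )

# Usage (needs an OpenCV build with GStreamer support):
#   cap = cv2.VideoCapture(rtp_receiver_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```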