Hi
I’m trying to optimize performance in a media pipeline of mine. I’m using two IMX334 sensors, which natively output 3864x2180 frames. Ideally I want 3840x2160 in my GStreamer pipeline, because that’s the output resolution the sensor’s datasheet recommends.
The driver I’m using reads the full frame from the sensor, which I believe is the right thing to do for color tuning at the ISP stage. For the output frame, however, the datasheet says I’m supposed to crop 12px from each of the left and right edges, 12px from the top, and 8px from the bottom.
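Just to sanity-check the geometry (margin figures as quoted from the datasheet above), the arithmetic works out:

```python
# Native IMX334 frame and the datasheet crop margins quoted above.
native_w, native_h = 3864, 2180
left, right, top, bottom = 12, 12, 12, 8

out_w = native_w - left - right  # width after cropping both sides
out_h = native_h - top - bottom  # height after cropping top/bottom

print(out_w, out_h)  # → 3840 2160
```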
I know I can do that with nvvidconv and friends, but that puts additional load on the VIC, which I want to avoid.
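For reference, the VIC path I’m trying to avoid would look something like this — a sketch only, assuming I have nvvidconv’s `left`/`right`/`top`/`bottom` crop properties right (with `right`/`bottom` as pixel coordinates of the crop rectangle’s far edges):

```
# Sketch of the nvvidconv-based crop I want to avoid.
# Crop rectangle x in [12, 3852), y in [12, 2172) -> 3840x2160.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
    'video/x-raw(memory:NVMM), width=3864, height=2180' ! \
    nvvidconv left=12 right=3852 top=12 bottom=2172 ! \
    'video/x-raw(memory:NVMM), width=3840, height=2160' ! \
    fakesink
```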
I’ve tried:
- modifying the V4L `camera_common_frmfmt` definition to reflect the 3840x2160 format
- modifying the `active_w` and `active_h` properties in the mode definitions in the device tree
- a combination thereof
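For concreteness, this is the kind of device tree edit I mean — a sketch only, with property names as I understand them from the Jetson sensor driver documentation and illustrative values. My suspicion is that shrinking `active_w`/`active_h` while the sensor still transmits full 3864-pixel lines is exactly where the artifacts come from:

```
mode0 {
    /* sketch only; values illustrative */
    active_w = "3840"; /* was "3864" */
    active_h = "2160"; /* was "2180" */
    /* timing properties like line_length left at the sensor's
       native values, which presumably is why the cropped
       frames come out with artifacts */
    line_length = "4400";
};
```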
All of that results in frames of the right size, but with visible artifacts (no wonder!).
The documentation mentions device tree properties such as `num_of_left_margin_pixels`, which sounds exactly like what I need: defining the number of pixels to disregard at the edge of the frame. However, those properties seem to apply only to DOL captures, which I’m not using.
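For completeness, what I have in mind is something like the following — a sketch from memory of the documentation’s DOL examples, so everything beyond `num_of_left_margin_pixels` (which the docs definitely list) is my assumption and may be wrong:

```
mode0 {
    /* num_of_left_margin_pixels is from the docs; the
       right-margin counterpart is my assumption */
    num_of_left_margin_pixels = "12";
    num_of_right_margin_pixels = "12";
};
```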
Would I be able to use the DOL properties even though I’m not doing HDR captures at present?