Need help getting grayscale images for pixel-level work. v4l2src broken, nvcamerasrc too slow.

I've got a Jetson TX2 running JetPack 3.1 and the Leopard Imaging Triple IMX377 Camera Kit with M12 lens: https://shop.leopardimaging.com/product.sc?productId=321&categoryId=44

We need very fine, pixel-level accuracy from grayscale images, so we have to bypass the ISP and work from the raw bayer data. I found that nvcamerasrc can dump the bayer image to a file, but that's not a scalable workflow for us. Other threads imply that v4l2src can deliver the bayer frames to memory, provided GStreamer is patched to use RAW10 rather than RAW8. I went through the patching process, but I must have messed something up, because now any pipeline using v4l2src fails with an error like the following:

$ gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=3280, height=2464" ! filesink location=test_3280x2464.bayer
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
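
If I'm reading the not-negotiated error right, the caps I'm requesting don't match anything the driver actually advertises. Listing the driver's formats (a stock v4l2-ctl option, nothing specific to my patched build) should show what's really on offer:

$ v4l2-ctl -d /dev/video0 --list-formats-ext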

Actually, scratch that. When I request a width and height that correspond to the full sensor size, the pipeline runs without error, but the output file is all zeros:

$ DISPLAY=:0 gst_1.14.0/out/bin/gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=4104, height=3046" ! filesink location=test_4104x3046.bayer
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstV4l2Src:v4l2src0.GstPad:src: caps = video/x-bayer, format=(string)rggb, width=(int)4104, height=(int)3046, framerate=(fraction)30/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-bayer, format=(string)rggb, width=(int)4104, height=(int)3046, framerate=(fraction)30/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstFileSink:filesink0.GstPad:sink: caps = video/x-bayer, format=(string)rggb, width=(int)4104, height=(int)3046, framerate=(fraction)30/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-bayer, format=(string)rggb, width=(int)4104, height=(int)3046, framerate=(fraction)30/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
Got EOS from element "pipeline0".
Execution ended after 0:00:03.237484397
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
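
To rule GStreamer out entirely, I figure I can also try grabbing a frame straight from V4L2. This is a sketch assuming the driver exposes the 10-bit bayer format as RG10; the actual fourcc should come from the --list-formats-ext output above:

$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=4104,height=3046,pixelformat=RG10 --stream-mmap --stream-count=1 --stream-to=direct_4104x3046.raw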

I'm happy with any solution, whether or not it involves v4l2src. When I capture with nvcamerasrc and convert with nvvidconv, the resulting image is far too noisy to extract the detail we need, no matter what settings I use; only the raw bayer file is usable. v4l2src isn't working, as noted above. I haven't yet gone down the libargus path, mostly because documentation on compiling and running libargus has been hard to come by.

Ideally, I could fine-tune the shutter, aperture, gain, white balance, edge enhancement, and noise removal that the ISP applies, and get a full-resolution or downsampled image straight from the camera. Failing that, getting the raw bayer image into memory so I can process it myself with OpenCV would be the next best thing (and if that requires fixing or reinstalling GStreamer, I'll need some help with that).
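
For context, here's roughly what I plan to do with the bayer data once I can get it. A minimal sketch, assuming the dump stores each 10-bit sample in the low bits of a 16-bit little-endian word (which is my understanding of what the TX2 VI writes for RAW10) and that the file has no per-line padding; the file name and dimensions just match the capture above:

import numpy as np
import cv2

WIDTH, HEIGHT = 4104, 3046  # full sensor size, matching the capture above

# Each 10-bit sample is assumed to sit in the low bits of a 16-bit word.
# If the file is bigger than WIDTH*HEIGHT*2 bytes, the driver is padding
# each line, and the stride needs to be accounted for first.
raw = np.fromfile("test_4104x3046.bayer", dtype="<u2")
raw = raw[:WIDTH * HEIGHT].reshape(HEIGHT, WIDTH)

bayer8 = (raw >> 2).astype(np.uint8)  # 10-bit -> 8-bit for OpenCV

# Demosaic straight to grayscale. OpenCV names its bayer constants from a
# different corner of the 2x2 tile than some vendors, so the pattern code
# may need swapping (BayerRG vs BayerBG) to match the rggb caps.
gray = cv2.cvtColor(bayer8, cv2.COLOR_BayerRG2GRAY)
cv2.imwrite("test_gray.png", gray)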

Thanks!

Please run the command below before launching v4l2src, then try again:

v4l2-ctl -d /dev/video0 --set-ctrl bypass_mode=0
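
My understanding is that bypass_mode defaults to 1, which hands the sensor output to the ISP path that nvcamerasrc uses, so v4l2src never sees any frames; setting it to 0 makes the VI write raw frames directly into the V4L2 buffers. With that set, your original pipeline should start producing real data, assuming your RAW10 patch is otherwise correct:

$ v4l2-ctl -d /dev/video0 --set-ctrl bypass_mode=0
$ gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 ! "video/x-bayer, format=rggb, width=4104, height=3046" ! filesink location=test_4104x3046.bayer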