Need to understand TX2 and gstreamer

Hey all,

I’m working on a project with the TX2 and onboard Leopard IMX377 cameras. Accuracy is crucial for us, and I’d really like to be able to grab the image without any de-bayering applied to it. The code I’ve used to capture the image uses gstreamer, and it returns a de-bayered, full-sized image. We can’t find any documentation on what de-bayering process was used, what performed the conversion (we’re guessing the ISP), or what other corrections (e.g. gamma correction and sharpening) were applied to the image. The code is this:

import cv2
import matplotlib.pyplot as plt

def open_cam_onboard(width=4104, height=3046):
    # Capture NV12 from the onboard camera, convert to 8-bit grayscale,
    # and hand frames to OpenCV via appsink.
    gstreamer_str = (
        "nvcamerasrc ! "
        "video/x-raw(memory:NVMM), width=(int){w}, height=(int){h}, "
        "format=(string)NV12, framerate=(fraction)30/1 ! "
        "nvvidconv ! video/x-raw, width=(int){w}, height=(int){h}, "
        "format=(string)GRAY8 ! "
        "videoconvert ! appsink").format(w=width, h=height)
    return cv2.VideoCapture(gstreamer_str, cv2.CAP_GSTREAMER)

cap = open_cam_onboard()
if cap.isOpened():
    ret_val, display_buffer = cap.read()
    if ret_val:
        plt.imshow(display_buffer, cmap='gray')
        plt.show()

If I were able to get the bayered grayscale image directly, that’d be my first preference. If that can’t happen, I’d at least like to find the documentation about what is going on behind the scenes and if there is any way to control it.

gst-inspect-1.0 nvcamerasrc

may give you some insights about default parameters and ways to control it.

You may also try to get frames from the v4l2 interface (you can install v4l-conf, v4l-utils and qv4l2), but I’m not sure it provides a format that OpenCV can consume directly.
AFAIK, the gstreamer v4l2src plugin only supports 8-bit modes, while the IMX377 seems to be a 12-bit raw sensor. If it can be configured in an 8-bit mode, then you may be able to use the v4l2src plugin in gstreamer, but I have no experience with this sensor.
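If you do get 12-bit raw frames out of the v4l2 interface, the samples typically arrive either one-per-16-bit-word or packed two-pixels-per-three-bytes. A minimal NumPy sketch of unpacking both layouts (the exact alignment and packing order are assumptions here; check what your driver actually reports):

```python
import numpy as np

def unpack_16bit_container(raw_bytes, width, height):
    # 12-bit samples stored one per little-endian 16-bit word,
    # right-aligned; mask off the unused high bits.
    data = np.frombuffer(raw_bytes, dtype='<u2', count=width * height)
    return (data & 0x0FFF).reshape(height, width)

def unpack_12bit_packed(raw_bytes, width, height):
    # Two 12-bit samples packed into three bytes:
    #   byte0 = p0[11:4]
    #   byte1 = p0[3:0] | p1[3:0] << 4
    #   byte2 = p1[11:4]
    # (one common layout; yours may differ)
    b = np.frombuffer(raw_bytes, dtype=np.uint8,
                      count=width * height * 3 // 2).astype(np.uint16)
    b = b.reshape(-1, 3)
    p0 = (b[:, 0] << 4) | (b[:, 1] & 0x0F)
    p1 = (b[:, 2] << 4) | (b[:, 1] >> 4)
    out = np.empty(width * height, dtype=np.uint16)
    out[0::2] = p0
    out[1::2] = p1
    return out.reshape(height, width)
```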

This post was for OV5693 onboard camera, but you may find some additional info:

Since I posted that comment 10 days ago, I’ve figured out how to capture images with gstreamer and nvcamerasrc, and how to set all the available options. I’ve set every listed nvcamerasrc option that would affect the resulting image, and still the image is far too noisy, and the edges far too artificially enhanced, for us to be able to use.

I am able to get the raw bayer image using v4l2-ctl, but the downsides are 1) I have to stream the image to a file (though maybe something like tmpfs or named pipes can help there), and 2) I have to spend CPU cycles decoding and debayering the image instead of letting the ISP do that work for “free”. These issues may very well affect our ability to use the cameras at 30fps, particularly when we wish to start using multiple cameras at once.
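For what it’s worth, one cheap way to spend those CPU cycles is a half-resolution “debayer” that just gathers each 2x2 Bayer quad into color planes: no interpolation, so no invented pixel values, which may actually suit an accuracy-sensitive application. A sketch with NumPy, assuming an RGGB pattern (the actual CFA order of this module is an assumption; verify against your driver):

```python
import numpy as np

def debayer_half_res(bayer, pattern="RGGB"):
    # Split each 2x2 Bayer quad into its four color sites and return a
    # half-resolution RGB image; the two green sites are averaged.
    assert pattern == "RGGB"  # other CFA orders would reindex below
    r  = bayer[0::2, 0::2].astype(np.float32)
    g1 = bayer[0::2, 1::2].astype(np.float32)
    g2 = bayer[1::2, 0::2].astype(np.float32)
    b  = bayer[1::2, 1::2].astype(np.float32)
    return np.dstack([r, (g1 + g2) / 2.0, b])
```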

FYI, gstreamer’s v4l2src only works for USB cameras, not CSI cameras, so that’s a non-starter for us. The link you provided offers no additional information on this topic, but thanks anyhow.