What is the default output format of the Jetson board camera?

Sorry, your case is a bit unclear to me…

Xavier has no onboard camera by default. You can add either a CSI camera or a USB camera.

For example, if I use the camera module from the TX1/TX2 devkit: this CSI camera module has an OV5693 sensor, which sends frames in Bayer format. It would show up as /dev/video0. Details can be obtained with (you would need the v4l-utils package installed):

v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'BG10'
	Name        : 10-bit Bayer BGBG/GRGR
		Size: Discrete 2592x1944
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1458
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.008s (120.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1944
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1944
			Interval: Discrete 0.033s (30.000 fps)

You can see that the only format available from V4L is 10-bit Bayer (BG10). This means you would have to debayer the frames to get the BGR format expected by your OpenCV code (imshow).

gstreamer can be used for debayering 8-bit Bayer frames, but doesn't support higher bit depths.

A much better alternative is to bypass V4L and send the frames to the ISP, which debayers them in hardware, adding no load to your CPUs. On L4T versions up to R28.2 on TX1/TX2, the gstreamer plugin nvcamerasrc does exactly that, outputting I420 frames by default (some other formats such as NV12 may also be used).

In your OpenCV code, you use a gstreamer pipeline starting with nvcamerasrc to get I420 frames, send these into the nvvidconv plugin, which converts them into BGRx and copies them to CPU memory, and then use the videoconvert plugin to convert into the BGR format that is passed to your OpenCV application.

On Xavier with R31, you would use the plugin nvarguscamerasrc instead of nvcamerasrc. Its default output format is NV12, so you would change your pipeline to:

std::string get_tegra_pipeline(int width, int height, int fps) {
    return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(width) + ", height=(int)" +
            std::to_string(height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(fps) +
            "/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

If you are using another camera it may be different. If it is a CSI camera and all drivers/firmware/configs are installed, you may try to see what nvarguscamerasrc outputs:

gst-launch-1.0 -v nvarguscamerasrc ! fakesink
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
/GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
...