
I’m running on a Jetson Orin (not a Nano and not a Super). Running JetPack R63.

I’m trying to capture an image using only v4l2 utilities. But I’m unable to view the image correctly. The image looks fine if I use the nvarguscamerasrc and convert to RGB.

Here are the formats for the camera:

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'RG10' (10-bit Bayer RGRG/GBGB)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)

I’m using v4l2-ctl to capture the frame. I’m skipping the first 10 seconds of data (300 frames at 30 fps) and writing the next frame to disk:

 $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ctl-frame.bin --stream-count=1 --stream-skip=300
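
As a quick sanity check (just a sketch, assuming the dump is in the current directory), one 3840x2160 frame of 16-bit words should be exactly 3840 * 2160 * 2 = 16,588,800 bytes:

import os

# Quick sanity check: one 3840x2160 frame stored as 16-bit words
# should be exactly 3840 * 2160 * 2 = 16,588,800 bytes.
print(os.path.getsize("ctl-frame.bin"))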

If I then try to render that image using the following Python, I get an image that is very gray and dull, where bright spots are purple.

import numpy as np
import cv2
from pathlib import Path

def main():
    file_path = Path("~/data/inspect_raw_frame/frame.bin")
    num_frames = 1
    with file_path.expanduser().open("rb") as f:
        data = np.fromfile(f, dtype=np.uint16).reshape((num_frames, 2160, 3840))
        data = data[-1,::-1,:]
        # Drop the 4 LSBs, which (per the referenced post) should be duplicates of the MSBs
        data = data >> 4
        # Drop 2 more LSBs to convert from UInt10 to UInt8
        data = data >> 2
        data = data.astype(np.uint8)

    cdata = cv2.cvtColor(data, cv2.COLOR_BayerBGGR2BGR)
    cdata = cv2.resize(cdata, (1920, 1080))
    cv2.imshow("image", cdata)
    cv2.waitKey(0)

    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

I based the Python code on this post. However, something is still not quite right.

Is that post accurate?

I attempted to validate this empirically. The 4 MSBs are indeed repeated in the 4 LSBs for this frame. However, the uppermost 2 bits of the 16-bit values are not all zero, which seems odd.

I checked this with the following code:

import numpy as np
from pathlib import Path

def main():
    file_path = Path("~/data/inspect_raw_frame/frame.bin")
    num_frames = 1
    with file_path.expanduser().open("rb") as f:
        data = np.fromfile(f, dtype=np.uint16).reshape((num_frames, 2160, 3840))

    num_equal = 0   # values where bits 13:10 match bits 3:0 (MSBs replicated into LSBs)
    num_zeroed = 0  # values where the top 2 bits are zero
    for value in data.flatten().tolist():
        binary = f"{value:016b}"
        if binary[:2] == "00":
            num_zeroed += 1
        if binary[2:6] == binary[-4:]:
            num_equal += 1
    print(np.size(data), num_equal, num_zeroed)

if __name__ == "__main__":
    main()

And here’s the output from running it:

8294400 8294400 8266286
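
For reference, the same checks can be written with vectorized NumPy bit operations; here is a sketch of the equivalent logic (it runs much faster than the per-value Python loop):

import numpy as np
from pathlib import Path

# Sketch: same two checks as above, vectorized with NumPy.
data = np.fromfile(Path("~/data/inspect_raw_frame/frame.bin").expanduser(),
                   dtype=np.uint16)
msb_copied = ((data >> 10) & 0xF) == (data & 0xF)  # bits 13:10 equal bits 3:0
top_zeroed = (data >> 14) == 0                     # bits 15:14 are zero
print(data.size, int(msb_copied.sum()), int(top_zeroed.sum()))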

Here’s a binary dump showing that the highest 2 bits are indeed not always zeroed out.

$ xxd -b -l 0x1ff1e8 frame.bin
001ff1e2: 10010011 01001110 01010001 01000100 11010110 01011010  .NQD.Z
                    ^                 ^                 ^

Am I misunderstanding the RG10 format? Any idea why my image does not look correct?

hello nick.j.meyer,

it’s raw content if you dump a frame via the v4l2 IOCTL directly.
it’s normal to have dark capture results since you did not call sensor operations to toggle the gain/exposure settings.
for instance, you may try adding --set-ctrl=gain=200 --set-ctrl=exposure=2800 to the v4l2-ctl command for testing.

BTW,
you may try a 3rd-party utility, such as 7yuv, which is able to view the raw content (and toggle some view settings) directly.

Thanks for the response!

I tried adding those params, but the image still looks dark with a strong purple hue. So it’s not just the darkness; the color tone is very wrong.

Can you provide details of the image format I should expect? The post I referenced above doesn’t seem right, as the uppermost 2 bits are not all zero.

I also just confirmed that what I see is very similar to what 7yuv sees. Everything is purple.

Can you detail the pixel layout I should expect for RG10?

hello nick.j.meyer,

may I know what your capture pipeline with the nvarguscamerasrc plugin is?

besides,
please refer to Applications Using V4L2 IOCTL Directly to dump the file as *.raw; you may also attach it here for cross-checking.

I’ve attached some example data.

Here is a command I ran using nvarguscamerasrc to write JPEG frames to disk. Attached is 00201.jpg, which is one of those sample frames.

gst-launch-1.0 nvarguscamerasrc name=mysource do-timestamp=true ! nvvidconv flip-method=2 !  "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12" ! nvvidconv ! jpegenc ! multifilesink location=%05d.jpg

Here is a command I ran using v4l2-ctl to capture a raw frame with the gain and exposure you recommended above. Attached is ctl-frame.bin, which is the output of this command. I have also attached a screenshot, ctl-frame.png, showing what ctl-frame.bin looks like in both 7yuv and the Python script I posted above.

v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ctl-frame.bin --stream-count=1 --stream-skip=200 --set-ctrl=gain=200 --set-ctrl=exposure=2800

I’m skipping the first 200 frames in both cases to be consistent.

All three files are within the attached tar ball.

Are you able to correctly convert the raw frame?

files_for_forum.tar.gz (6.4 MB)

hello nick.j.meyer,

FYI, the data is MSB-aligned; the LSBs are replicated from the MSBs.
please refer to the Orin TRM, section [2.4.5 RAW Memory Formats].
for instance: [RAW10 bit-layout diagram from the TRM]
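
As a rough sketch of that layout in Python (assuming, per the TRM section above, that the 10 data bits sit in bits [15:6] of each little-endian 16-bit word and the low 6 bits are replicated copies of the MSBs), an 8-bit preview can be extracted like this:

import numpy as np
import cv2

# Sketch: treat the dump as MSB-aligned RAW10 stored in 16-bit words
# (data in bits 15:6, low 6 bits replicated from the MSBs).
raw = np.fromfile("ctl-frame.bin", dtype=np.uint16).reshape(2160, 3840)
pix8 = (raw >> 8).astype(np.uint8)  # keep the top 8 of the 10 data bits
# Plain demosaic only; the Bayer constant is the one used earlier in the
# thread and may need adjusting for the actual sensor pattern.
bgr = cv2.cvtColor(pix8, cv2.COLOR_BayerBGGR2BGR)
cv2.imwrite("ctl-frame-demosaic.png", bgr)

Note that this still gives only a linear, un-white-balanced image, so it will not match the tuned nvarguscamerasrc output.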

Thanks for the diagram. It looks like that shows the 6 MSBs duplicated into the 6 LSBs, while this post (which I also linked in the original post) only has the 4 MSBs repeated.

I tried decoding both ways. Either way, I still get an image whose colors are very inaccurate compared to the nvarguscamerasrc output. So it seems something still isn’t quite right.

For the images I sent as an attachment in my last post, were you able to convert the raw frame to match the nvarguscamerasrc? If so, can you share some sample code for that color conversion?

How does nvarguscamerasrc convert to RGB? Can you share some sample code for how to correctly convert the raw frame?

hello nick.j.meyer,

it looks correct to me.
for instance: [screenshot of the converted raw frame]

hello nick.j.meyer,

please refer to the MMAPI example, 12_v4l2_camera_cuda,
or… Argus/samples/cudaBayerDemosaic, to process the raw capture buffer.

you may run $ sudo apt install nvidia-l4t-jetson-multimedia-api to install the MMAPI package.

Your image above also appears to be incorrect. Why are the light areas so green? This looks nothing like the image from nvarguscamerasrc, which is below.

Can you share appropriate camera settings and image format information in order to reproduce the image from nvarguscamerasrc?

Hi Jerry,

It looks like that tool doesn’t support RG10. When I try to run it, I get the following output and an all-black screen. I cycled through all of the supported arguments for -f, but they all produced the same result.

culinairy@culinairy-jonano:/usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda$ ./v4l2_camera_cuda -d /dev/video0 -s 3840x2160 -f YUYV -n 30
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 3840 height 2160
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
nvbufsurface: Failed to create EGLImage.
[ERROR] (NvEglRenderer.cpp:386) <renderer0> Could not get EglImage from fd. Not rendering
nvbufsurface: Failed to create EGLImage.
[ERROR] (NvEglRenderer.cpp:386) <renderer0> Could not get EglImage from fd. Not rendering
nvbufsurface: Failed to create EGLImage.
... 
<this repeats until I kill the app>

Can you provide another example that can support RG10?

Hi Jerry,

I tried the sample tool 18_v4l2_camera_cuda_rgb and was able to produce images. However, they still look incorrect. I tried every format the tool supports. The best image I could produce is very green. Here is an example.

Can you advise on how to fix the sample code to produce a correct image? Or is there another sample that supports RG10?

Here is the code I used to test the tool:

#!/bin/bash

formats=("RGB332" "RGB555" "RGB565" "RGB555X" "RGB565X" "BGR24" "RGB24" "BGR32" "RGB32" "Y8" "Y10" "Y12" "Y16" "UYVY" "VYUY" "YUYV" "YVYU" "NV12" "NV21" "NV16" "NV61" "NV24" "NV42" "SBGGR8" "SGBRG8" "SGRBG8" "SRGGB8" "SBGGR10_DPCM8" "SGBRG10_DPCM8" "SGRBG10_DPCM8" "SRGGB10_DPCM8" "SBGGR10" "SGBRG10" "SGRBG10" "SRGGB10" "SBGGR12" "SGBRG12" "SGRBG12" "SRGGB12" "DV" "MJPEG" "MPEG")


for f in "${formats[@]}"
do
    sudo ./v4l2_camera_cuda_rgb -f "$f" --size 3840x2160 --output "./output/file_${f}.ppm" > /dev/null
done
done

hello nick.j.meyer,

it’s normal; the conversion simply accounts for the replicated MSBs to decode the raw data.
there’s no post-processing (such as AWB) applied to adjust the image quality.
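
As an illustration only, here is a rough sketch (not the actual nvarguscamerasrc/ISP pipeline) of how much a crude gray-world white balance plus a simple gamma curve changes the look of the demosaiced frame:

import numpy as np
import cv2

# Sketch: crude gray-world white balance plus a 2.2 gamma curve on a
# demosaiced BGR frame. This only approximates a small part of what the
# ISP behind nvarguscamerasrc does (no CCM, lens shading, or tuned AWB).
raw = np.fromfile("ctl-frame.bin", dtype=np.uint16).reshape(2160, 3840)
bgr = cv2.cvtColor((raw >> 8).astype(np.uint8), cv2.COLOR_BayerBGGR2BGR)

bgr_f = bgr.astype(np.float32)
means = bgr_f.reshape(-1, 3).mean(axis=0)            # per-channel means
balanced = np.clip(bgr_f * (means.mean() / means), 0.0, 255.0) / 255.0
gamma = balanced ** (1.0 / 2.2)                       # simple display gamma
cv2.imwrite("ctl-frame-awb-gamma.png", (gamma * 255.0).astype(np.uint8))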


anyways, may I double-check your real use case?
why not use nvarguscamerasrc if you’re looking for tuned images?