Orin

I’m running on a Jetson Orin (not a Nano and not a Super) with JetPack R63.

I’m trying to capture an image using only v4l2 utilities. But I’m unable to view the image correctly. The image looks fine if I use the nvarguscamerasrc and convert to RGB.

Here are the formats for the camera:

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'RG10' (10-bit Bayer RGRG/GBGB)
		Size: Discrete 3840x2160
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.017s (60.000 fps)

I’m using v4l2-ctl to capture the frame. I’m skipping the first 10 seconds of data (300 frames) and outputting the following frame to disk.

 $ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ctl-frame.bin --stream-count=1 --stream-skip=300

If I then try to render that image using the following Python script, I get an image that is very gray and dull, where bright spots are purple.

import numpy as np
import cv2
from pathlib import Path

def main():
    file_path = Path("~/data/inspect_raw_frame/frame.bin")
    num_frames = 1
    with file_path.expanduser().open("rb") as f:
        data = np.fromfile(f, dtype=np.uint16).reshape((num_frames, 2160, 3840))
        data = data[-1,::-1,:]
        # Remove the 4 duplicated MSBs 
        data = data >> 4
        # Truncate the lower 2 LSBs to convert from UInt10 to UInt8
        data = data >> 2
        data = data.astype(np.uint8)

    cdata = cv2.cvtColor(data, cv2.COLOR_BayerBGGR2BGR)
    cdata = cv2.resize(cdata, (1920, 1080))
    cv2.imshow("image", cdata)
    cv2.waitKey(0)

    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
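As a quick sanity check of what those two shifts do to synthetic values (the helper names here are my own, just for illustration): shifting by 6 and then casting to `uint8` keeps only the low 8 of the remaining 10 bits, so mid-to-bright values wrap around instead of scaling down.

```python
import numpy as np

# Sanity check of the shift chain above on synthetic 16-bit words
# (helper names are my own). ">> 6 then uint8" keeps only the low 8
# of the remaining 10 bits, so mid-to-bright values wrap to small
# numbers; ">> 8" scales down without wrap-around.
def shift6_then_u8(words):
    return (words >> 6).astype(np.uint8)

def shift8_then_u8(words):
    return (words >> 8).astype(np.uint8)

words = np.array([0x4000, 0x8000, 0xC000], dtype=np.uint16)  # 25%, 50%, 75% of full scale
print(shift6_then_u8(words))  # → [0 0 0]  (all three wrapped to zero)
print(shift8_then_u8(words))  # → [ 64 128 192]
```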

I based the Python code off of this post. However, something is still not quite right.

Is that post accurate?

I attempted to validate this empirically. The 4 MSBs of the 10-bit sample are indeed repeated in the 4 LSBs for every pixel of this frame. However, the uppermost 2 of the 16 bits are not all zero, which seems odd.

I checked this with the following code:

import numpy as np
from pathlib import Path

def main():
    file_path = Path("~/data/inspect_raw_frame/frame.bin")
    num_frames = 1
    with file_path.expanduser().open("rb") as f:
        data = np.fromfile(f, dtype=np.uint16).reshape((num_frames, 2160, 3840))

    num_equal = 0
    num_zeroed = 0
    for value in data.flatten().tolist():
        binary = f"{value:016b}"
        # top 2 bits of the 16-bit word (expected to be zero)
        if binary[:2] == "00":
            num_zeroed += 1
        # bits 13..10 (the sample's 4 MSBs) vs. the 4 LSBs (replicated?)
        if binary[2:6] == binary[-4:]:
            num_equal += 1
    print(np.size(data), num_equal, num_zeroed)

if __name__ == "__main__":
    main()

And here’s the output from running it:

8294400 8294400 8266286
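For reference, the same check can be vectorized with NumPy bit operations (`check_replication` is a name I made up for this sketch):

```python
import numpy as np

# Vectorized equivalent of the per-pixel string check above
# (check_replication is a made-up name). Bit positions match the
# string slices: binary[:2] -> bits 15..14, binary[2:6] -> bits 13..10,
# binary[-4:] -> bits 3..0.
def check_replication(data):
    top2_zero = int(np.count_nonzero((data >> 14) == 0))
    replicated = int(np.count_nonzero(((data >> 10) & 0xF) == (data & 0xF)))
    return data.size, replicated, top2_zero

sample = np.array([0b0010110000001011, 0b1100000000000000], dtype=np.uint16)
print(check_replication(sample))  # → (2, 2, 1)
```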

Here’s a binary dump showing that indeed the highest 2 bits are not zeroed out.

$ xxd -b -l 0x1ff1e8 frame.bin
001ff1e2: 10010011 01001110 01010001 01000100 11010110 01011010  .NQD.Z
                    ^                 ^                 ^

Am I misunderstanding the RG10 format? Any idea why my image does not look correct?

hello nick.j.meyer,

it’s raw content if you dump a frame via the v4l2 IOCTL directly.
it’s normal to get dark capture results, since you did not program the sensor’s gain/exposure settings.
for instance, you may try adding --set-ctrl=gain=200 --set-ctrl=exposure=2800 to the v4l2-ctl command for testing.

BTW,
you may try using a 3rd-party utility, such as 7yuv, which is able to view the raw content (and toggle some view settings) directly.

Thanks for the response!

I tried adding those params, but the image still looks dark with a strong purple hue. So it’s not just the darkness; the color tone is very wrong.

Can you provide details of the image format I can expect? The post I referenced above doesn’t seem right, as the uppermost 2 bits are not all zero.

I also just confirmed that what I see is very similar to what 7yuv sees. Everything is purple.

Can you detail the pixel layout I should expect for RG10?

hello nick.j.meyer,

regarding the below:

may I know what’s your capture pipeline with nvarguscamerasrc plugin?

besides,
please refer to Applications Using V4L2 IOCTL Directly to dump the file as *.raw; you may also attach it here for cross-checking.

I’ve attached some example data.

Here is a command I ran using the nvarguscamerasrc to write jpeg frames to disk. Attached is 00201.jpg which is one of those sample frames.

gst-launch-1.0 nvarguscamerasrc name=mysource do-timestamp=true ! nvvidconv flip-method=2 !  "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12" ! nvvidconv ! jpegenc ! multifilesink location=%05d.jpg

Here is a command I ran using the v4l2-ctl to capture a raw frame using the gain and exposure you recommended above. Attached is ctl-frame.bin which is the output of this command. I have attached a screenshot ctl-frame.png of what ctl-frame.bin looks like using both 7yuv and the python script I posted above.

v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ctl-frame.bin --stream-count=1 --stream-skip=200 --set-ctrl=gain=200 --set-ctrl=exposure=2800

I’m skipping the first 200 frames in both cases to be consistent.

All three files are within the attached tar ball.

Are you able to correctly convert the raw frame?

files_for_forum.tar.gz (6.4 MB)

hello nick.j.meyer,

FYI, the data is MSB-aligned, and the LSBs are replicated from the MSBs.
please refer to the Orin TRM, section [2.4.5 RAW Memory Formats].
for instance,
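As a rough sketch of that layout in code (my own illustration based on the description above, not the TRM diagram itself; `rg10_msb_aligned_to_uint8` is a hypothetical name): if the 10 valid bits sit in bits 15..6 of each little-endian 16-bit word, with the low bits replicated from the sample's MSBs, then a right shift by 8 yields an 8-bit value with no wrap-around.

```python
import numpy as np

# Sketch: unpack MSB-aligned 10-bit data stored in 16-bit words,
# assuming the valid sample occupies bits 15..6 and the low 6 bits
# replicate the sample's 6 MSBs (per the description above).
def rg10_msb_aligned_to_uint8(words):
    # >> 8 keeps the sample's top 8 bits and discards the replicated
    # low bits, so no wrap-around occurs when casting to uint8.
    return (words >> 8).astype(np.uint8)

full_scale = np.uint16((0x3FF << 6) | 0x3F)        # 10-bit 1023 -> 0xFFFF
mid_scale = np.uint16((0x200 << 6) | 0b100000)     # 10-bit  512 -> 0x8020
print(rg10_msb_aligned_to_uint8(np.array([full_scale, mid_scale])))  # → [255 128]
```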

Thanks for the diagram. It looks like it shows the 6 MSBs being duplicated, while this post (that I also linked in the original post) only has the 4 MSBs repeated.

I tried decoding both ways. Either way, I still get an image whose colors are very inaccurate compared to nvarguscamerasrc. So it seems something still isn’t quite right.

For the images I sent as an attachment in my last post, were you able to convert the raw frame to match the nvarguscamerasrc? If so, can you share some sample code for that color conversion?

How does nvarguscamerasrc convert to RGB? Can you share some sample code for how to correctly convert the raw frame?

hello nick.j.meyer,

it looks correct to me.
for instance,

hello nick.j.meyer,

please refer to the MMAPI example, 12_v4l2_camera_cuda,
or… Argus/samples/cudaBayerDemosaic, to process the raw capture buffer.

you may run $ sudo apt install nvidia-l4t-jetson-multimedia-api to install the MMAPI package.

Your image above also appears to be incorrect. Why are the light areas so green? This looks nothing like the image from nvarguscamerasrc, which is below.

Can you share appropriate camera settings and image format information in order to reproduce the image from nvarguscamerasrc?

Hi Jerry,

It looks like that tool doesn’t support RG10. When I try to run it, I get the following output and an all-black screen. I cycled through all the supported arguments for -f, but they all produced the same result.

culinairy@culinairy-jonano:/usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda$ ./v4l2_camera_cuda -d /dev/video0 -s 3840x2160 -f YUYV -n 30
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 3840 height 2160
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
WARN: request_camera_buff(): (line:359) Camera v4l2 buf length is not expected
nvbufsurface: Failed to create EGLImage.
[ERROR] (NvEglRenderer.cpp:386) <renderer0> Could not get EglImage from fd. Not rendering
nvbufsurface: Failed to create EGLImage.
[ERROR] (NvEglRenderer.cpp:386) <renderer0> Could not get EglImage from fd. Not rendering
nvbufsurface: Failed to create EGLImage.
... 
<this repeats until I kill the app>

Can you provide another example that can support RG10?

Hi Jerry,

I tried the sample tool 18_v4l2_camera_cuda_rgb and I was able to produce images. However, they still look incorrect. I tried every single format possible that the tool supported. The best image I could produce is very green. Here is an example.

Can you advise on how to fix the sample code to produce a correct image? Or is there another sample that supports RG10?

Here is the code I used to test the tool:

#!/bin/bash

formats=("RGB332" "RGB555" "RGB565" "RGB555X" "RGB565X" "BGR24" "RGB24" "BGR32" "RGB32" "Y8" "Y10" "Y12" "Y16" "UYVY" "VYUY" "YUYV" "YVYU" "NV12" "NV21" "NV16" "NV61" "NV24" "NV42" "SBGGR8" "SGBRG8" "SGRBG8" "SRGGB8" "SBGGR10_DPCM8" "SGBRG10_DPCM8" "SGRBG10_DPCM8" "SRGGB10_DPCM8" "SBGGR10" "SGBRG10" "SGRBG10" "SRGGB10" "SBGGR12" "SGBRG12" "SGRBG12" "SRGGB12" "DV" "MJPEG" "MPEG")


for f in "${formats[@]}"
do
    sudo ./v4l2_camera_cuda_rgb -f "$f" --size 3840x2160 --output "./output/file_${f}.ppm" > /dev/null
done

hello nick.j.meyer,

it’s normal; the sample simply applies the replicated-MSBs calculation to convert the raw data.
there’s no post-processing (such as AWB) to adjust the image quality.
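for illustration, a toy gray-world white balance (my own sketch, not NVIDIA’s ISP code; `gray_world_awb` is a made-up name) shows the kind of correction a raw dump lacks:

```python
import numpy as np

# Toy gray-world auto white balance: scale each channel so its mean
# matches the overall gray mean. Only an illustration of the kind of
# post-processing an ISP performs; not NVIDIA code.
def gray_world_awb(rgb):
    rgb = rgb.astype(np.float32)
    means = rgb.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)   # pull each channel toward gray
    return np.clip(rgb * gains, 0, 255).round().astype(np.uint8)

# A uniform green-cast image becomes neutral gray after correction.
img = np.tile(np.array([60, 120, 60], dtype=np.uint8), (2, 2, 1))
print(gray_world_awb(img)[0, 0])  # → [80 80 80]
```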


anyway, may I double-check your real use-case?
why not use nvarguscamerasrc if you’re looking for tuned images?

Hi Jerry,

I wish I could just use nvarguscamerasrc; however, it crashes in long-running programs. I posted another thread about how to fix that, but none of the patches helped. There still seem to be some outstanding bugs.

Can you provide some details on how nvarguscamerasrc sets gain and exposure? I tried doing an experiment to see if varying those parameters helped, but all the images looked the same. So it seems I’m doing something wrong.

#!/bin/bash

set -e

for gain in {16..367..36}
do
    for exposure in {13..683710..68371}
    do
        filename=ctl-frame-${gain}-${exposure}.bin
        echo ">>> ${filename}"
        v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ${filename} --stream-count=1 --stream-skip=200 --set-ctrl=gain=${gain} --set-ctrl=exposure=${exposure} --set-ctrl=bypass_mode=0
    done
done

Any advice would be greatly appreciated!

I noticed that I can run v4l2-ctl -l -d /dev/video0 while nvarguscamerasrc is running to see the values it sets.

Here are the values I see:

$ v4l2-ctl -l -d /dev/video0

Camera Controls

                     group_hold 0x009a2003 (bool)   : default=0 value=0 flags=execute-on-write
                    sensor_mode 0x009a2008 (int64)  : min=0 max=2 step=1 default=0 value=1 flags=slider
                           gain 0x009a2009 (int64)  : min=16 max=357 step=1 default=16 value=355 flags=slider
                       exposure 0x009a200a (int64)  : min=13 max=683710 step=1 default=2495 value=29999 flags=slider
                     frame_rate 0x009a200b (int64)  : min=2000000 max=60000000 step=1 default=60000000 value=30000001 flags=slider
           sensor_configuration 0x009a2032 (u32)    : min=0 max=4294967295 step=1 default=0 dims=[22] flags=read-only, volatile, has-payload
         sensor_mode_i2c_packet 0x009a2033 (u32)    : min=0 max=4294967295 step=1 default=0 dims=[1026] flags=read-only, volatile, has-payload
      sensor_control_i2c_packet 0x009a2034 (u32)    : min=0 max=4294967295 step=1 default=0 dims=[1026] flags=read-only, volatile, has-payload
                    bypass_mode 0x009a2064 (intmenu): min=0 max=1 default=0 value=1 (1 0x1)
                override_enable 0x009a2065 (intmenu): min=0 max=1 default=0 value=1 (1 0x1)
                   height_align 0x009a2066 (int)    : min=1 max=16 step=1 default=1 value=1
                     size_align 0x009a2067 (intmenu): min=0 max=2 default=0 value=0 (1 0x1)
               write_isp_format 0x009a2068 (int)    : min=1 max=1 step=1 default=1 value=1
       sensor_signal_properties 0x009a2069 (u32)    : min=0 max=4294967295 step=1 default=0 dims=[30][18] flags=read-only, has-payload
        sensor_image_properties 0x009a206a (u32)    : min=0 max=4294967295 step=1 default=0 dims=[30][16] flags=read-only, has-payload
      sensor_control_properties 0x009a206b (u32)    : min=0 max=4294967295 step=1 default=0 dims=[30][36] flags=read-only, has-payload
              sensor_dv_timings 0x009a206c (u32)    : min=0 max=4294967295 step=1 default=0 dims=[30][16] flags=read-only, has-payload
               low_latency_mode 0x009a206d (bool)   : default=0 value=0
               preferred_stride 0x009a206e (int)    : min=0 max=65535 step=1 default=0 value=0
                   sensor_modes 0x009a2082 (int)    : min=0 max=30 step=1 default=30 value=2 flags=read-only

Here is the command I tried to run:

$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=3840,height=2160,pixelformat=RG10 --stream-mmap --stream-to ctl-frame.bin --stream-count=1 --stream-skip=200 --set-ctrl sensor_mode=1 --set-ctrl gain=355 --set-ctrl exposure=29999 --set-ctrl frame_rate=30000001 --set-ctrl bypass_mode=1 --set-ctrl override_enable=1

However, when I try to run with those settings, it won’t stream and just hangs forever. I think this is related to bypass_mode and override_enable. Do you know how to fix this?

hello nick.j.meyer,

may I know what your complete test pipeline is?
please refer to the Camera Architecture Stack; you cannot use two different applications to access the same video node.

you may run $ gst-inspect-1.0 nvarguscamerasrc to check the element properties.
for instance,

  exposuretimerange   : Property to adjust exposure time range in nanoseconds
                        Use string with values of Exposure Time Range (low, high)
                        in that order, to set the property.
                        eg: exposuretimerange="34000 358733000"
                        flags: readable, writable
                        String. Default: null
  gainrange           : Property to adjust gain range
                        Use string with values of Gain Time Range (low, high)
                        in that order, to set the property.
                        eg: gainrange="1 16"

for example,
$ gst-launch-1.0 nvarguscamerasrc exposuretimerange="74000 76000" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! xvimagesink -e

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.