Sample application modification points for using a custom YUYV camera

Hello

I attached our custom camera to a Jetson TX2 board.
Its output is YUYV.
With the default onboard camera (OV5693) of the Jetson TX2, every sample in tegra_multimedia_api/samples works fine.
But with our custom camera, none of the sample apps work.
We verified that the camera itself works using the commands below.
capture : v4l2-ctl --set-fmt-video=width=1824,height=940,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 -d /dev/video0 --stream-to=ov491.raw

play: mplayer ov491.raw -demuxer rawvideo -rawvideo w=1824:h=940:fps=30:format=uyvy

We also need more functionality such as video preview and color space conversion, so we want to use the sample applications in tegra_multimedia_api/samples.

I have updated the question at the link below with the error log produced when executing a sample application with our camera.
https://devtalk.nvidia.com/default/topic/1037151/where-to-get-source-/#5269060

Please check the output of v4l2-ctl:
nvidia@:$ v4l2-ctl -d /dev/video0 --all
Driver Info (not using libv4l2):
Driver name : tegra-video
Card type : vi-output, ov491 2-0024
Bus info : platform:15700000.vi:0
Driver version: 4.4.38
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 0: no power)
Format Video Capture:
Width/Height : 1824/940
Pixel Format : ‘YUYV’
Field : None
Bytes per Line : 3648
Size Image : 3429120
Colorspace : sRGB
Transfer Function : Default
YCbCr Encoding : Default
Quantization : Default
Flags :

Camera Controls

                 hdr_enable (intmenu): min=0 max=1 default=0 value=0
                sensor_mode (int64)  : min=0 max=0 step=0 default=0 value=254 flags=slider
                       gain (int64)  : min=0 max=0 step=0 default=0 value=0 flags=slider
                   exposure (int64)  : min=0 max=0 step=0 default=0 value=125 flags=slider
                 frame_rate (int64)  : min=0 max=0 step=0 default=0 value=125829120 flags=slider
                bypass_mode (intmenu): min=0 max=1 default=0 value=0
            override_enable (intmenu): min=0 max=1 default=0 value=0
               height_align (int)    : min=1 max=16 step=1 default=1 value=1
                 size_align (intmenu): min=0 max=2 default=0 value=0
           write_isp_format (int)    : min=1 max=1 step=1 default=1 value=1
   sensor_signal_properties (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
    sensor_image_properties (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
  sensor_control_properties (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
          sensor_dv_timings (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
               sensor_modes (int)    : min=0 max=30 step=1 default=30 value=1 flags=read-only

Please refer to

tegra_multimedia_api/samples/12_camera_v4l2_cuda
tegra_multimedia_api/samples/v4l2cuda

09_camera_jpeg_capture and 10_camera_recording are for Bayer sensors using the Tegra ISP engine.

Thank you for your answer.

I tried the 12_camera_v4l2_cuda sample, but I still can't get correct output on the screen.
./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f YUYV -n 30

The output is below.

Hi wooleeyang,
We suggest you try a USB camera and compare the two cases. These samples are verified with the Logitech C930 and the e-con See3CAM_CU135.

Our camera is verified using the commands below.

capture : v4l2-ctl --set-fmt-video=width=1824,height=940,pixelformat=YUYV --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100 -d /dev/video0 --stream-to=ov491.raw

play: mplayer ov491.raw -demuxer rawvideo -rawvideo w=1824:h=940:fps=30:format=uyvy

I think the problem is in how our camera works with the NVIDIA sample application, and whether the NVIDIA Argus library supports it.
For example, I doubt whether the NVIDIA renderer framework supports YUYV.

Hi wooleeyang,
With ‘-n 30’, it dumps one YUYV frame. Is the dumped YUYV good?

The sample is implemented with the standard V4L2 API. If you can get good YUYV with v4l2-ctl, it should be the same when running the sample.
One thing that can be an issue is the resolution. 1824x940 is not common, so we suggest you try a standard resolution such as 1280x720.
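To illustrate, the format negotiation comes down to a VIDIOC_S_FMT request, after which the driver reports what it will actually deliver. A minimal sketch (illustrative only, not the sample's exact code):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>
    #include <cstdio>

    // Ask the driver for a capture format. The driver may adjust width,
    // height and pixelformat to what the sensor mode actually provides,
    // so always check the values it writes back after VIDIOC_S_FMT.
    int request_format(const char *dev, unsigned w, unsigned h)
    {
        int fd = open(dev, O_RDWR);
        if (fd < 0)
            return -1;

        struct v4l2_format fmt = {};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = w;                      // e.g. 1824
        fmt.fmt.pix.height = h;                     // e.g. 940
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field = V4L2_FIELD_NONE;

        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
            return -1;

        printf("driver delivers %ux%u, bytesperline %u\n",
               fmt.fmt.pix.width, fmt.fmt.pix.height,
               fmt.fmt.pix.bytesperline);
        return fd;
    }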

Hi DaneLLL.
The dumped YUYV is correct. I checked it using the command below.
mplayer -demuxer rawvideo -rawvideo w=1824:h=940:format=uyvy camera.YUYV -loop 0
Our custom camera has a fixed resolution, so it cannot be changed.

So I think the next steps to check are the VIC conversion and the rendering.
Could you help me further to solve this problem?

Where can I get the source code for the NVIDIA framework, for example the implementation of NvBufferCreateEx?
I need to debug it.
This sample app seems to convert the YUYV camera buffer into a YUV420 render buffer, right?
In my analysis, almost all of the NVIDIA-provided samples use YUV420 or NV12 as the rendering buffer format. Is that right?
When I change the rendering buffer format to YUYV (NvBufferColorFormat_YUYV), GL error 1285 (out of memory) is printed:
[ERROR] (NvEglRenderer.cpp:393) glDrawArrays arrays failed:1285
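For context, the render buffer allocation I am changing looks roughly like this. This is a sketch based on the NvBufferCreateEx API in nvbuf_utils.h and my reading of the sample, so field names such as ctx->cam_w may differ from the actual source:

    #include "nvbuf_utils.h"

    // Default path: a pitch-linear YUV420 buffer is created as the render
    // target. I only changed colorFormat to NvBufferColorFormat_YUYV, which
    // is when the glDrawArrays 1285 error appears.
    NvBufferCreateParams input_params = {0};
    input_params.payloadType = NvBufferPayload_SurfArray;
    input_params.width = ctx->cam_w;
    input_params.height = ctx->cam_h;
    input_params.layout = NvBufferLayout_Pitch;
    input_params.colorFormat = NvBufferColorFormat_YUV420;
    input_params.nvbuf_tag = NvBufferTag_NONE;

    if (-1 == NvBufferCreateEx(&ctx->render_dmabuf_fd, &input_params))
        ERROR_RETURN("Failed to create NvBuffer");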

Hi, below are today's test results.
The YUYV raw data looks good, but the rendered data and the converted data are both wrong.

./camera_v4l2_cuda -d /dev/video0 -s 1824x940 -f YUYV -n 30
[b]==> result: https://ibb.co/iicLe8[/b]
mplayer -demuxer rawvideo -rawvideo w=1824:h=940:format=uyvy camera.YUYV -loop 0
[b]==> result: https://ibb.co/bKMSsT[/b]

./video_convert camera.YUYV 1824 940 YUYV test.yuv 1824 940 ABGR32
mplayer -demuxer rawvideo -rawvideo w=1824:h=940:format=rgb32 test.yuv -loop 0
[b]==> result: https://ibb.co/msyRK8[/b]

According to the test above, it may be a resolution problem.
If so, how do we add support for our custom resolution on the NVIDIA platform?

Hi wooleeyang,

Please attach the "camera.YUYV" file so we can try to reproduce the issue with your command:

./video_convert <b>camera.YUYV</b> 1824 940 YUYV test.yuv 1824 940 ABGR32

Thanks!

Thank you.
Please check the link below.

Hi wooleeyang,
Your camera.YUYV is actually UYVY. Please run

./video_convert camera.YUYV 1824 940 <b>UYVY</b> test.yuv 1824 940 ABGR32
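For clarity, YUYV and UYVY are both packed 4:2:2 formats that differ only in byte order, so a viewer or converter told the wrong one produces wrong colors. A small illustrative helper (hypothetical, just to show the layouts):

    #include <cstdint>
    #include <cstddef>

    // Byte order per 2-pixel group (4 bytes):
    //   YUYV (YUY2): Y0 U  Y1 V
    //   UYVY       : U  Y0 V  Y1
    void uyvy_to_yuyv(const uint8_t *src, uint8_t *dst, size_t n_pixels)
    {
        for (size_t i = 0; i < n_pixels / 2; ++i) {
            dst[4 * i + 0] = src[4 * i + 1];   // Y0
            dst[4 * i + 1] = src[4 * i + 0];   // U
            dst[4 * i + 2] = src[4 * i + 3];   // Y1
            dst[4 * i + 3] = src[4 * i + 2];   // V
        }
    }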

Thank you.
I can see the converted ABGR32 file now.
That means the rendering step has an issue in ./camera_v4l2_cuda.
As I understand it, that sample application's flow is as follows (see the sketch after the list):

  1. Read the V4L2 buffer
  2. Save the frame as a raw file (camera.YUYV)
  3. Convert YUYV to YUV420
  4. Render to the display
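In code, I believe steps 3 and 4 look roughly like the sketch below. It is based on the NvBufferTransform and NvEglRenderer APIs rather than the sample verbatim, and capture_dmabuf_fd is a placeholder name for the fd of the captured frame:

    #include "nvbuf_utils.h"
    #include "NvEglRenderer.h"

    // Step 3: VIC converts the packed 4:2:2 capture buffer into the planar
    // YUV420 render buffer (both are dmabuf fds created earlier).
    NvBufferTransformParams transform_params = {0};
    transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform_params.transform_filter = NvBufferTransform_Filter_Smart;

    if (-1 == NvBufferTransform(capture_dmabuf_fd, ctx->render_dmabuf_fd,
                                &transform_params))
        ERROR_RETURN("Failed to convert the buffer");

    // Step 4: hand the converted dmabuf fd to the EGL renderer.
    ctx->renderer->render(ctx->render_dmabuf_fd);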

With your help we have verified it up to step 3.
But our rendering result is still not correct.

Could you check whether the NvEglRenderer framework supports a custom resolution?

Hi wooleeyang,
We suggest you render at a standard resolution such as 1280x720.

Or you may check the source code:

tegra_multimedia_api/samples/common/classes/NvEglRenderer.cpp

Hi DaneLLL
I tried with
./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f UYVY -n 30

but the result is the same as before.

Hi wooleeyang,
Did you modify the sample to fit your case? We are not sure, but you should configure the V4L2 capture buffer as 1824x940 and the render buffer as 1280x720.

Hmm. I used the sample application without any changes, with the command below:
./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f YUYV -n 30

I think V4L2 already knows my resolution, doesn't it?
So I thought the camera_v4l2_cuda application produces the camera.YUYV raw file as a result.
I could verify camera.YUYV using the 7yuv program: after setting the resolution to 1824x940 in 7yuv, I could see a correct image.

nvidia@:$ v4l2-ctl -d /dev/video0 --all
Driver Info (not using libv4l2):
Driver name : tegra-video
Card type : vi-output, ov491 2-0024

Format Video Capture:
Width/Height : 1824/940
Pixel Format : ‘YUYV’

Size Image : 3429120
Colorspace : sRGB

Hi,
The device tree seems wrong. v4l2-ctl should show the pixel format UYVY, not YUYV.

Hi DaneLLL
I fixed the kernel device tree to change the V4L2 information.
I can now see UYVY instead of YUYV in v4l2-ctl,
but the camera_v4l2_cuda application result is the same.

nvidia@:$ cat /proc/device-tree/i2c@3180000/ov491_a@24/mode0/pixel_t
uyvy
nvidia@:$

nvidia@:$ v4l2-ctl -d /dev/video0 --all
Video input : 0 (Camera 0: no power)
Format Video Capture:
Width/Height : 1824/940
Pixel Format : ‘UYVY’
Field : None
Bytes per Line : 3648
Size Image : 3429120
Colorspace : sRGB
Transfer Function : Default
YCbCr Encoding : Default
Quantization : Default

Hi wooleeyang,
Have you tried rendering at 1280x720? You need to modify:

    // Create EGL renderer
    ctx->renderer = NvEglRenderer::createEglRenderer("renderer0",
            [b]1280, 720[/b], 0, 0);

    [b]input_params.width = 1280;
    input_params.height = 720;[/b]

    // Create Render buffer
    if (-1 == NvBufferCreateEx(&ctx->render_dmabuf_fd, &input_params))
        ERROR_RETURN("Failed to create NvBuffer");
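The capture side can stay at the camera's fixed mode; NvBufferTransform scales as well as converts when it copies into the smaller render buffer. Roughly (a sketch; capture_params is a placeholder for whatever NvBufferCreateParams the capture path uses):

    // Capture buffers keep the sensor's native 1824x940 UYVY mode ...
    capture_params.width = 1824;
    capture_params.height = 940;
    capture_params.colorFormat = NvBufferColorFormat_UYVY;

    // ... while the render buffer above is 1280x720 YUV420. The
    // NvBufferTransform() call in the capture loop then performs the
    // scaling together with the color conversion, so no further change
    // should be needed for rendering.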