Typical approaches to test camera functionality for L4T R23.2 on Jetson TX1

We would like to introduce some basic approaches to test camera functionality with the Jetson TX1 on L4T R23.2.

Note: For how to bring up a new sensor on the Jetson TX1, please refer to the “Video for Linux User Guide” chapter in the release documentation; that topic is beyond the scope of this post: http://developer.nvidia.com/embedded/dlc/l4t-documentation-23-2

1. Using the gstreamer “nvcamerasrc” plugin

This plugin is implemented by NVIDIA. It supports many options to control NVIDIA ISP properties (as shown by `gst-inspect-1.0 nvcamerasrc`). With this path, we can enable NVIDIA ISP post-processing for Bayer sensors, perform format conversion, or output directly for YUV sensors or USB cameras.

Note that enabling this path has the following prerequisites:

  • Enable CONFIG_VIDEO_TEGRA_VI_BYPASS and disable CONFIG_VIDEO_TEGRA_VI2 from Kconfig
  • Expose camera resolution and csi pad information to the user library

Refer to the dts files below:

arch/arm64/boot/dts/tegra210-platforms/tegra210-jetson-cv-camera-e3323-a00.dtsi
arch/arm64/boot/dts/tegra210-platforms/tegra210-camera-e3323-a00.dtsi
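The first prerequisite can be applied with the kernel's own config helper. This is only a sketch: it assumes you run it from the kernel source root, where `scripts/config` ships with the kernel sources; the guard and the `msg` variable are illustrative additions.

```shell
# Sketch: flip the two VI options in tegra21_defconfig before rebuilding.
# Assumes this runs from the kernel source root (scripts/config ships there).
CFG=arch/arm64/configs/tegra21_defconfig
if [ -x scripts/config ] && [ -f "$CFG" ]; then
  scripts/config --file "$CFG" \
    --enable VIDEO_TEGRA_VI_BYPASS \
    --disable VIDEO_TEGRA_VI2
  msg="updated $CFG"
else
  msg="skipped: run this from the kernel source root"
fi
echo "$msg"
```

Rebuild and reflash the kernel afterwards for the change to take effect.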

Example: Bayer sensor (1920x1080/30/BGGR)

  1. Save preview into a file:

```
$ gst-launch-1.0 nvcamerasrc num-buffers=200 sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```

  2. Render preview to an HDMI screen:

```
$ gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvhdmioverlaysink -ev
```

2. Using the open-source “v4l2src” plugin

This path is usually for YUV sensors or USB cameras to output YUV images without NVIDIA ISP post-processing, and therefore, it doesn’t involve any camera software stack powered by NVIDIA.

Example: USB camera (480P@30/YUY2)

  1. Save preview into a file (with software-based format conversion):

```
$ gst-launch-1.0 v4l2src num-buffers=200 device=/dev/video0 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! videoconvert ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```

  2. Render preview to the screen:

```
# export DISPLAY=:0 if you are operating from a remote console
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! xvimagesink -ev
```

Example: YUV sensor (480P/30/UYVY)

  1. Save preview into a file (with hardware-based format conversion):

```
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```

  2. Render preview to the screen:

```
# export DISPLAY=:0 if you are operating from a remote console
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! xvimagesink -ev
```

3. App invoking V4L2 ioctl directly

This is a “purer” path than the second approach; it is used to verify basic functionality during sensor bring-up.

Example: YUV sensor (480P/30/UYVY)

$ ./yavta /dev/video0 -c1 -n1 -s640x480 -fUYVY -Fcam.raw
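If yavta is not at hand, a similar single-frame capture can be sketched with `v4l2-ctl` from v4l-utils. This is only a sketch: it assumes v4l-utils is installed and that `/dev/video0` exposes the 640x480 UYVY format from the example above; the guard and `msg` variable are illustrative.

```shell
# Sketch: single-frame UYVY capture with v4l2-ctl instead of yavta.
# DEV and the 640x480/UYVY format are assumptions matching the example above.
DEV=/dev/video0
if [ -c "$DEV" ] && command -v v4l2-ctl >/dev/null 2>&1; then
  v4l2-ctl -d "$DEV" \
    --set-fmt-video=width=640,height=480,pixelformat=UYVY \
    --stream-mmap --stream-count=1 --stream-to=cam.raw
  msg="captured one frame to cam.raw"
else
  msg="skipped: $DEV or v4l2-ctl not available"
fi
echo "$msg"
```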

Hi nVConan

After building the source code for v4l2 support, I’m not able to use nvcamerasrc.

$ gst-launch-1.0 nvcamerasrc num-buffers=200 sensor-id=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev

It gives an error when I launch GStreamer with nvcamerasrc.

Here you are talking about enabling and disabling some macros in the Kconfig file:

Note that, in order to enable this path, there are below prerequisites,
Enable CONFIG_VIDEO_TEGRA_VI_BYPASS and disable CONFIG_VIDEO_TEGRA_VI2 from Kconfig
Expose camera resolution and csi pad information to user library

Do you mean that while building the kernel for V4L2 support there is a provision to do the above steps?
I was modifying the tegra21_defconfig file to enable and disable some macros (following the Video for Linux User Guide in the documents).

Could you please explain the mistake I made while building, so that I can use nvcamerasrc along with the V4L2-support kernel?

If possible, could you please elaborate on the prerequisite steps so that I can follow them?

We support two modes to program the VI, corresponding to CONFIG_VIDEO_TEGRA_VI_BYPASS and CONFIG_VIDEO_TEGRA_VI2.
You can select either one to receive the CSI stream from an external camera; as you may have noticed, CONFIG_VIDEO_TEGRA_VI2 is the default option in the L4T R23.2 kernel release.
If you want to use the first approach (nvcamerasrc), you have to enable CONFIG_VIDEO_TEGRA_VI_BYPASS and disable CONFIG_VIDEO_TEGRA_VI2. Conversely, if you want to use the last two approaches (v4l2src and yavta), you have to disable CONFIG_VIDEO_TEGRA_VI_BYPASS and enable CONFIG_VIDEO_TEGRA_VI2.
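The selection rule above can also be checked against a kernel config dump. This is only a sketch: `check_vi_mode` is a hypothetical helper name, and piping in `/proc/config.gz` assumes CONFIG_IKCONFIG_PROC is enabled in the running kernel.

```shell
# Sketch: decide which capture path a kernel config enables.
# Usage on the board (assumes CONFIG_IKCONFIG_PROC):
#   zcat /proc/config.gz | check_vi_mode
check_vi_mode() {
  cfg=$(cat)  # read the whole config text once from stdin
  if printf '%s\n' "$cfg" | grep -q '^CONFIG_VIDEO_TEGRA_VI_BYPASS=y'; then
    echo "vi-bypass mode: use nvcamerasrc"
  elif printf '%s\n' "$cfg" | grep -q '^CONFIG_VIDEO_TEGRA_VI2=y'; then
    echo "vi2 mode: use v4l2src or yavta"
  else
    echo "neither VI option enabled"
  fi
}
```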

Additionally, you can use the command below to check the current mode at runtime:

ubuntu@tegra-ubuntu:~$ ./yavta  -l /dev/video0 
Device /dev/video0 opened.
Device `vi-bypass' on `' is a video capture device.
--- User Controls (class 0x00980001) ---
control 0x00980913 `Gain' min 0 max 16 step 1 default 1 current 1.
--- Camera Controls (class 0x009a0001) ---
control 0x009a2000 `Frame Length' min 1 max 65535 step 1 default 1 current 1.
control 0x009a2001 `Coarse Time' min 1 max 65535 step 1 default 1 current 1.
control 0x009a2003 `Group Hold' min 0 max 1 step 1 default 0 current 0.
  0: 0
  1: 1
4 controls found.
Video format: UYVY (59565955) 640x480 (stride 1280) buffer size 614400

Let me know if this addresses your problem.

Could you please confirm these for me?

  1. You are talking about the Kconfig file, but I enabled and disabled macros in tegra21_defconfig. Are they both the same? If not, could you please tell me which Kconfig you are talking about?

  2. I understand that if I want v4l2src, I can’t use nvcamerasrc. Is my understanding correct?

Also, when I try to test the “OSS plugin” approach, I face these errors:

ubuntu@tegra-ubuntu:~$ gst-launch-1.0 v4l2src num-buffers=200 device=/dev/video1! 'video/x-raw, format=YUYV, width=640, height=480, framerate=30/1' ! videoconvert ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev

(gst-launch-1.0:3540): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
WARNING: erroneous pipeline: no element "video"

/dev/video1 is a Logitech webcam and its pixel format is YUYV.

ubuntu@tegra-ubuntu:~$ gst-launch v4l2src device=/dev/video1 ! xvimagesink
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Failed to query attributes of input 0 in device /dev/video1
Additional debug info:
v4l2_calls.c(142): gst_v4l2_fill_lists (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Failed to get 0 in input enumeration for /dev/video1. (25 - Inappropriate ioctl for device)
Setting pipeline to NULL ...
Freeing pipeline ...

I’m facing the same errors with the default OV5693 on-board camera as well.
Could you please help with these errors?

[Reply Q1] You can enable the config either from tegra21_defconfig or from the Kconfig that resides in drivers/media/platform/soc_camera/tegra_camera/.
[Reply Q2] Correct
[Reply error 1] You are missing a space after ‘device=/dev/video1’ and before the separator ‘!’.
[Reply error 2] You should use gst-launch-1.0 instead of gst-launch.

Hi Conan,

Your post helps me understand the TX1 ISP a bit better, but I have a few doubts. Could you please clarify?

  1. We have a sensor driver which utilizes the soc_camera/v4l2 path and it works absolutely fine. Now I wish to interface the same sensor to the TX1 ISP and test its capabilities using the nvcamerasrc plugin provided by Nvidia. Does the nvcamera-daemon support other sensors as well? If it does, can I get more info on what formats are supported etc.?

I guess I would have to write a new sensor driver which uses the custom ioctls as used by the daemon currently. Or would the v4l2 based sensor driver work?

I’m using the default binaries that shipped with my TX1 (L4T_R23.1). I see that the config used is contradictory to your statement but nvcamerasrc works fine with OV5693. Could you explain why?

ubuntu@tegra-ubuntu:~$ zcat /proc/config.gz | grep -e VI_BYPASS -e VI2 -e OV5693
CONFIG_VIDEO_OV5693=y
CONFIG_VIDEO_TEGRA_VI2=y
# CONFIG_VIDEO_TEGRA_VI_BYPASS is not set
# CONFIG_SOC_CAMERA_OV5693 is not set
ubuntu@tegra-ubuntu:~$ cat /etc/nv_tegra_release | grep REVISION
# R23 (release), REVISION: 1.1, GCID: 6220086, BOARD: t210ref, EABI: hard, DATE: Fri Oct 30 21:33:12 UTC 2015
ubuntu@tegra-ubuntu:~$ uname -a
Linux tegra-ubuntu 3.10.67-g3a5c467 #1 SMP PREEMPT Fri Oct 30 14:26:17 PDT 2015 aarch64 aarch64 aarch64 GNU/Linux
  1. Is it possible to obtain source code or documentation related to any of these components?
    → nvcamerasrc
    → nvcamera-daemon

Any info would be helpful for further development and understanding of the ISP and how it works. Thank you.

Hi dilipkumar25,

Those are great questions and I believe they will be helpful for others to understand this topic, so thanks.
Before I respond to your questions, I’d like to introduce the processing flow.
From top to bottom,
APP → nvcamerasrc → nvcamera-daemon <–> camera core stack → physical camera layer → soc_camera based V4l2 driver
Some brief introductions to those components,

  • nvcamerasrc: a camera GStreamer plugin implemented by NVIDIA. It supports many options to control NVIDIA ISP properties (as shown by `gst-inspect-1.0 nvcamerasrc`);
  • nvcamera-daemon: a daemon process used to communicate with the Camera Core Stack in order to support multiple processes, similar to the role of media-server in Android;
  • Camera Core Stack: consists of many NVIDIA camera feature/IP implementations, including NVIDIA ISP post-processing.
Answers to your questions:

  1. nvcamera-daemon is a sensor-agnostic process, so it definitely supports other sensors.
  2. You can refer to the following datasheet (chapter “Video Input”) for the input data formats the TX1 supports: http://developer.nvidia.com/embedded/dlc/tegra-x1-data-sheet-for-jetson-tx1
  3. You will find the soc_camera based v4l2 drivers (with the suffix 'xxxx_v4l2.c') in $TOP/kernel/drivers/media/i2c/soc_camera/. In those reference drivers we define some custom ctrls; they are otherwise similar to standard v4l2 drivers, so the porting should not take much effort.
  4. All of the above discussion is based on R23.2, so please use this release for further development.
  5. So far there is no plan to open-source nvcamerasrc and nvcamera-daemon.

Let me know if this addresses your questions.

Hi Conan,

Thank you for the response. Your answers have given rise to more questions. Could you explain them as well?

Is it alright to follow this driver architecture? I heard that the soc_camera driver is obsolete from L4T_R24.1 in favor of the media controller framework and will be removed in future releases. I know the sensor driver requires no changes as it already uses v4l2-subdev, but what about vi_bypass.c etc.?

I checked the ov5693_v4l2.c driver. Are all these custom controls necessary for the nvcamerasrc plugin to work? AFAIK the driver supports streaming through the standard v4l2 ioctls itself, right?

Hence I take it that it is possible to interface all the formats supported by the soc_camera/v4l2 driver with the ISP using the gstreamer plugin. I’m mainly thinking YUV422. Is this correct?

Is 23.2 recommended or 24.1? I’m confused because of the driver architecture change.

Q1. Yes, the soc_camera based v4l2 driver will be deprecated in a future release. But I am afraid the MCF (media controller framework) based v4l2 driver is not yet available for R24.1.
No matter which framework you are using, you don’t need to pay close attention to the vi2 or vi_bypass driver, since it is maintained by NVIDIA all the time.

Q2. If you choose nvcamerasrc, I’d recommend you follow our reference driver and keep all the custom controls to ensure our camera core stack works well.
If you enable vi2 mode (as you know, it’s the default mode for the R23.2 release), you can use standard v4l2 ioctls to stream (yavta style).
If you enable vi_bypass mode, you are no longer able to use standard v4l2 ioctls to stream, since a user-mode driver is needed to manipulate the VI, CSI, ISP, etc. (APP → nvcamerasrc → nvcamera-daemon <–> camera core stack → physical camera layer → soc_camera based V4l2 driver).
Anyway, I am making a diagram to demonstrate how our camera stack works; once it’s done, I will post it on the JEP forum.
I hope everyone can understand it well after that. :)

Q3. For input data formats, yes, all the formats supported by the v4l2 driver can be accepted by the VI/ISP.
For output data formats, you can get the capabilities of the gstreamer plugin via ‘gst-inspect-1.0 nvcamerasrc’.

Q4. Sorry for the reorganization of our camera architecture, but I believe it will keep getting better. As I mentioned above, R24.1 probably keeps this framework (soc_camera based v4l2) along with R23.2.

Alright, thanks for the info, Conan. Let me try out the 23.2 release and see what I can do.

Conan,

How is the block diagram coming along? Looking forward to understanding it more. I have been bogged down with other work as well, so I’m a bit delayed in testing the camera driver with the ISP.

Currently I have developed a parallel-to-MIPI driver using the soc_camera/v4l2 approach. My sensor output is 1080p 60 FPS UYVY, and everything works perfectly.

Is nvcamerasrc more efficient than v4l2src in terms of latency, or are there any other benefits?

Ritesh,
I think with nvcamerasrc you can take advantage of our ISP capability, which seems unnecessary for your case.
The other benefit of nvcamerasrc is better compatibility with the other gst components NVIDIA provides, especially regarding the memory layout.

Hi Conan,

Are the prerequisites mentioned in your initial post applicable to 24.2.1 as well?

    Enable CONFIG_VIDEO_TEGRA_VI_BYPASS and disable CONFIG_VIDEO_TEGRA_VI2 from Kconfig

Regards,
Rejeesh

Rejeesh_QueST,

These two flags are for R23.2 and are not applicable to R24.2.1. You can check the kernel/arch/arm64/configs/tegra21_defconfig file. The old SOC_CAMERA framework was moved to the media controller sensor driver architecture from R24.

Thanks, Chijen, for the response.

Are there updates to some of these basic approaches for R28.1? I’m testing the default OV5693 driver and I keep encountering “Internal data flow error” with GStreamer. It appears that CONFIG_VIDEO_TEGRA_VI_BYPASS and CONFIG_VIDEO_TEGRA_VI2 are no longer used in R28.1.

Thanks.

It is unclear to me what you want to do, but if you just want to test the onboard camera, you may just try:

```
gst-launch-1.0 nvcamerasrc ! nvvidconv ! xvimagesink
```

Hi Nfeng,

Is it feasible to use nvcamerasrc with the Jetson TK1 and L4T 21.5?
I want to use the ISP for Bayer-to-RGB conversion.
Can you provide sample sensor driver code that supports nvcamerasrc?

I took the /KERNEL/driver/media/platform/tegra/ov5693.c code as a reference to develop an os05a20 camera driver.

In my defconfig only “CONFIG_VIDEO_TEGRA_VI2=y” is set, and the “CONFIG_VIDEO_TEGRA_VI_BYPASS” option is not available.

When I execute the following pipeline, I face an issue:

```
gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=2688, height=1944, framerate=30/1, format=NV12' ! nvoverlaysink -ev
```

The error:

```
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingNvCamHwHalOpen: Failure to open Camera Kernel Node
NvOdmImagerDeviceDetect: Failed to open HW HAL.
NvCamHwHalOpen: Failure to open Camera Kernel Node
NvOdmImagerGetModuleList: Camera node not detected - No such file or directory
IMX135 **** Can not open camera device: No such file or directory
NvOdmImagerOpenExpanded 462: Sensor ERR
NvOdmImagerOpenExpanded FAILED!
camera_open failed
ERROR: Pipeline doesn't want to pause.
Setting pipeline to NULL ...
Freeing pipeline ...
```

Please help me with the following questions:

1. Is it necessary to enable “CONFIG_VIDEO_TEGRA_VI_BYPASS”? If yes, why is it not displayed in the defconfig? Is anything missing?
2. Can you please provide camera reference code that supports nvcamerasrc and the ISP?

The TK1 does not support nvcamerasrc.