This post introduces some basic approaches for testing camera functionality on the Jetson TX1 with L4T R23.2.
Note: For details on bringing up a new sensor on the Jetson TX1, refer to the “Video for Linux User Guide” chapter in the release documentation, which is beyond the scope of this post: http://developer.nvidia.com/embedded/dlc/l4t-documentation-23-2
1. Using the gstreamer “nvcamerasrc” plugin
This plugin is implemented by NVIDIA. It exposes many options for controlling NVIDIA ISP properties (run `gst-inspect-1.0 nvcamerasrc` to list them). With this path, we can enable NVIDIA ISP post-processing for Bayer sensors, perform format conversion, or output directly for YUV sensors and USB cameras.
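To see exactly which properties and caps your L4T release supports before building a pipeline, inspect the plugin on the target. Note that `nvcamerasrc` ships with L4T on the Jetson itself, not with desktop GStreamer, so this must be run on the board:

```shell
# List all properties (ISP controls, sensor-id, fpsRange, etc.) and
# supported caps of the NVIDIA camera source plugin.
$ gst-inspect-1.0 nvcamerasrc
```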
Note that enabling this path has the following prerequisites:
- Enable CONFIG_VIDEO_TEGRA_VI_BYPASS and disable CONFIG_VIDEO_TEGRA_VI2 from Kconfig
- Expose camera resolution and csi pad information to the user library
Refer to the dts files below:
arch/arm64/boot/dts/tegra210-platforms/tegra210-jetson-cv-camera-e3323-a00.dtsi
arch/arm64/boot/dts/tegra210-platforms/tegra210-camera-e3323-a00.dtsi
Example: Bayer sensor (1920x1080/30/BGGR)
- Save preview into a file
```
$ gst-launch-1.0 nvcamerasrc num-buffers=200 sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```
- Render preview to an HDMI screen
```
$ gst-launch-1.0 nvcamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! nvhdmioverlaysink -ev
```
2. Using the open-source “v4l2src” plugin
This path is typically used with YUV sensors or USB cameras that output YUV images without NVIDIA ISP post-processing; it therefore does not involve the NVIDIA camera software stack at all.
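Before constructing a v4l2src pipeline, it helps to confirm what the driver actually advertises, so the caps in the pipeline match a mode the device supports. Assuming the v4l-utils package is installed on the target, `v4l2-ctl` can query this:

```shell
# List the pixel formats, frame sizes, and frame intervals the V4L2
# driver reports for the first video node (install v4l-utils if missing).
$ v4l2-ctl --device=/dev/video0 --list-formats-ext
```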
Example: USB camera (480P@30/YUY2)
- Save preview into a file (with software-based format conversion)
```
$ gst-launch-1.0 v4l2src num-buffers=200 device=/dev/video0 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! videoconvert ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```
- Render preview to the screen
```
# If operating from a remote console, first run: export DISPLAY=:0
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=YUY2, width=640, height=480, framerate=30/1' ! xvimagesink -ev
```
Example: YUV sensor (480P/30/UYVY)
- Save preview into a file (with hardware-based format conversion)
```
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4 -ev
```
- Render preview to the screen
```
# If operating from a remote console, first run: export DISPLAY=:0
$ gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! xvimagesink -ev
```
3. App invoking V4L2 ioctl directly
This path is lower-level than the v4l2src approach: the application issues V4L2 ioctls itself, with no GStreamer in between, which makes it useful for verifying basic driver functionality during sensor bring-up.
Example: YUV sensor (480P/30/UYVY)
- Capture a single frame with yavta
```
$ ./yavta /dev/video0 -c1 -n1 -s640x480 -fUYVY -Fcam.raw
```
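In the yavta invocation above, -c1 captures one frame, -n1 allocates a single buffer, -s sets the frame size, -f the pixel format, and -F writes the captured data to a file. If yavta is not built on the target, `v4l2-ctl` from v4l-utils can exercise the same ioctl path as a rough equivalent (paths and the video node are assumptions for your setup):

```shell
# Set 640x480 UYVY on the sensor node, then stream one frame through
# memory-mapped buffers (VIDIOC_S_FMT / REQBUFS / STREAMON / DQBUF) to a file.
$ v4l2-ctl --device=/dev/video0 \
      --set-fmt-video=width=640,height=480,pixelformat=UYVY \
      --stream-mmap --stream-count=1 --stream-to=cam.raw
```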