How to utilize the on-board camera?

It was the luvcview utility which I used on the Jetson TK1.
Here, an attempt to install it with apt returns:

sudo apt-get install luvcview
Reading package lists... Done
Building dependency tree       
Reading state information... Done
E: Unable to locate package luvcview

On an attempt to build it from source with

git clone https://github.com/ksv1986/luvcview

it doesn't appear to have a ./configure option, and it advises installing libsdl, which doesn't seem to be available via apt.
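For what it's worth, the SDL 1.2 development headers are normally packaged on Ubuntu as libsdl1.2-dev; assuming the L4T apt repositories carry that package, installing it may be simpler than building SDL from source:

sudo apt-get install libsdl1.2-dev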
On an attempt to install libsdl from git instead:

git clone https://github.com/dhewg/libsdl
./configure
checking build system type... build-scripts/config.guess: unable to guess system type

This script, last modified 2009-09-18, has failed to recognize
the operating system you are using. It is advised that you
download the most up to date version of the config scripts from

  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD
and
  http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD

If the version you run (build-scripts/config.guess) is already up to date, please
send the following data and any information you think might be
pertinent to <config-patches@gnu.org> in order to provide the needed
information to handle your system.

I then copy-pasted the text from http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD into the config.guess file (after moving the original aside with mv config.guess config.guess.old and recreating the file with nano), and it returns:

./configure
checking build system type... config.guess: missing argument
Try `config.guess --help' for more information.
configure: error: cannot guess build type; you must specify one
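For reference, the "missing argument" error is expected there: config.sub and config.guess are two different scripts, so pasting config.sub's contents into config.guess will not work. The usual fix is to fetch fresh copies of both scripts from the URLs given in the error message (a sketch, assuming wget and network access on the board):

wget -O build-scripts/config.guess 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD'
wget -O build-scripts/config.sub 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD'
chmod +x build-scripts/config.guess build-scripts/config.sub
./configure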

Cheese doesn't display anything either, but prints:

(cheese:26206): cheese-WARNING **: Can't record audio fast enough: gstaudiobasesrc.c(869): gst_audio_base_src_create (): /GstCameraBin:camerabin/GstAutoAudioSrc:audiosrc/GstPulseSrc:audiosrc-actual-src-puls:
Dropped 74088 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.


(cheese:26206): cheese-WARNING **: Can't record audio fast enough: gstaudiobasesrc.c(869): gst_audio_base_src_create (): /GstCameraBin:camerabin/GstAutoAudioSrc:audiosrc/GstPulseSrc:audiosrc-actual-src-puls:
Dropped 74088 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.


(cheese:26206): cheese-WARNING **: Can't record audio fast enough: gstaudiobasesrc.c(869): gst_audio_base_src_create (): /GstCameraBin:camerabin/GstAutoAudioSrc:audiosrc/GstPulseSrc:audiosrc-actual-src-puls:
Dropped 74529 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.

Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingNvMMLiteOpen : Block : BlockType = 278 
TVMR: NvMMLiteTVMRDecBlockOpen: 7818: NvMMLiteBlockOpen 
NvMMLiteBlockCreate : Block : BlockType = 278 
TVMR: cbBeginSequence: 1190: BeginSequence  2592x1952, bVPR = 0
TVMR: LowCorner Frequency = 345000 
TVMR: cbBeginSequence: 1583: DecodeBuffers = 4, pnvsi->eCodec = 9, codec = 8 
TVMR: cbBeginSequence: 1654: Display Resolution : (2592x1944) 
TVMR: cbBeginSequence: 1655: Display Aspect Ratio : (2592x1944) 
TVMR: cbBeginSequence: 1697: ColorFormat : 5 
TVMR: cbBeginSequence:1711 ColorSpace = NvColorSpace_YCbCr601
TVMR: cbBeginSequence: 1839: SurfaceLayout = 3
TVMR: cbBeginSequence: 1936: NumOfSurfaces = 8, InteraceStream = 0, InterlaceEnabled = 0, bSecure = 0, MVC = 0 Semiplanar = 1, bReinit = 1, BitDepthForSurface = 8 LumaBitDepth = 8, ChromaBitDepth = 8, ChromaFormat = 5
TVMR: cbBeginSequence: 1938: BeginSequence  ColorPrimaries = 2, TransferCharacteristics = 2, MatrixCoefficients = 2
Allocating new output: 2592x1952 (x 8), ThumbnailMode = 0
TVMR: TVMRFrameStatusReporting: 6266: Closing TVMR Frame Status Thread -------------
TVMR: TVMRVPRFloorSizeSettingThread: 6084: Closing TVMRVPRFloorSizeSettingThread -------------
TVMR: TVMRFrameDelivery: 6116: Closing TVMR Frame Delivery Thread -------------
TVMR: NvMMLiteTVMRDecBlockClose: 8018: Done

It seems that I have found a working pipeline:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1, format=NV12' ! nvvidconv flip-method=2 ! nvegltransform ! nveglglessink -e

What is wrong with this sample?

Usage: camera_v4l2_cuda [OPTIONS]

	Example: 
	./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f YUYV -n 30 -c

it returns:

./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f YUYV -n 30 -c
ERROR: camera_initialize(): (line:233) Failed to set camera output format: Invalid argument (22)
ERROR: init_components(): (line:331) Failed to initialize camera device
ERROR: main(): (line:714) Failed to initialize v4l2 components
App run failed
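As a side note (my own suggestion, not from the sample's documentation): assuming the v4l-utils package is installed, you can list which pixel formats the camera's V4L2 node actually advertises, which is a quick way to see whether YUYV is offered at all:

v4l2-ctl -d /dev/video0 --list-formats-ext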

Hi Andrey,
This sample code is for the YUV-sensor use case. The onboard camera does not work with it because it is a Bayer sensor.

Thank you for letting me know that!

How can I use the camera with web Skype?

Hi Andrey,
I believe that won't be supported, because it's not a UVC camera.

The

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1, format=NV12' ! nvvidconv flip-method=2 ! nvegltransform ! nveglglessink -e

pipeline does play a camera view.
How do I record that output to a file?
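For what it's worth, here is a sketch of a recording variant of that pipeline, untested here and with the caveat that encoder element names can differ between L4T releases: swap the display sink for the hardware H.264 encoder plus a muxer and a filesink, for example:

gst-launch-1.0 nvcamerasrc num-buffers=300 ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1, format=NV12' ! omxh264enc ! 'video/x-h264, stream-format=byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 -e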

What the heck is a “Bayer” camera?

A "Bayer" camera seems to be a camera which utilizes the Bayer filter: Bayer filter - Wikipedia

UVC: USB video device class - Wikipedia

History lesson time!

Because of the way NTSC color TV was introduced as a hack on top of black-and-white, color information in video (“chroma” in vernacular – but that term also has a different technical definition) is recorded at half the bandwidth (resolution) of the brightness (“luma” in vernacular) information.

Also, because of the way the NTSC color TV works, video was split into a higher-bandwidth brightness signal, and a two-dimensional lower-bandwidth “color delta” signal. This is often referred to as YUV, YU’V’, YCbCr, and other similar names. (Again, there are vernacular uses of these names, as well as well-defined technical terms that are overlapping – too much for this post.) These are different from the typical RGB formats used by computer displays.

This separation is actually alright, because it turns out that our eyes are much more sensitive to green light than to red / blue light. Ask two evolutionary biologists why in the same room, and you may get a number of interesting takes on this :-) (Or they might both agree, which would be boring.)

The "half bandwidth" version is typically found as YUV422, where "4" really means "4 MHz bandwidth" and "22" really means "2 MHz bandwidth for each color sub-signal." The MHz figures come from the bandwidth allocation in the NTSC signals, and modern high-resolution displays use much more than that, but the ratios still persist.
Turns out, NTSC could only fit 6 MHz in a broadcast signal, so the color got further sub-sampled to the 411 format – 1 MHz each for U/V (or Cb/Cr in computer color space). Turns out, subsampling four horizontal pixels to a single color pixel doesn't look that great, so computer people came up with a similar allocation that instead subsamples a 2x2 pixel area and still uses the same bandwidth; this is colloquially known as the "420" format. (See also: JPEG, and many other computer image representations.)
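To put rough numbers on those ratios for the frame size used elsewhere in this thread: a 640x480 frame at 8 bits per sample is about 921,600 bytes with full-resolution chroma (4:4:4, or RGB at 3 bytes/pixel), 614,400 bytes as 4:2:2 / YUYV (2 bytes/pixel), and 460,800 bytes as 4:2:0 / NV12 (1.5 bytes/pixel).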

OK, so to generate these signals, a typical sensor (the chip that the camera uses to see light) will arrange its three-colored pixels in an order like:

[image: striped RGB pixel layout]

But, it turns out, manufacturing that very finely spaced sensor becomes harder and harder as chip sizes go down and megapixels go up. (I’m not a big fan of tons of megapixels; I’d rather have bigger pixels with less noise and higher dynamic range, but that doesn’t sound as sexy in marketing materials, so I lose on that.)
So, enter the Bayer pattern of sampling only two colors per row of pixels:

[image: Bayer pattern pixel layout]

The camera sensor then looks at the intersection of each of the pixel boundaries, and uses the two green pixels, one red pixel, and one blue pixel that each intersection borders, to calculate the final color of the output pixel. This is approximately the same color information as a typical 422 signal, but it trades the

So, the problem is, the “raw” output of a Bayer sensor is not compatible with expected YUV422 formats that some programs may be hard-coded to expect. An external camera (such as a USB camera) that is expected to be compatible with a wide variety of pre-existing software will contain additional processing that turns the Bayer sensor data into whatever format the host application wants (typically YCbCr or RGB.) Embedded cameras (such as cell phones, and here the Jetson camera) do not add that processing on the camera board, but instead do this in the host.

Because the processing on the host is done in custom hardware (maybe the image processing units on the Jetson, if I remember correctly?), it turns out to be harder to write a video4linux driver that gives what appears to be "raw" access to the camera, yet supports using the offload hardware to de-Bayer the image. Hence why I think the current NVIDIA driver doesn't expose YUV/YCbCr/RGB in the "raw" v4l2 driver.

There are two solutions to this problem:

  • Do fancy footwork in the driver to provide the output of the image processing unit as a "raw" v4l2 video stream with flexible formats. This probably has all kinds of interesting internal resource-allocation problems for the driver developer, and NVIDIA has probably decided to put its scarce engineering time into other features that are also important.
  • Use a software Bayer -> YUV converter and burn CPU cycles to make the software compatible. This helps users who "must make it work," but it is a terrible way to build an embedded system for a systems integrator. Given that the target for Jetson is embedded systems where the vendor controls the installable software, burning CPU cycles (and thus power) on this is probably not a priority for NVIDIA.

Now enter video4linux2, the video API for Linux that’s been around for a long time (v4l2.) It has two parts: a driver API which lets you query/define formats, and set up streams of video data buffers to capture, and a helper library that takes care of some of the arcane low-level gruntwork of talking to the drivers. (Personally, I find the driver API to be fine “raw” but many applications also use the user-level library.)

There exists a generic software format converter for v4l2. It works for applications that talk to v4l2 drivers through the v4l2 API, but depending on how the application uses the device, it may or may not be compatible.
You can enable it for a particular run of a particular program by starting the program like this:

LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libv4l/v4l2convert.so /usr/local/bin/skype

(or whatever/wherever your program is)
This will tell the system to preload the v4l2convert.so module into the process; the module hijacks the v4l connections it can see and attempts to match up the format expectations through software conversion.

Sometimes this works, and sometimes not.

End history lesson.

A friend of mine suggested researching the "sudo apt-get install v4l2loopback" option together with ffmpeg to achieve webcam streaming.

A colleague shared this method:

aptitude install v4l2loopback-source module-assistant

module-assistant auto-install v4l2loopback-source

but in my case it fails on execution of the second line, though the /dev/video0 device does seem to be created
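As a quick check (my own addition), whether the module actually built and loaded, and which video nodes exist, can be verified with standard tools:

lsmod | grep v4l2loopback
ls -l /dev/video*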

Resolved; web Skype/Hangouts can be used via a web browser:

  1. load the v4l2loopback module
  2. pass the nvcamerasrc stream to the device it creates (see the sketch after the references below)
  3. open Chromium, and Skype/Hangouts see the onboard camera

    References:
    Error building v4l2loopback - Jetson TX1 - NVIDIA Developer Forums
    https://devtalk.nvidia.com/default/topic/994012/jetson-tx1/passing-gst-launch-from-nvcamerasrc-to-v4l2sink/post/5090232/#5090232
    How to create loopback device for nvcamerasrc for TX1? - Jetson TX1 - NVIDIA Developer Forums
    https://devtalk.nvidia.com/default/topic/1019504/jetson-tx2/how-to-create-loopback-device-for-nvcamerasrc-for-tx2-/post/5208126/#5208126
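For concreteness, a minimal sketch of steps 1 and 2, assuming the v4l2loopback module loads correctly and that the loopback node comes up as /dev/video1 (the device number may differ, and some setups need extra elements such as identity drop-allocation=true before v4l2sink):

sudo modprobe v4l2loopback
gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=640, height=480, framerate=30/1, format=NV12' ! nvvidconv ! 'video/x-raw, format=I420' ! videoconvert ! 'video/x-raw, format=YUY2' ! v4l2sink device=/dev/video1

Then open Chromium and pick the new device as the camera in Skype/Hangouts.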