Working v4l2 sample for Raspi Cam in C/C++?

Hi,

I’ve got existing code for the RPi based around v4l2 that I’m trying to port to the Nano.

I’ve connected a Rpi V2 NoIR camera and confirmed that it’s working with gstreamer.

Unfortunately, I’m not having any luck with the supplied camera_v4l2_cuda example. I’ve compiled it successfully, but any attempt to run it using the RPi cam, e.g.:

./camera_v4l2_cuda -d /dev/video0 -s 1280x720 -f MJPEG

results in an error like: “The desired format is not supported”. That command runs successfully on /dev/video1, which is a USB webcam.

I’d very much appreciate it if someone could point me to a working v4l2-based sample.

Thanks!

Dan

… small update. After poking around a bit more, I was able to coax the driver into revealing an acceptable format via

v4l2_ioctl(fd, VIDIOC_G_FMT...

and it confirms the output from

v4l2-ctl --list-formats-ext

namely that RG10 is a valid pixel format, and it seems to be the only one. I’m still puzzled, though, since it’s a RAW/Bayer format, which is probably not terribly useful for most applications.

Is this just as far as the v4l2 driver has gotten for the Rpi camera?

This sample is for USB cameras, not for Bayer sensors.
For CUDA sample code you can refer to /tegra_multimedia_api/argus/samples/cudaHistogram

I’m afraid I’m confused by your answer.

I’m looking for an example that uses v4l2 to capture frames from the Rpi camera. The example you mention doesn’t seem to use v4l2.

Can you please clarify?

Thx

First you need to know whether the sensor outputs YUV or is a Bayer sensor.
If it’s a Bayer sensor you can use v4l2, but it can only get the raw data, and you also need to modify the sample code, otherwise it won’t work.

I’m asking specifically about the V2 Raspberry Pi Camera. The sensor itself is obviously Bayer, like all color sensors, but I don’t know which outputs are generated natively by the HW and which ones in software.

The key thing is this: on the Raspberry Pi I can read a variety of output formats from the Camera via v4l2 (including YUV). I have not been able to do the same on the Nano.

So I’m still wondering: is it possible to get YUV (or any non-RAW format) output from the Raspberry Pi Camera via v4l2 on the Nano?

Thanks.

There’s no way to get YUV from v4l2-ctl, because that pipeline doesn’t involve the ISP.
For the ISP pipeline you need to run nvarguscamerasrc or an Argus app. Below is some documentation for the camera framework.

https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide%2Fcamera_dev.html%23
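A minimal sketch of an ISP-path capture with nvarguscamerasrc, assuming sensor-id 0 and a stock L4T install with the gstreamer plugins present (the output filename is illustrative):

```shell
# Capture one frame through the Tegra ISP and save it as a JPEG.
# nvvidconv copies the frame out of NVMM memory and converts it to I420.
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! \
    'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1' ! \
    nvvidconv ! 'video/x-raw, format=I420' ! \
    jpegenc ! filesink location=frame.jpg
```

This is the rough equivalent of what an Argus app does programmatically: the ISP debayers the RG10 sensor output and hands you YUV downstream.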

Ok, thank you for the clear answer.