Some questions about the multimedia

Hi,

In the process of studying the multimedia stack, I still have a few questions:

  1. We use a sensor which has a built-in ISP, so for us the raw data is YUV. After checking the TRM, is the streaming data flow for us Sensor → NVCSI → VI → Memory?

  2. I have checked the multimedia architecture, but I am not clear about the relation between the device driver and CSI/VI. What is the call relationship between them? Is there an example?

Thanks.

hello arknights,

if you use a sensor which has a built-in ISP, it’ll output YUV formats.
hence, you’re correct that the streaming pipeline will be Sensor → NVCSI → VI → Memory.

please refer to Camera Architecture Stack, there are two different approaches to access camera drivers, v4l2src and nvarguscamerasrc.
the difference is that nvarguscamerasrc is only available for [Camera Core], which also means the Tegra ISP.
since your sensor already outputs YUV formats, you’ll need to go through v4l2src.

you may also check [L4T Multimedia API Reference] for samples;
12_camera_v4l2_cuda demonstrates how to capture images from a V4L2 YUV camera.
thanks

Hi JerryChang,

What’s the difference between VI3 and VI4?
I checked the “NVIDIA Tegra X1 Mobile Processor TRM”, and it says VI3, but I find sources about VI4 in the kernel folder.
Does Nano support VI4?

Thanks.

hello arknights,

it’s a hardware unit from the TRM; actually, from the driver side it uses vi2_fops.c for both TX1 and Nano platforms.
in addition, TX2 uses VI4 and Xavier works with VI5.
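as a rough illustration of how one driver tree can serve several VI generations, the usual Linux pattern is to pick a per-chip ops table from the device tree compatible string at probe time. the sketch below uses placeholder struct, function and compatible names, it is not the actual NVIDIA VI driver source.

/*
 * illustrative sketch only: vi_soc_fops, vi_probe and the compatible
 * strings are placeholders, not the exact symbols in the NVIDIA VI
 * driver sources.
 */
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

struct vi_soc_fops {
        int (*power_on)(struct platform_device *pdev);
        int (*start_streaming)(struct platform_device *pdev);
};

static const struct vi_soc_fops vi2_soc_fops;   /* TX1 / Nano hooks */
static const struct vi_soc_fops vi4_soc_fops;   /* TX2 hooks */
static const struct vi_soc_fops vi5_soc_fops;   /* Xavier hooks */

static const struct of_device_id vi_of_match[] = {
        { .compatible = "nvidia,tegra210-vi", .data = &vi2_soc_fops },
        { .compatible = "nvidia,tegra186-vi", .data = &vi4_soc_fops },
        { .compatible = "nvidia,tegra194-vi", .data = &vi5_soc_fops },
        { },
};
MODULE_DEVICE_TABLE(of, vi_of_match);

static int vi_probe(struct platform_device *pdev)
{
        /* the compatible string in the device tree decides which
         * per-generation fops table the common VI code will call */
        const struct vi_soc_fops *fops = of_device_get_match_data(&pdev->dev);

        if (!fops)
                return -ENODEV;
        platform_set_drvdata(pdev, (void *)fops);
        return 0;
}

static struct platform_driver vi_sketch_driver = {
        .driver = {
                .name = "tegra-vi-sketch",
                .of_match_table = vi_of_match,
        },
        .probe = vi_probe,
};
module_platform_driver(vi_sketch_driver);
MODULE_LICENSE("GPL v2");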
thanks

Hi JerryChang,

  1. When the streaming data leaves VI for memory, how does the V4L2 Framework get it and use it with the sensor driver?
    I want to make the streaming data flow clear.

  2. If we use a sensor which has a built-in ISP, are the steps for using this sensor the same as in the “Sensor Software Driver Programming Guide” in the L4T docs? (add an appropriate device tree node in the kernel and develop a V4L2 sensor driver)

Thanks.

hello arknights,

it’s the VI engine that allocates video buffers for processing. you could access the camera sensor with gstreamer pipelines.
you might refer to Approaches for Validating and Testing the V4L2 Driver.
for example,

$ gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! omxh264enc ! qtmux ! filesink location=test.mp4  -ev

for built-in ISP sensors, you still need to implement a sensor driver for the power-on/off sequence and operation controls.
you should also implement the sensor device tree for sensor capabilities, such as pixel clock, resolutions, etc.
we don’t have reference drivers for YUV sensors; please contact Jetson Preferred Partners for camera solutions.
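for reference, here’s a bare-bones sketch of the shape such a sensor driver takes; the yuvcam_* names are placeholders (this is not a shipped reference driver), and a real driver must also add the media pads, format/control ops, and the actual regulator/GPIO power sequencing for your module.

/*
 * bare-bones sketch of a YUV (built-in ISP) sensor V4L2 subdev driver.
 * all yuvcam_* names are placeholders; a real driver must fill in the
 * register writes, power sequencing, pad/format and control ops.
 */
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/slab.h>
#include <media/v4l2-async.h>
#include <media/v4l2-subdev.h>

struct yuvcam {
        struct v4l2_subdev subdev;
};

static int yuvcam_s_power(struct v4l2_subdev *sd, int on)
{
        /* toggle regulators, clocks and the reset GPIO for the module here */
        return 0;
}

static int yuvcam_s_stream(struct v4l2_subdev *sd, int enable)
{
        /* program the sensor registers that start/stop CSI output here */
        return 0;
}

static const struct v4l2_subdev_core_ops yuvcam_core_ops = {
        .s_power = yuvcam_s_power,
};

static const struct v4l2_subdev_video_ops yuvcam_video_ops = {
        .s_stream = yuvcam_s_stream,
};

static const struct v4l2_subdev_ops yuvcam_subdev_ops = {
        .core  = &yuvcam_core_ops,
        .video = &yuvcam_video_ops,
};

static int yuvcam_probe(struct i2c_client *client,
                        const struct i2c_device_id *id)
{
        struct yuvcam *priv;

        priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;

        /* register the subdev so the Tegra VI/CSI host driver can bind
         * to it through the device tree port/endpoint graph */
        v4l2_i2c_subdev_init(&priv->subdev, client, &yuvcam_subdev_ops);
        return v4l2_async_register_subdev(&priv->subdev);
}

static int yuvcam_remove(struct i2c_client *client)
{
        struct v4l2_subdev *sd = i2c_get_clientdata(client);

        v4l2_async_unregister_subdev(sd);
        return 0;
}

static const struct i2c_device_id yuvcam_id[] = {
        { "yuvcam", 0 },
        { },
};
MODULE_DEVICE_TABLE(i2c, yuvcam_id);

static struct i2c_driver yuvcam_driver = {
        .driver   = { .name = "yuvcam" },
        .probe    = yuvcam_probe,
        .remove   = yuvcam_remove,
        .id_table = yuvcam_id,
};
module_i2c_driver(yuvcam_driver);
MODULE_LICENSE("GPL v2");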
thanks

Hi JerryChang,

What’s the main function of Host1x in the following “Direct V4L2 Interface” approach?

I see there are double-sided arrows linking Host1x to CSI/VI and Host1x to the V4L2 Framework.

In addition, I want to confirm whether the code for the V4L2 Framework is in /kernel/nvidia/drivers/media/platform/tegra/camera/?

Thanks.

hello arknights,

  1. it’s a host controller that handles synchronization between software buffers and hardware signals.
    you may also access the Tegra X2 Technical Reference Manual, and you should check [Chapter-19: Host Controller] for more details.
    if you’re looking for how the host1x driver syncs with the camera hardware, please check the kernel sources below for details.
    for example,
$l4t-r32.2/public_sources/kernel_src/kernel/nvidia/drivers/video/tegra/host/nvhost_syncpt.c

/* bumps the expected maximum value of syncpoint 'id' by 'incrs'
 * increments and returns the new threshold the hardware must reach */
u32 nvhost_syncpt_incr_max_ext(struct platform_device *dev, u32 id, u32 incrs)
{
        struct nvhost_master *master = nvhost_get_host(dev);
        struct nvhost_syncpt *sp =
                nvhost_get_syncpt_owner_struct(id, &master->syncpt);
        return nvhost_syncpt_incr_max(sp, id, incrs);
}
  2. I want to confirm whether the code for the V4L2 Framework is in /kernel/nvidia/drivers/media/platform/tegra/camera/?

you might check the sources below for the generic v4l2 framework,
$l4t-r32.2/public_sources/kernel_src/kernel/kernel-4.9/drivers/media/v4l2-core/*

however,
we also have an implementation of a camera_common driver, which creates a set of kernel functions that are used by the camera drivers in the NVIDIA kernel and also by V4L2 drivers.
for example,
$l4t-r32.2/public_sources/kernel_src/kernel/nvidia/drivers/media/platform/tegra/camera/camera_common.c
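as a rough sketch of the layering, a sensor driver plugs camera_common helpers into its standard V4L2 subdev ops, so calls coming down from the generic v4l2-core land in camera_common.c, which then talks to the Tegra VI/CSI channel code. mysensor_* is a placeholder name, and camera_common_s_power is assumed from the r32.x headers, please double-check the helper names in media/camera_common.h for your release.

/*
 * layering sketch (placeholder driver name): the generic v4l2-core
 * invokes the subdev ops, and the ops below hand the work to NVIDIA's
 * camera_common helpers. camera_common_s_power is assumed to be one of
 * the helpers exported by camera_common.c in the r32.x tree.
 */
#include <media/camera_common.h>

static const struct v4l2_subdev_core_ops mysensor_core_ops = {
        /* generic power-sequencing helper provided by camera_common */
        .s_power = camera_common_s_power,
};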

suggest you also access [Sensor Software Driver Programming Guide], and check V4L2 Kernel Driver (Version 1.0) or Version 2.0 for descriptions and the related reference drivers.
thanks

Hi JerryChang,

Is my understanding below correct?

The NVIDIA extension code on top of the v4l2 framework is in
$l4t-32.3.1/Linux_for_Tegra/source/public/kernel/nvidia/drivers/media/platform/tegra/
The v4l2 framework is in
$l4t-r32.2/public_sources/kernel_src/kernel/kernel-4.9/drivers/media/v4l2-core/

I’m not sure about the relationship between them, and I use the term “NVIDIA extension code on top of the v4l2 framework”; if it is wrong, please correct me. I want to know their call relationship, if they have one.

If we need to bring a new sensor up on Tegra, we just follow the “Sensor Software Driver Programming Guide” step by step to add it to the Tegra platform, and the NVIDIA extension code and the v4l2 framework will take care of the rest.

Thanks for your patient explanation.

that’s correct.
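to picture the call relationship: the generic v4l2 framework under kernel-4.9/drivers/media/v4l2-core owns /dev/video* and dispatches the VIDIOC_* ioctls, while the NVIDIA code under drivers/media/platform/tegra registers the video device and supplies the ops those ioctls land in. below is a rough, generic sketch of that registration with placeholder names, it is not the actual NVIDIA channel driver.

/*
 * generic sketch of the call relationship (placeholder names): the
 * platform code registers a /dev/videoN node together with its ops
 * tables, and the generic v4l2-core (v4l2-dev.c / v4l2-ioctl.c)
 * routes userspace ioctls into those ops.
 */
#include <linux/module.h>
#include <linux/string.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-ioctl.h>

static int sketch_querycap(struct file *file, void *fh,
                           struct v4l2_capability *cap)
{
        strlcpy(cap->driver, "vi-sketch", sizeof(cap->driver));
        strlcpy(cap->card, "vi-sketch", sizeof(cap->card));
        cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
        cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
        return 0;
}

/* the v4l2-core calls into this table when userspace issues VIDIOC_* */
static const struct v4l2_ioctl_ops sketch_ioctl_ops = {
        .vidioc_querycap = sketch_querycap,
        /* VIDIOC_S_FMT, VIDIOC_REQBUFS, VIDIOC_STREAMON, ... go here */
};

/* file ops are mostly thin wrappers provided by the framework itself */
static const struct v4l2_file_operations sketch_fops = {
        .owner          = THIS_MODULE,
        .open           = v4l2_fh_open,
        .release        = v4l2_fh_release,
        .unlocked_ioctl = video_ioctl2,
};

static struct v4l2_device sketch_v4l2_dev;
static struct video_device sketch_vdev = {
        .name      = "vi-sketch",
        .fops      = &sketch_fops,
        .ioctl_ops = &sketch_ioctl_ops,
        .release   = video_device_release_empty,
        .v4l2_dev  = &sketch_v4l2_dev,
};

static int __init sketch_init(void)
{
        int ret;

        strlcpy(sketch_v4l2_dev.name, "vi-sketch",
                sizeof(sketch_v4l2_dev.name));
        ret = v4l2_device_register(NULL, &sketch_v4l2_dev);
        if (ret)
                return ret;
        /* registration is what creates /dev/videoN and wires its
         * ioctls back into the ops tables above */
        return video_register_device(&sketch_vdev, VFL_TYPE_GRABBER, -1);
}

static void __exit sketch_exit(void)
{
        video_unregister_device(&sketch_vdev);
        v4l2_device_unregister(&sketch_v4l2_dev);
}
module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL v2");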

Hi JerryChang,

After checking the Tegra X1 TRM, I know CSI can support up to 12 MIPI lanes for data/pixel streaming.
And it can support up to six 2-lane camera modules, since there are 6 CSI pixel parsers in CSI.
When I use camera capture, what’s the difference between input and output for CSI?
What does CSI do to the streaming data before it reaches VI?

Thanks.

hello arknights,

you might refer to the Port Index section, which specifies the connection diagrams.
thanks