Some questions about multimedia

Hi,

I’m a beginner with Jetson and I want to learn about multimedia. Here are my questions.

  1. Is gstreamer-1.0 included in JetPack?

  2. What is the relationship among the components libargus, nvarguscamerasrc, and v4l2src, and how do they differ in use?

  3. In the DS4.0 config file, the type of source 1 is Camera (V4L2). Is this used for cameras that do not go through the Tegra ISP (CSI interface)?

Thanks.

Hi,

Yes. SDK Manager installs the third-party GStreamer packages as well as the NVIDIA-developed plugins.

libargus is part of the low-level tegra_multimedia_api; the samples are in /usr/src/tegra_multimedia_api. If you use DS4.0, you may stick with GStreamer. nvarguscamerasrc is for Bayer sensors and leverages the ISP engine. v4l2src is for YUV sensors and USB cameras.

Yes. It is v4l2src.
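
As a quick check that the GStreamer packages and the NVIDIA plugins mentioned above are installed, you can query them with gst-inspect-1.0 (a standard GStreamer tool); the exact plugin set depends on your JetPack version:

$ gst-inspect-1.0 --version
$ gst-inspect-1.0 nvarguscamerasrc
$ gst-inspect-1.0 v4l2src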

Hi,

I found the camera architecture stack diagram:
[image: Camera Architecture Stack]
I still have two questions:

1. Why doesn’t the V4L2 Application block link to v4l2src? What is the relationship between them?

2. Why does the GStreamer Application block link to both v4l2src and nvarguscamerasrc? What is the calling flow between them?

Thanks.

Hi,

A V4L2 Application is like the sample below:

tegra_multimedia_api/samples/v4l2cuda

It runs frame capture directly through the V4L2 interfaces.
v4l2src is a GStreamer plugin implemented on top of those same V4L2 interfaces.
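
As a concrete example of a V4L2 application, v4l2-ctl from v4l-utils captures frames through those same interfaces directly from the command line (the device node and frame count below are only examples):

$ v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=1 --stream-to=frame.raw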

The diagram shows that a GStreamer application can use either v4l2src or nvarguscamerasrc; it does not link both at once.

For example, you can run

$ gst-launch-1.0 nvarguscamerasrc ! nvoverlaysink

or

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=848,height=480,framerate=30/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink

Hi,

That is clear to me now.
I have two more questions:

  1. In the camera architecture stack, what is Camera Core? In other words, which files does it contain?

  2. Is the direct output format from a MIPI CSI camera YUV?

Thanks.

Hi,

It is a driver layer below libnvargus.so.

We support Bayer sensors and YUV sensors. For Bayer sensors, the stream goes through the ISP and the output format is YUV420. For YUV sensors, the output is YUV422, such as UYVY or YUYV.
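
If you want to check which case applies to your device, you can list the formats the capture node advertises with v4l2-ctl (the device node below is only an example); a Bayer sensor typically reports raw Bayer formats such as RG10, while a YUV sensor reports UYVY or YUYV:

$ v4l2-ctl -d /dev/video0 --list-formats-ext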

Hi,

Our product uses an external camera directly connected to the TX2 SoC. It is a MIPI CSI-2 camera going through the ISP.

  1. Does our camera sensor go through the Tegra drivers or the V4L2 device driver?
    I am confused because these are two separate paths in the camera architecture stack diagram.

  2. What is the video input format for Bayer sensors before the stream goes through the ISP?

  3. Where can I find documentation or other information about the V4L Mediacontroller Framework?

Thanks.

hello arknights,

Q1.
The camera software stack supports two paths for accessing the camera sensor:

  1. you can use the standard V4L2 controls, bypassing the [Camera Core] block, for raw capture.
  2. you can use nvarguscamerasrc to go through the ISP for YUV results.
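
For path 1, you can list the standard V4L2 controls that the sensor driver exposes with v4l2-ctl (the device node below is only an example; the available controls depend on the driver):

$ v4l2-ctl -d /dev/video0 --list-ctrls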

Q2.
The ISP supports several Bayer format types; we suggest you consult the Tegra X2 (Parker Series SoC) Technical Reference Manual.
Check the [VIDEO INPUT (VI)] chapter and refer to the [Input Data Formats] section for the supported formats.

Q3.
You might refer to the Sensor Software Driver Programming Guide.
Please also check the kernel sources below for details, for example:
$l4t-r32.2/public_sources/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c
$l4t-r32.2/public_sources/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi4_fops.c
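
To inspect the V4L media-controller topology these drivers register on a running system, you can dump it with media-ctl from v4l-utils (the media device node below is an assumption and may differ on your board):

$ media-ctl -p -d /dev/media0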

Hi,
Sorry for the late reply.

I have referred to the Tegra X2 (Parker Series SoC) Technical Reference Manual, and I now understand CSI and VI.

  1. But I can’t find details about the ISP. Where can I find details about what the ISP4 does?

  2. Is the streaming path that goes through the ISP for YUV results:
    NVCSI (MIPI) → VI4 → ISP4 → Tegra Drivers → Camera Core → libargus → nvarguscamerasrc → GStreamer Application?

  3. Is the streaming path that bypasses the [Camera Core] block for raw capture:
    NVCSI (MIPI) → VI4 → EMC → V4L2 Device Driver → V4L Mediacontroller Framework → v4l2src → GStreamer Application?

Thanks.

hello arknights,

sorry, we do not publish the details of the ISP processing.
I’m not quite sure what the EMC you mentioned here is;
if it is related to memory controls, then both of your streaming pipelines are correct.
thanks

Hi,

Okay, I understand.
If permitted, can you tell me roughly what the ISP does, or what its main function is?
The EMC is related to memory controls; I don’t know why it is called EMC. I just found it at the bottom of [27.5 Architectural Overview], [Figure 187: VI4 Top-Level Block Diagram], in the Tegra X2 (Parker Series SoC) Technical Reference Manual.

Thanks.

Hi,

When I use a third party’s CSI camera, the VI, ISP, and driver are the third party’s.
When I use the “Direct V4L2 Interface”, can I get a YUV result through the ISP?
If yes, what should I do, or which docs should I read for this path?
If no, how can I get a YUV result?

Thanks.

hello arknights,

you’re still working with the JetPack release drivers (i.e., CSI, VI) no matter the sensor type.
please also check the Camera Architecture Stack; you will also need ISP support if [Camera Core] is involved.

we suggest you also access the Tutorials page, expand the [Developer Tools] section, and check the [Develop a V4L2 Sensor Driver] training video for an overview of the camera software architecture.

to clarify,
if you’re working with a CSI Bayer sensor, you’ll need a de-Bayer process.

  1. if you’re going through the “Direct V4L2 Interface”, the VI driver handles the de-Bayer process. You can also take a raw dump with the standard V4L2 controls. Here is a sample command for your reference:
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw
  2. if you’re going through [Camera Core], then the ISP handles the de-Bayer process. The hardware encode/decode components are also needed in the same pipeline for the ISP output.
    for example,
$ gst-launch-1.0 nvarguscamerasrc num-buffers=1 ! 'video/x-raw(memory:NVMM), width=2592, height=1944' ! nvjpegenc ! filesink location=sample.jpg

Hi,

Okay, I will watch the training video.

Thanks!

Hi,

  1. In the hardware layer of the Camera Architecture Stack, I know the video input is ‘VI’, so what are the functions or roles of ‘Aperture’ and ‘Sensor’?

  2. I see that ‘VI’ can connect to both ‘V4L2 Device Driver’ and ‘Tegra Driver’. Does this mean a different path is chosen according to the sensor type?

  3. After watching the [Develop a V4L2 Sensor Driver] training video, I want to know which kind of driver the one I develop belongs to (‘V4L2 Device Driver’ or ‘Tegra Driver’).
    Also, what is the difference between ‘V4L2 Device Driver’ and ‘Tegra Driver’? I think they have the same function when I don’t want to bypass ‘Camera Core’.

Thanks.

hello arknights,

  1. the aperture determines the amount of light reaching the sensor; the sensor reads out the frames and transmits MIPI signaling as the video input.
    you may contact the sensor vendor to ask whether the sensor module can adjust the aperture value.

  2. you may also access the L4T Sources via the Jetson Download Center.
    please check the two kernel sources below to understand the “Tegra Driver”:
    $l4t-r32.2/public_sources/kernel/nvidia/drivers/media/platform/tegra/camera/vi/channel.c
    $l4t-r32.2/public_sources/kernel/nvidia/drivers/media/platform/tegra/camera/vi/vi4_fops.c

you should consider the “V4L2 Device Driver” as the sensor driver;
please also check below for reference sensor drivers:
$l4t-r32.2/public_sources/kernel/nvidia/drivers/media/i2c/*

  3. following the above, the “Tegra Driver” is a framework for handling the “V4L2 Device Driver”.
    according to the Camera Architecture Stack, there are two approaches to access the camera sensor:
    (a) going through [Camera Core], or (b) accessing the V4L media controller framework directly.
    (a) sensor streaming going through [Camera Core] also enables ISP support. You can use nvarguscamerasrc to test the pipeline if you’re working with Bayer sensors.
    (b) please launch with v4l2src, and the VI driver handles the de-Bayer process.
  4. please access the Tutorials page and expand the [Developer Tools] section; you may check the [V4L2 Sensor Driver Development Tutorial] training video for a deep dive into the sensor driver implementation.

thanks

Hi,
Thanks, Jerry.
I now understand the streaming architecture and the sensor driver development process.

Hi JerryChang,

I want to know whether the sensor we chose can work with the NVIDIA ISP.
Is it possible?

Thanks.

hello arknights,

please contact the Jetson Preferred Partners for camera solutions.
thanks

Hi,

I will contact the Jetson Preferred Partners.

Thanks.